[SOURCE: https://en.wikipedia.org/wiki/Grey_alien#cite_ref-AliensAndHybrids_4-2] | [TOKENS: 2835]
Grey alien Grey aliens, also referred to as Zeta Reticulans, Roswell Greys, or simply, Greys,[a] are purported extraterrestrial beings. They are frequently featured in claims of close encounter and alien abduction. Greys are typically described as having small, humanoid bodies, smooth, grey skin, disproportionately large, hairless heads, and large, black, almond-shaped eyes. The 1961 Barney and Betty Hill abduction claim was key to the popularization of Grey aliens. Precursor figures have been described in science fiction, and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax. The Grey alien is cited as an archetypal image of an intelligent non-human creature and of extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration. Description Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as being shorter and jointed differently from humans', with limbs proportioned differently from a human's. Greys are depicted as having unusually large heads in proportion to their bodies, and as having no hair, no noticeable outer ears or noses, and small orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans. The association between Grey aliens and Zeta Reticuli originated with the interpretation of a map drawn by Betty Hill by a school-teacher named Marjorie Fish sometime in 1969. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars. Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were Zeta Reticuli, a binary star system. History In literature, descriptions of beings similar to Grey aliens predate claims of supposed encounters with them. In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine. Both share many characteristics with later perceptions of Greys. As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley claimed to have contacted Lam through a process called the "Amalantrah Working", which he believed allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence" utterly beyond human comprehension.
In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. One passage reads: "...the creatures did not resemble any race of humans. They were short, shorter than the average Japanese, and their heads were big and bald, with strong, square foreheads, and very small noses and mouths, and weak chins. What was most extraordinary about them were the eyes—large, dark, gleaming, with a sharp gaze. They wore clothes made of soft grey fabric, and their limbs seemed to be similar to those of humans." This description would become the template upon which the popular image of grey aliens is based. The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961, humanoid alien beings with greyish skin had abducted them and taken them to a flying saucer. In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote: Wraparound eyes are an extreme rarity in science fiction films. I know of only one instance. They appeared on the alien of an episode of an old TV series The Outer Limits entitled "The Bellero Shield." A person familiar with Barney's sketch in "The Interrupted Journey" and the sketch done in collaboration with the artist David Baker will find a "frisson" of "déjà vu" creeping up his spine when seeing this episode. The resemblance is much abetted by an absence of ears, hair, and nose on both aliens. Could it be by chance? Consider this: Barney first described and drew the wraparound eyes during the hypnosis session dated 22 February 1964. "The Bellero Shield" was first broadcast on 10 February 1964. Only twelve days separate the two instances. If the identification is admitted, the commonness of wraparound eyes in the abduction literature falls to cultural forces. — Martin Kottmeyer, Entirely Unpredisposed: The Cultural Background of UFO Reports Carl Sagan echoed Kottmeyer's suspicions in his 1995 book The Demon-Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration. After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries. During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings.
These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features. In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaptation that starred Christopher Walken as Strieber. In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !. Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later became better known in French as les Petits-Gris. Guieu later wrote two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper: the two-part series "E.B.E." (for "Extraterrestrial Biological Entity"), comprising E.B.E.: Alerte rouge (1990) and E.B.E.: L'entité noire d'Andamooka (1991).[citation needed] Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories. In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage. Analysis Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom. These reports include two distinct groups of Greys that differ in height. Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress. The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events. The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions. Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence: "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence." In 2005, Frederick V. Malmstrom, writing in Skeptic magazine (Volume 11, issue 4), presented his idea that Greys are actually residual memories of early childhood development.
Malmstrom reconstructs the face of a Grey through the transformation of a mother's face, based on our best understanding of early-childhood sensation and perception. Malmstrom's study thus offers an alternative explanation for the existence of Greys, for the intense instinctive response many people experience when presented with an image of a Grey, and for the way regression hypnosis and recovered-memory therapy "recover" memories of alien abduction experiences, along with their common themes. According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved on a world with different environmental and ecological conditions from Earth, is too physiologically similar to a human to be credible as a representation of an alien. The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide alternative explanations for the humanoid anatomy and behavior of these alleged beings. In popular culture Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth.[citation needed] South Park refers to them as "visitors". During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A well-known example is the Fox television series The X-Files, which first aired in 1993. It combined the quest to find proof of the existence of Grey-like extraterrestrials with a number of UFO conspiracy theory subplots to form its primary story arc. Other notable examples include the XCOM video game franchise (where they are called "Sectoids"); Dark Skies, first broadcast in 1996, which expanded upon the MJ-12 conspiracy;[citation needed] and American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories. The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media, so that if humanity came into contact with Paul's species, there would be no immediate shock at their appearance. Child abduction by Greys is a key plot point in the 2013 film Dark Skies. Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien. The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Role-playing_game_terms] | [TOKENS: 132]
Role-playing game terms Role-playing games (RPGs) have developed specialized terminology, covering both terms used within RPGs to describe in-game concepts and terms used to describe RPGs themselves. Role-playing games also have specialized slang and jargon associated with them. Besides the terms listed here, there are numerous terms used in the context of specific, individual RPGs such as Dungeons & Dragons (D&D), Pathfinder, Fate, and Vampire: The Masquerade. For a list of RPGs, see List of role-playing games.
========================================
[SOURCE: https://en.wikipedia.org/wiki/TempleOS] | [TOKENS: 1142]
TempleOS TempleOS (formerly J Operating System, LoseThos, and SparrowOS) is a biblical-themed lightweight operating system (OS) designed to be the Third Temple from the Hebrew Bible. It was created by American computer programmer Terry A. Davis, who developed it alone over the course of a decade after a series of manic episodes that he later described as a revelation from God. TempleOS can be considered an example of coding as an art form; the nature of Davis's psychological instability and its influence over the project have drawn comparisons to outsider art (see also Creativity and mental health). The system was characterized as a modern x86-64 Commodore 64, using an interface similar to a mixture of DOS and Turbo C. Davis proclaimed that the system's features, such as its 640×480 resolution, 16-color display, and single-voice audio, were designed according to explicit instructions from God. It was programmed with a custom JIT variant of C (named HolyC) in place of BASIC, and included an original flight simulator, compiler, and kernel. First released in 2005 as J Operating System, TempleOS was renamed in 2013 and was last updated in 2017. Background Terry A. Davis began developing TempleOS circa 2003. One of its early names was the "J Operating System", before Davis renamed it "LoseThos", a reference to a scene from the 1986 film Platoon. In 2008, Davis wrote that LoseThos was "primarily for making video games. It has no networking or Internet support. As far as I'm concerned, that would be reinventing the wheel". Another name he used was "SparrowOS" before settling on "TempleOS". System overview TempleOS is a 64-bit, non-preemptive multitasking, multi-core, public domain, open source, ring-0-only, single address space, non-networked, PC operating system for recreational programming. The OS uses 8-bit ASCII text and includes built-in 2D and 3D graphics libraries, running at 640×480 VGA resolution with 16 colors. It includes keyboard and mouse support. It supports the ISO 9660, FAT32, and RedSea file systems (the last created by Davis), with support for file compression. According to Davis, many of these specifications—such as the 640×480 resolution, 16-color display and single-voice audio—were directly requested of him by God. He explained that the limited resolution was to make it easier for children to draw illustrations for God. The operating system includes an original flight simulator, compiler, and kernel. One bundled program, "After Egypt", is a game in which the player travels to a burning bush to use a "high-speed stopwatch". The stopwatch is meant to act as an oracle that generates pseudorandom text, something Davis likened to a Ouija board and glossolalia. An example of generated text follows: among consigned penally result perverseness checked stated held sensation reasonings skies adversity Dakota lip Suffer approached enact displacing feast Canst pearl doing alms comprehendeth nought TempleOS was written in a programming language developed by Davis called "HolyC". Davis ultimately wrote over 100,000 lines of code for the OS. HolyC HolyC (formerly C+), possibly a pun on Holy See, is a middle ground between the C and C++ programming languages with some unique differences, designed by Terry A. Davis specifically for TempleOS. It functions as both a general-purpose language for application development and a scripting language for automating tasks within TempleOS. HolyC is the just-in-time compiled language of TempleOS.
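To make that contrast concrete, the fragment below compares the two models. The standard C half is ordinary, compilable C; the HolyC lines inside the comment are a sketch based on publicly documented TempleOS behaviour (the Print routine, self-printing bare string literals, and the U0/I64/F64 type names), not code taken from this article.

    /* Standard C: an include and an entry point are required, and the
       program is compiled ahead of time into a binary before it runs. */
    #include <stdio.h>

    int main(void) {
        printf("2+2=%d\n", 2 + 2);
        return 0;
    }

    /* In HolyC, as described above, the JIT compiler doubles as the shell,
       so a statement typed at the prompt runs immediately, with no main()
       and no includes (assumed from public TempleOS documentation):

           Print("2+2=%d\n", 2 + 2);
           "A bare string literal prints itself.\n";

       HolyC also uses its own type names, such as U0, I64, and F64, in
       place of C's void, long, and double. */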
HolyC is an imperative, statically typed programming language, although it uses some object-oriented programming paradigms. Critical reception TempleOS received mostly "sympathetic" reviews. Tech journalist David Cassel opined that "programming websites tried to find the necessary patience and understanding to accommodate Davis". TechRepublic and OSNews published positive articles on Davis's work, even though Davis was banned from the latter for hostile comments targeting its readers and staff. In his review for TechRepublic, James Sanders concluded that "TempleOS is a testament to the dedication and passion of one man displaying his technological prowess. It doesn't need to be anything more." OSNews editor Kroc Camen wrote that the OS "shows that computing can still be a hobby; why is everybody so serious these days? If I want to code an OS that uses interpretive dance as the input method, I should be allowed to do so, companies like Apple be damned." In 2017, the OS was shown as part of an outsider art exhibition in Bourgogne, France. Legacy After Davis' death, OSNews editor Thom Holwerda wrote: "Davis was clearly a gifted programmer – writing an entire operating system is no small feat – and it was sad to see him affected by his mental illness". One fan described Davis as a "programming legend", while another, a computer engineer, compared the development of TempleOS to a one-man-built skyscraper. He added that it "actually boggles my mind that one man wrote all that" and that it was "hard for a lay person to understand what a phenomenal achievement" it is to write an entire operating system alone.
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Eswatini] | [TOKENS: 1019]
History of the Jews in Eswatini This is the history of the Jews of Eswatini, formerly Swaziland. Modern times Figures from the 2017 official Eswatini census suggested that there were an estimated 163 Jews in the country. Before and during the Holocaust, Swaziland, as Eswatini was then called, welcomed a large group of German Jewish refugees who lived there for a few years. In 2002, Swaziland's prime minister, Barnabas Dlamini, said the country appreciates the contribution of its Jewish community: "The Jewish community is small, numbering in the tens rather than hundreds, but over the years it has had quite an influence on the development of our country, the names Kirsh and Goldblatt will be remembered long after their time", referring to two well-known Jewish Swazi entrepreneurs. Kalman Goldblatt, who later changed his name to Kal Grant, came from Lithuania and built his wealth through several trading stores and by developing the first townships in the country. As of 2019, the Jewish community was estimated at about 50 to 60 people. Eswatini/Swazi Jews have played an important role in the business and legal sectors of the economy. The community consists of Israelis, South African Jews, and descendants of World War II refugees. Some Holocaust survivors settled in Swaziland. Jews have experienced hardly any anti-Semitism. A notable Jew was Stanley Sapire, Chief Justice of the Swazi Court of Appeal. The Jewish community, headed by Geoff Ramokgadi in 2024, is affiliated with the African Jewish Congress, which is based in South Africa and advocates on behalf of the small and scattered communities of sub-Saharan Africa. It works to ensure that the Jewish community of Eswatini has international representation. In 2024 Prime Minister Russell Mmiso Dlamini invited Jewish investors to come and invest in Eswatini. He extended this invitation during a meeting with the American Jewish Committee in New York. Ties with Israel Eswatini has had uninterrupted official diplomatic relations with Israel since 1968, soon after it gained full independence from Great Britain. In 1978 Premier Maphevu Dhlamini, who was also the foreign minister and army commander, paid a state visit to Israel, accompanied by the Ministers of Finance and Justice and other top officials; he was hosted by Premier Menachem Begin, Foreign Minister Moshe Dayan, and Finance Minister Simcha Ehrlich. In 1979 Premier Maphevu Dhlamini and Premier Menachem Begin of Israel signed a treaty of cooperation providing for stepped-up Israeli technological assistance to Eswatini. In 2012 Israeli and Jewish leaders were received by the King of Eswatini when the Israeli Ambassador Dov Segev-Steinberg presented his credentials to King Mswati III at his official palace. The ambassador was accompanied by Rabbi Moshe Silberhaft, spiritual leader of the African Jewish Congress. Rabbi Silberhaft later inspected the two Jewish cemeteries in Eswatini. In 2017 Israeli Prime Minister Benjamin Netanyahu met in Jerusalem with his Swazi counterpart, Prime Minister Dr Barnabas Sibusiso Dlamini (1942–2018), who was accompanied by his Agriculture Minister, Moses Vilakati. Netanyahu expressed his appreciation for Swazi King Mswati III's warm regards and ongoing admiration for Israel.
In 2024 there was speculation that, in response to neighboring South Africa's deteriorating relations with Israel, Israel would re-open a full embassy in Mbabane, the capital of Eswatini; the embassy had been closed since 1994, with representation since handled from South Africa. Notable people Rabbi Natan Gamedze (born 1963 in Swaziland, renamed Eswatini in 2018) is a Haredi rabbi and lecturer. Born into the royal lineage of the Gamedze clan of the Kingdom of Swaziland, he converted to Judaism, received rabbinic ordination, and now lectures to Jewish audiences all over the world, telling his personal story of how an African prince became a Black Haredi Jewish rabbi. Nathan Kirsh (born 6 January 1932) is a South African/Swazi/Eswatini billionaire businessman. He heads the Kirsh Group, which holds a majority stake in New York cash and carry operation Jetro Holdings, owner of Restaurant Depot and Jetro Cash & Carry. The Group also holds equity and investments in Australia, Swaziland (now Eswatini), the UK, the US, and Israel. Bloomberg estimated his wealth at $6.09 billion in March 2019, ranking him at #267 on its "Billionaires Index". He was also listed on the UK's Sunday Times Rich List 2018, and was named the wealthiest person in Eswatini by Forbes.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEPerry199547-72] | [TOKENS: 10728]
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, and in Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived both from his admiration of the Famicom and from his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted this line of research, but decided to develop what it had worked on with Nintendo and Sega into a console of its own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, there remained opposition from a majority of those present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed under Sony's name, in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation since Namco rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of software for the machine should further hardware revisions be made. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of Sony Interactive Entertainment, summoned SCEA president Steve Race to the conference stage, who said "$299" and left to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because a third company had registered the trademark; the market was initially taken by the officially distributed Sega Saturn, but as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, another such market, the Sega Saturn was the most popular 32-bit console, but after it left the market the PlayStation's user base grew to some 300,000 by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for missing letters, stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "You are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of use. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate a total of 4,000 sprites and 180,000 textured polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors at the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications using a C compiler.
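As a rough illustration of the arithmetic the GTE accelerates, here is a minimal C sketch of a rotate-translate-perspective step on one vertex. This is not Sony's SDK: the 4.12 fixed-point format mirrors published descriptions of the GTE, and all names and constants here are invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define FP_SHIFT 12  /* 4.12 fixed point, mirroring the GTE's matrix format */

    typedef struct { int16_t x, y, z; } SVec;  /* 16-bit vertex */
    typedef struct { int16_t m[3][3]; } Mat;   /* 4.12 fixed-point rotation matrix */

    /* Rotate and translate one vertex, then project it onto the screen
       plane with an integer perspective divide. */
    static void rot_trans_pers(const Mat *r, const SVec *v,
                               int32_t tx, int32_t ty, int32_t tz,
                               int32_t screen_dist, int32_t *sx, int32_t *sy)
    {
        int32_t x = ((r->m[0][0]*v->x + r->m[0][1]*v->y + r->m[0][2]*v->z) >> FP_SHIFT) + tx;
        int32_t y = ((r->m[1][0]*v->x + r->m[1][1]*v->y + r->m[1][2]*v->z) >> FP_SHIFT) + ty;
        int32_t z = ((r->m[2][0]*v->x + r->m[2][1]*v->y + r->m[2][2]*v->z) >> FP_SHIFT) + tz;
        if (z < 1) z = 1;             /* clamp to avoid dividing by zero */
        *sx = (x * screen_dist) / z;  /* perspective divide */
        *sy = (y * screen_dist) / z;
    }

    int main(void) {
        int16_t one = 1 << FP_SHIFT;  /* 1.0 in 4.12 */
        Mat identity = {{{one, 0, 0}, {0, one, 0}, {0, 0, one}}};
        SVec v = {100, 50, 0};
        int32_t sx, sy;
        /* Object 1000 units away, projection plane at distance 320. */
        rot_trans_pers(&identity, &v, 0, 0, 1000, 320, &sx, &sy);
        printf("projected: (%d, %d)\n", (int)sx, (int)sy);  /* prints (32, 16) */
        return 0;
    }

On the console this per-vertex work runs in a handful of cycles on the coprocessor, leaving the R3000 core free for game logic; the sketch shows only the shape of the computation, not its performance.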
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than marking its buttons with the traditional letters or numbers, the PlayStation controller established a trademark set of symbols which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite the device receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, with the video game division coming to contribute 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to deliver a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games to the user at about 40% lower cost than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
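The per-unit economics behind the 40% figure above can be made concrete with a back-of-the-envelope sketch; the dollar figures below are illustrative assumptions, not documented Sony pricing. If a cartridge game retails at price p_cart with manufacturing cost c_cart, a CD release with cost c_CD can preserve the same per-unit net revenue m at a much lower shelf price:

\[
m = p_{\text{cart}} - c_{\text{cart}} = \$70 - \$30 = \$40,
\qquad
p_{\text{CD}} = m + c_{\text{CD}} = \$40 + \$2 = \$42,
\]
\[
1 - \frac{p_{\text{CD}}}{p_{\text{cart}}} = 1 - \frac{42}{70} = 0.40 .
\]

Under these assumed costs, the disc version sells for 40% less while returning the publisher the same net revenue per copy.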
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hope_(programming_language)] | [TOKENS: 340]
Hope (programming language) Hope is a functional programming language developed in the 1970s at the University of Edinburgh. It predates Miranda and Haskell and is contemporaneous with ML, which was also developed at the university. Hope was derived from NPL, a simple functional language developed by Rod Burstall and John Darlington in their work on program transformation. NPL and Hope are notable for being the first languages with call-by-pattern evaluation and algebraic data types. Hope was named for Sir Thomas Hope (c. 1681–1771), a Scottish agricultural reformer, after whom Hope Park Square in Edinburgh, the location of the artificial intelligence department at the time of the development of Hope, was also named. The first implementation of Hope used strict evaluation, but there have since been lazy evaluation versions and strict versions with lazy constructors. A successor language, Hope+, developed jointly between Imperial College and International Computers Limited, added annotations to dictate either strict or lazy evaluation. Language details A factorial program in Hope is:

dec fact : num -> num;
--- fact 0 <= 1;
--- fact(n+1) <= (n+1)*fact(n);

Changing the order of the clauses does not change the meaning of the program, because Hope's pattern matching always favors more specific patterns over less specific ones. Explicit declarations of data types in Hope are required; there is no type inference algorithm. Hope provides two built-in data structures: tuples and lists. Implementations Roger Bailey's Hope tutorial in the August 1985 issue of Byte references an interpreter for IBM PC DOS 2.0. British Telecom embarked on a project with Imperial College London to implement a version of Hope. The first release was coded by Thanos Vassilakis in 1986. Further releases were coded by Mark Tasng of British Telecom.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hydrothermal_vent] | [TOKENS: 11347]
Hydrothermal vent Hydrothermal vents are fissures on the seabed from which geothermally heated water discharges. They are commonly found near volcanically active places, areas where tectonic plates are moving apart at mid-ocean ridges, ocean basins, and hotspots. The dispersal of hydrothermal fluids throughout the global ocean at active vent sites creates hydrothermal plumes. Hydrothermal deposits are rocks and mineral ore deposits formed by the action of hydrothermal vents. Hydrothermal vents exist because the Earth is both geologically active and has large amounts of water on its surface and within its crust. Under the sea, they may form features called black smokers or white smokers, which deliver a wide range of elements to the world's oceans, thus contributing to global marine biogeochemistry. Relative to the majority of the deep sea, the areas around hydrothermal vents are biologically more productive, often hosting complex communities fueled by the chemicals dissolved in the vent fluids. Chemosynthetic bacteria and archaea found around hydrothermal vents form the base of the food chain, supporting diverse organisms including giant tube worms, clams, limpets, and shrimp. Active hydrothermal vents are thought to exist on Jupiter's moon Europa and Saturn's moon Enceladus, and it is speculated that ancient hydrothermal vents once existed on Mars. Hydrothermal vents have been hypothesized to have been a significant factor in starting abiogenesis and in the survival of primitive life. The conditions of these vents have been shown to support the synthesis of molecules important to life. Some evidence suggests that certain vents, such as alkaline hydrothermal vents or those containing supercritical CO2, are more conducive to the formation of these organic molecules. However, the origin of life is a widely debated topic, and there are many conflicting viewpoints. Physical properties Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed. The water that issues from seafloor hydrothermal vents consists mostly of seawater drawn into the hydrothermal system close to the volcanic edifice through faults and porous sediments or volcanic strata, plus some magmatic water released by the upwelling magma. On land, the majority of water circulated within fumarole and geyser systems is meteoric water and ground water that has percolated down into the hydrothermal system from the surface, but it also commonly contains some portion of metamorphic water, magmatic water, and sedimentary formational brine released by the magma. The proportion of each varies from location to location. In contrast to the approximately 2 °C (36 °F) ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 °C (140 °F) up to as high as 464 °C (867 °F). Due to the high hydrostatic pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. The critical point of (pure) water is 375 °C (707 °F) at a pressure of 218 atmospheres. However, introducing salinity into the fluid raises the critical point to higher temperatures and pressures. The critical point of seawater (3.2 wt. % NaCl) is 407 °C (765 °F) and 298.5 bars, corresponding to a depth of ~2,960 m (9,710 ft) below sea level. Accordingly, if a hydrothermal fluid with a salinity of 3.2 wt.
% NaCl vents above 407 °C (765 °F) and 298.5 bars, it is supercritical. Furthermore, the salinity of vent fluids has been shown to vary widely due to phase separation in the crust. The critical point for lower-salinity fluids is at lower temperature and pressure conditions than that for seawater, but higher than that for pure water. For example, a vent fluid with a 2.24 wt. % NaCl salinity has its critical point at 400 °C (752 °F) and 280.5 bars. Thus, water emerging from the hottest parts of some hydrothermal vents can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid. Examples of supercritical venting are found at several sites. Sister Peak (Comfortless Cove Hydrothermal Field, 4°48′S 12°22′W, depth 2,996 m or 9,829 ft) vents low-salinity, phase-separated, vapor-type fluids. Sustained venting was not found to be supercritical, but a brief injection of 464 °C (867 °F) fluid was well above supercritical conditions. A nearby site, Turtle Pits, was found to vent low-salinity fluid at 407 °C (765 °F), which is above the critical point of the fluid at that salinity. A vent site in the Cayman Trough named Beebe, which is the world's deepest known hydrothermal site at ~5,000 m (16,000 ft) below sea level, has shown sustained supercritical venting at 401 °C (754 °F) and 2.3 wt% NaCl. Although supercritical conditions have been observed at several sites, it is not yet known what significance, if any, supercritical venting has in terms of hydrothermal circulation, mineral deposit formation, geochemical fluxes or biological activity. The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growth on the order of 30 cm (1 ft) per day has been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron (see iron cycle). Black smokers and white smokers Some hydrothermal vents form roughly cylindrical chimney structures. These form from minerals that are dissolved in the vent fluid. When the superheated water contacts the near-freezing sea water, the minerals precipitate out to form particles which add to the height of the stacks. Some of these chimney structures can reach heights of 60 m (200 ft). An example of such a towering vent was "Godzilla", a structure on the Pacific Ocean deep seafloor near Oregon that rose to 40 m (130 ft) before it fell over in 1996. A black smoker or deep-sea vent is a type of hydrothermal vent found on the seabed, typically in the bathyal zone (with greatest frequency at depths from 2,500 to 3,000 m or 8,200 to 9,800 ft), but also at lesser depths as well as deeper, in the abyssal zone. They appear as black, chimney-like structures that emit a cloud of black material. Black smokers typically emit particles with high levels of sulfur-bearing minerals, or sulfides. Black smokers are formed in fields hundreds of meters wide when superheated water from below Earth's crust comes through the ocean floor (water may attain temperatures above 400 °C or 752 °F). This water is rich in dissolved minerals from the crust, most notably sulfides. When it comes in contact with cold ocean water, many minerals precipitate, forming a black, chimney-like structure around each vent. Chimneys thicken due to heat conduction encouraging crystallization.
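The correspondence quoted above between 298.5 bars and a depth of roughly 2,960 m follows from simple hydrostatics; as a sketch, assuming a mean seawater density of about 1,030 kg/m³:

\[
h \approx \frac{P}{\rho g}
= \frac{2.985 \times 10^{7}\,\text{Pa}}{(1030\,\text{kg/m}^3)(9.81\,\text{m/s}^2)}
\approx 2.95 \times 10^{3}\,\text{m},
\]

in good agreement with the ~2,960 m figure.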
The deposited metal sulfides can become massive sulfide ore deposits in time. Some black smokers along the Azores segment of the Mid-Atlantic Ridge are exceptionally metal-rich; for instance, hydrothermal fluids from the Rainbow Vent Field contain up to 24,000 μM of dissolved iron. Black smokers were first discovered in 1979 on the East Pacific Rise by scientists from the Scripps Institution of Oceanography during the RISE Project. They were observed using the deep submergence vehicle Alvin from the Woods Hole Oceanographic Institution. Black smokers are now known to exist in the Atlantic and Pacific Oceans, at an average depth of 2,100 m (6,900 ft). The most northerly black smokers are a cluster of five named Loki's Castle, discovered in 2008 by scientists from the University of Bergen at 73°N, on the Mid-Atlantic Ridge between Greenland and Norway. These black smokers are of interest as they are in a more stable area of the Earth's crust, where tectonic forces are weaker and consequently fields of hydrothermal vents are less common. The world's deepest known black smokers are located in the Cayman Trough, 5,000 m (3.1 miles) below the ocean's surface. White smoker vents emit lighter-hued minerals, such as those containing barium, calcium and silicon. These vents also tend to have lower-temperature plumes, probably because they are generally distant from their heat source. Black and white smokers may coexist in the same hydrothermal field, but they generally represent proximal (close) and distal (distant) vents relative to the main upflow zone, respectively. However, white smokers correspond mostly to the waning stages of such hydrothermal fields, as magmatic heat sources become progressively more distant from the source (due to magma crystallization) and hydrothermal fluids become dominated by seawater instead of magmatic water. Mineralizing fluids from this type of vent are rich in calcium, and they form dominantly sulfate-rich (i.e., barite and anhydrite) and carbonate deposits. Hydrothermal plumes Hydrothermal plumes are bodies of fluid that form where hydrothermal fluids are expelled into the overlying water column at active hydrothermal vent sites. As hydrothermal fluids typically harbor physical (e.g., temperature, density) and chemical (e.g., pH, Eh, major ions) properties distinct from seawater, hydrothermal plumes embody physical and chemical gradients that promote several types of chemical reactions, including oxidation-reduction reactions and precipitation reactions. Hydrothermal vent fluids harbor temperatures (~40 to >400 °C) well above those of ocean floor seawater (~4 °C), meaning that hydrothermal fluid is less dense than the surrounding seawater and will rise through the water column due to buoyancy, forming a hydrothermal plume; the phase during which hydrothermal plumes rise through the water column is therefore known as the "buoyant plume" phase. During this phase, shear forces between the hydrothermal plume and surrounding seawater generate turbulent flow that facilitates mixing between the two types of fluids, which progressively dilutes the hydrothermal plume with seawater. Eventually, the coupled effects of dilution and rising into progressively warmer (less dense) overlying seawater cause the hydrothermal plume to become neutrally buoyant at some height above the seafloor; this stage of hydrothermal plume evolution is therefore known as the "nonbuoyant plume" phase.
Once the plume is neutrally buoyant, it can no longer continue to rise through the water column and instead begins to spread laterally throughout the ocean, potentially over several thousands of kilometers. Chemical reactions occur concurrently with the physical evolution of hydrothermal plumes. While seawater is a relatively oxidizing fluid, hydrothermal vent fluids are typically reducing in nature. Consequently, reduced chemicals such as hydrogen gas, hydrogen sulfide, methane, Fe2+, and Mn2+ that are common in many vent fluids will react upon mixing with seawater. In fluids with high concentrations of H2S, dissolved metal ions such as Fe2+ and Mn2+ readily precipitate as dark-colored metal sulfide minerals (see "black smokers"). Furthermore, Fe2+ and Mn2+ entrained within the hydrothermal plume will eventually oxidize to form insoluble Fe and Mn (oxy)hydroxide minerals. For this reason, the hydrothermal "near field" has been proposed to refer to the hydrothermal plume region undergoing active oxidation of metals, while the term "far field" refers to the plume region within which complete metal oxidation has occurred. Several chemical tracers found in hydrothermal plumes are used to locate deep-sea hydrothermal vents during discovery cruises. Useful tracers of hydrothermal activity should be chemically unreactive, so that changes in tracer concentration subsequent to venting are due solely to dilution. The noble gas helium fits this criterion and is a particularly useful tracer of hydrothermal activity. This is because hydrothermal venting releases elevated concentrations of helium-3 relative to seawater, a rare, naturally occurring He isotope derived exclusively from the Earth's interior. Thus, the dispersal of 3He throughout the oceans via hydrothermal plumes creates anomalous seawater He isotope compositions that signify hydrothermal venting. Another noble gas that can serve as a tracer of hydrothermal activity is radon. As all naturally occurring isotopes of Rn are radioactive, Rn concentrations in seawater can also provide information on hydrothermal plume ages when combined with He isotope data. The isotope radon-222 is utilized for this purpose, as 222Rn has the longest half-life of all naturally occurring radon isotopes, roughly 3.82 days. Dissolved gases, such as H2, H2S, and CH4, and metals, such as Fe and Mn, present at high concentrations in hydrothermal vent fluids relative to seawater may also be diagnostic of hydrothermal plumes and thus active venting; however, these components are reactive and are thus less suitable as tracers of hydrothermal activity. Hydrothermal plumes represent an important mechanism through which hydrothermal systems influence marine biogeochemistry. Hydrothermal vents emit a wide variety of trace metals into the ocean, including Fe, Mn, Cr, Cu, Zn, Co, Ni, Mo, Cd, V, and W, many of which have biological functions. Numerous physical and chemical processes control the fate of these metals once they are expelled into the water column. Based on thermodynamic theory, Fe2+ and Mn2+ should oxidize in seawater to form insoluble metal (oxy)hydroxide precipitates; however, complexation with organic compounds and the formation of colloids and nanoparticles can keep these redox-sensitive elements suspended in solution far from the vent site. Fe and Mn often have the highest concentrations among metals in acidic hydrothermal vent fluids, and both have biological significance, particularly Fe, which is often a limiting nutrient in marine environments.
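The use of 222Rn as a plume-age clock, mentioned above, rests on first-order radioactive decay; as a brief worked example (the one-week transport time is purely illustrative):

\[
\lambda = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{3.82\,\text{d}} \approx 0.181\,\text{d}^{-1},
\qquad
\frac{A(t)}{A_0} = e^{-\lambda t},
\]

so after one week of plume transport, A/A0 = e^(-0.181 × 7) ≈ 0.28; comparing the measured decline in Rn activity against the dilution indicated by 3He thus constrains the plume's age.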
Far-field transport of Fe and Mn via organic complexation may therefore constitute an important mechanism of ocean metal cycling. Additionally, hydrothermal vents deliver significant concentrations of other biologically important trace metals to the ocean, such as Mo, which may have been important in the early chemical evolution of the Earth's oceans and to the origin of life (see "theory of hydrothermal origin of life"). However, Fe and Mn precipitates can also influence ocean biogeochemistry by removing trace metals from the water column. The charged surfaces of iron (oxy)hydroxide minerals effectively adsorb elements such as phosphorus, vanadium, arsenic, and rare earth metals from seawater; therefore, although hydrothermal plumes may represent a net source of metals such as Fe and Mn to the oceans, they can also scavenge other metals and non-metalliferous nutrients such as P from seawater, representing a net sink of these elements. Biology of hydrothermal vents Life has traditionally been seen as driven by energy from the sun, but deep-sea organisms have no access to sunlight, so biological communities around hydrothermal vents must depend on nutrients found in the dusty chemical deposits and hydrothermal fluids in which they live. Previously, benthic oceanographers assumed that vent organisms were dependent on marine snow, as other deep-sea organisms are. This would leave them dependent on plant life, and thus on the sun. Some hydrothermal vent organisms do consume this "rain", but with only such a system, life forms would be sparse. Compared to the surrounding sea floor, however, hydrothermal vent zones have a density of organisms 10,000 to 100,000 times greater. These organisms include yeti crabs, which collect food by reaching their long, hairy arms out over the vent. Hydrothermal vents are recognized as a type of chemosynthesis-based ecosystem (CBE), where primary productivity is fuelled by chemical compounds as energy sources instead of light (chemoautotrophy). Hydrothermal vent communities are able to sustain such vast amounts of life because vent organisms depend on chemosynthetic bacteria for food. The water from the hydrothermal vent is rich in dissolved minerals and supports a large population of chemoautotrophic bacteria. These bacteria use sulfur compounds, particularly hydrogen sulfide, a chemical highly toxic to most known organisms, to produce organic material through the process of chemosynthesis. The vents' impact on the living environment goes beyond the organisms that live around them, as the vents act as a significant source of iron in the oceans, providing iron for phytoplankton. The oldest confirmed record of a "modern" biological community associated with a vent is the Figueroa Sulfide, from the Early Jurassic of California. The ecosystem so formed is reliant upon the continued existence of the hydrothermal vent field as its primary source of energy, which differs from most surface life on Earth, which is based on solar energy. However, although it is often said that these communities exist independently of the sun, some of the organisms are actually dependent upon oxygen produced by photosynthetic organisms, while others are anaerobic. The chemosynthetic bacteria grow into a thick mat which attracts other organisms, such as amphipods and copepods, which graze upon the bacteria directly.
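A commonly cited overall reaction for the sulfide-driven chemosynthesis described above, with the fixed carbon written as the generic carbohydrate CH2O, is:

\[
\mathrm{CO_2 + 4\,H_2S + O_2 \longrightarrow CH_2O + 4\,S + 3\,H_2O},
\]

where the energy released by oxidizing hydrogen sulfide powers the fixation of carbon dioxide into organic matter.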
Larger organisms, such as snails, shrimp, crabs, tube worms, fish (especially eelpout, cutthroat eel, Ophidiiformes and Symphurus thermophilus), and octopuses (notably Vulcanoctopus hydrothermalis), form a food chain of predator and prey relationships above the primary consumers. The main families of organisms found around seafloor vents are annelids, gastropods, and crustaceans, with large bivalves, vestimentiferan worms, and "eyeless" shrimp making up the bulk of nonmicrobial organisms. Siboglinid tube worms, which may grow to over 2 m (6.6 ft) tall in the largest species, often form an important part of the community around a hydrothermal vent. They have no mouth or digestive tract, and like parasitic worms, absorb nutrients produced by the bacteria in their tissues. About 285 billion bacteria are found per ounce of tubeworm tissue. Tubeworms have red plumes which contain hemoglobin. The hemoglobin combines with hydrogen sulfide and transfers it to the bacteria living inside the worm. In return, the bacteria nourish the worm with carbon compounds. Two of the species that inhabit hydrothermal vents are Tevnia jerichonana and Riftia pachyptila. One discovered community, dubbed "Eel City", consists predominantly of the eel Dysommina rugosa. Though eels are not uncommon, invertebrates typically dominate hydrothermal vents. Eel City is located near Nafanua volcanic cone, American Samoa. By 1993, more than 100 gastropod species were already known to occur at hydrothermal vents. Over 300 new species have been discovered at hydrothermal vents, many of them "sister species" to others found in geographically separated vent areas. It has been proposed that before the North American Plate overrode the mid-ocean ridge, there was a single biogeographic vent region in the eastern Pacific. The subsequent barrier to travel began the evolutionary divergence of species in different locations. The examples of convergent evolution observed between distinct hydrothermal vents are regarded as major support for the theory of natural selection and of evolution as a whole. Although life is very sparse at these depths, black smokers are the centers of entire ecosystems. Sunlight is nonexistent, so many organisms, such as archaea and extremophiles, convert the heat, methane, and sulfur compounds provided by black smokers into energy through a process called chemosynthesis. More complex life forms, such as clams and tubeworms, feed on these organisms. The organisms at the base of the food chain also deposit minerals into the base of the black smoker, thereby completing the life cycle. A species of phototrophic bacterium has been found living near a black smoker off the coast of Mexico at a depth of 2,500 m (8,200 ft). No sunlight penetrates that far into the waters. Instead, the bacteria, part of the Chlorobiaceae family, use the faint glow from the black smoker for photosynthesis. This is the first organism discovered in nature to use a light other than sunlight exclusively for photosynthesis. New and unusual species are constantly being discovered in the neighborhood of black smokers. The Pompeii worm Alvinella pompejana, which is capable of withstanding temperatures up to 80 °C (176 °F), was found in the 1980s, and the scaly-foot gastropod (Chrysomallon squamiferum) was first found in 2001 during an expedition to the Indian Ocean's Kairei hydrothermal vent field.
The latter uses iron sulfides (pyrite and greigite) for the structure of its dermal sclerites (hardened body parts), instead of calcium carbonate. The extreme pressure of 2,500 m of water (approximately 25 megapascals or 250 atmospheres) is thought to play a role in stabilizing iron sulfide for biological purposes. This armor plating probably serves as a defense against the venomous radula (teeth) of predatory snails in that community. In March 2017, researchers reported evidence of possibly the oldest forms of life on Earth. Putative fossilized microorganisms were discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada, that may have lived as early as 4.280 billion years ago, not long after the oceans formed 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. Hydrothermal vent ecosystems have enormous biomass and productivity, but this rests on the symbiotic relationships that have evolved at vents. Deep-sea hydrothermal vent ecosystems differ from their shallow-water and terrestrial hydrothermal counterparts due to the symbiosis that occurs between macroinvertebrate hosts and chemoautotrophic microbial symbionts in the former. Since sunlight does not reach deep-sea hydrothermal vents, organisms there cannot obtain energy from the sun to perform photosynthesis. Instead, the microbial life found at hydrothermal vents is chemosynthetic; it fixes carbon by using energy from chemicals such as sulfide, as opposed to light energy from the sun. In other words, the symbiont converts inorganic molecules (H2S, CO2, O2) to organic molecules that the host then uses as nutrition. However, sulfide is an extremely toxic substance to most life on Earth. For this reason, scientists were astounded when they first found hydrothermal vents teeming with life in 1977. What was discovered was the ubiquitous symbiosis of chemoautotrophs living within (in endosymbiosis with) the vent animals' gills; this is the reason why multicellular life is capable of surviving the toxicity of vent systems. Scientists are therefore now studying how the microbial symbionts aid in sulfide detoxification (thereby allowing the host to survive the otherwise toxic conditions). Work on microbiome function shows that host-associated microbiomes are also important in host development, nutrition, defense against predators, and detoxification. In return, the host provides the symbiont with chemicals required for chemosynthesis, such as carbon, sulfide, and oxygen. In the early stages of studying life at hydrothermal vents, there were differing theories regarding the mechanisms by which multicellular organisms were able to acquire nutrients from these environments, and how they were able to survive in such extreme conditions. In 1977, it was hypothesized that the chemoautotrophic bacteria at hydrothermal vents might be responsible for contributing to the diet of suspension-feeding bivalves. Finally, in 1981, it was understood that giant tubeworm nutrition acquisition occurred as a result of chemoautotrophic bacterial endosymbionts. As scientists continued to study life at hydrothermal vents, it became understood that symbiotic relationships between chemoautotrophs and macroinvertebrate species were ubiquitous. For instance, in 1983, clam gill tissue was confirmed to contain bacterial endosymbionts; in 1984, vent bathymodiolid mussels and vesicomyid clams were also found to carry endosymbionts.
However, the mechanisms by which organisms acquire their symbionts differ, as do the metabolic relationships. For instance, tubeworms have no mouth and no gut, but they do have a "trophosome", which is where they deal with nutrition and where their endosymbionts are found. They also have a bright red plume, which they use to take up compounds such as O2, H2S, and CO2, which feed the endosymbionts in their trophosome. Remarkably, the tubeworm's hemoglobin (which incidentally is the reason for the bright red color of the plume) is capable of carrying oxygen without interference or inhibition from sulfide, despite the fact that oxygen and sulfide are typically very reactive. In 2005, it was discovered that this is possible due to zinc ions that bind the hydrogen sulfide in the tubeworm's hemoglobin, thereby preventing the sulfide from reacting with the oxygen. Binding the sulfide also reduces the exposure of the tubeworm's tissue to it, and provides the bacteria with the sulfide to perform chemoautotrophy. It has also been discovered that tubeworms can metabolize CO2 in two different ways, and can alternate between the two as environmental conditions change. In 1988, research confirmed thiotrophic (sulfide-oxidizing) bacteria in Alviniconcha hessleri, a large vent mollusk. In order to circumvent the toxicity of sulfide, mussels first convert it to thiosulfate before carrying it over to the symbionts. In the case of motile organisms such as alvinocarid shrimp, they must track oxic (oxygen-rich) and anoxic (oxygen-poor) environments as these fluctuate. Organisms living at the edge of hydrothermal vent fields, such as pectinid scallops, also carry endosymbionts in their gills, and as a result their bacterial density is low relative to that of organisms living nearer to the vent. The scallop's dependence on the microbial endosymbiont for obtaining its nutrition is correspondingly lessened. Furthermore, not all host animals have endosymbionts; some have episymbionts, symbionts living on the animal as opposed to inside it. Shrimp found at vents in the Mid-Atlantic Ridge were once thought of as an exception to the necessity of symbiosis for macroinvertebrate survival at vents. That changed in 1988, when they were discovered to carry episymbionts. Since then, other organisms at vents have been found to carry episymbionts as well, such as Lepetodrilus fucensis. Furthermore, while some symbionts reduce sulfur compounds, others are known as "methanotrophs" and oxidize carbon compounds, namely methane. Bathymodiolid mussels are an example of a host that contains methanotrophic endosymbionts; however, the latter mostly occur at cold seeps as opposed to hydrothermal vents. While chemosynthesis occurring in the deep ocean allows organisms to live without sunlight in the immediate sense, they technically still rely on the sun for survival, since oxygen in the ocean is a byproduct of photosynthesis. However, if the sun were to suddenly disappear and photosynthesis ceased to occur on our planet, life at the deep-sea hydrothermal vents could continue for millennia (until the oxygen was depleted). The chemical and thermal dynamics in hydrothermal vents make such environments highly suitable thermodynamically for chemical evolution processes to take place. Therefore, thermal energy flux is a permanent agent and is hypothesized to have contributed to the evolution of the planet, including prebiotic chemistry.
Günter Wächtershäuser proposed the iron-sulfur world theory and suggested that life might have originated at hydrothermal vents. Wächtershäuser proposed that an early form of metabolism predated genetics. By metabolism he meant a cycle of chemical reactions that release energy in a form that can be harnessed by other processes. It has been proposed that amino acid synthesis could have occurred deep in the Earth's crust and that these amino acids were subsequently shot upward along with hydrothermal fluids into cooler waters, where lower temperatures and the presence of clay minerals would have fostered the formation of peptides and protocells. This is an attractive hypothesis because of the abundance of CH4 (methane) and NH3 (ammonia) present in hydrothermal vent regions, a condition that was not provided by the Earth's primitive atmosphere. A major limitation of this hypothesis is the lack of stability of organic molecules at high temperatures, but some have suggested that life would have originated outside the zones of highest temperature. There are numerous species of extremophiles and other organisms currently living immediately around deep-sea vents, suggesting that this is indeed a possible scenario. Experimental research and computer modeling indicate that the surfaces of mineral particles inside hydrothermal vents have catalytic properties similar to those of enzymes and are able to create simple organic molecules, such as methanol (CH3OH) and formic acid (HCO2H), out of the CO2 dissolved in the water. Additionally, the discovery of supercritical CO2 at some sites has been used to further support the theory of a hydrothermal origin of life, given that it can increase organic reaction rates. Its high solvation power and diffusion rate allow it to promote amino and formic acid synthesis, as well as the synthesis of other organic compounds, polymers, and four amino acids: alanine, arginine, aspartic acid, and glycine. In situ experiments have revealed the convergence of high N2 content and supercritical CO2 at some sites, as well as evidence for complex organic material (amino acids) within supercritical CO2 bubbles. Proponents of this theory for the origin of life also propose the presence of supercritical CO2 as a solution to the "water paradox" that pervades theories on the origin of life in aquatic settings. This paradox encompasses the fact that water is both required for life and will, in abundance, hydrolyze organic molecules and prevent the dehydration synthesis reactions necessary for chemical and biological evolution. Supercritical CO2, being hydrophobic, acts as a solvent that facilitates an environment conducive to dehydration synthesis. Therefore, it has been hypothesized that the presence of supercritical CO2 in Hadean hydrothermal vents played an important role in the origin of life. There is some evidence that links the origin of life to alkaline hydrothermal vents in particular. The pH conditions of these vents may have made them more suitable for emerging life. One current theory is that the naturally occurring proton gradients at these deep-sea vents compensated for the lack of phospholipid bilayer membranes and proton pumps in early organisms, allowing ion gradients to form despite the absence of the cellular machinery and components present in modern cells. There is some discourse around this topic. It has been argued that a role for the natural pH gradients of these vents in the origin of life is actually implausible.
The counter-argument relies, among other points, on what the author describes as the unlikelihood that machinery which produces energy from the pH gradients found in hydrothermal vents could form without, or before, the existence of genetic information. This counterpoint has been responded to by Nick Lane, one of the researchers whose work it focuses on. He argues that the counterpoint largely misinterprets both his work and the work of others. Another reason that the view of deep-sea hydrothermal vents as an ideal environment for the origin of life remains controversial is the absence of wet-dry cycles and exposure to UV light, which promote the formation of membranous vesicles and the synthesis of many biomolecules. The ionic concentrations at hydrothermal vents also differ from the intracellular fluid of the majority of life. It has instead been suggested that terrestrial freshwater environments are more likely to be an ideal environment for the formation of early cells. Meanwhile, proponents of the deep-sea hydrothermal vent hypothesis suggest thermophoresis in mineral cavities as an alternative compartment for the polymerization of biopolymers. How thermophoresis within mineral cavities could promote coding and metabolism is unknown. Nick Lane suggests that nucleotide polymerization could occur at high concentrations of nucleotides within self-replicating protocells, where "Molecular crowding and phosphorylation in such confined, high-energy protocells could potentially promote the polymerization of nucleotides to form RNA". Acetyl phosphate could possibly promote polymerization at mineral surfaces or at low water activity. A computational simulation shows that, within such protocells, nucleotide catalysis of the energy-currency pathway is favored when energy is limiting, and that favoring this pathway feeds forward into greater nucleotide synthesis. Fast nucleotide catalysis of CO2 fixation makes protocell growth and division rapid, which then leads to a halving of the nucleotide concentration, while weak nucleotide catalysis of CO2 fixation contributes little to protocell growth and division. In biochemistry, reactions of CO2 with H2 produce precursors to biomolecules that are also produced by the acetyl-CoA pathway and Krebs cycle, which would support an origin of life at deep-sea alkaline vents. Acetyl phosphate produced from these reactions is capable of phosphorylating ADP to ATP, with maximum synthesis occurring at high water activity and low concentrations of ions; the Hadean ocean likely had lower concentrations of ions than modern oceans. The concentrations of Mg2+ and Ca2+ at alkaline hydrothermal systems are lower than those of the ocean. The high concentration of potassium within most life forms could be readily explained if protocells evolved sodium-hydrogen antiporters to pump out Na+, as prebiotic lipid membranes are less permeable to Na+ than to H+. If cells originated in these environments, they would have been autotrophs with a Wood-Ljungdahl pathway and an incomplete reverse Krebs cycle. Mathematical modelling indicates that the organic synthesis of carboxylic acids, lipids, nucleotides, amino acids, and sugars, and their polymerization reactions, are favorable at alkaline hydrothermal vents. At the beginning of his 1992 paper The Deep Hot Biosphere, Thomas Gold referred to ocean vents in support of his theory that the lower levels of the earth are rich in living biological material that finds its way to the surface. He further expanded his ideas in the book The Deep Hot Biosphere.
An article on abiogenic hydrocarbon production in the February 2008 issue of the journal Science used data from experiments at the Lost City hydrothermal field to report how the abiotic synthesis of low-molecular-mass hydrocarbons from mantle-derived carbon dioxide may occur in the presence of ultramafic rocks, water, and moderate amounts of heat. Discovery and exploration In 1949, a deep-water survey reported anomalously hot brines in the central portion of the Red Sea. Later work in the 1960s confirmed the presence of hot, 60 °C (140 °F), saline brines and associated metalliferous muds. The hot solutions were emanating from an active subseafloor rift. The highly saline character of the waters was not hospitable to living organisms. The brines and associated muds are currently under investigation as a source of mineable precious and base metals. In June 1976, scientists from the Scripps Institution of Oceanography obtained the first evidence for submarine hydrothermal vents along the Galápagos Rift, a spur of the East Pacific Rise, on the Pleiades II expedition, using the Deep-Tow seafloor imaging system. In 1977, the first scientific papers on hydrothermal vents were published by scientists from the Scripps Institution of Oceanography; research scientist Peter Lonsdale published photographs taken from deep-towed cameras, and PhD student Kathleen Crane published maps and temperature anomaly data. Transponders were deployed at the site, which was nicknamed "Clambake", to enable an expedition to return the following year for direct observations with the DSV Alvin. Chemosynthetic ecosystems surrounding the Galápagos Rift submarine hydrothermal vents were first directly observed in 1977, when a group of marine geologists funded by the National Science Foundation returned to the Clambake sites. The principal investigator for the submersible study was Jack Corliss of Oregon State University. Corliss and Tjeerd van Andel of Stanford University observed and sampled the vents and their ecosystem on February 17, 1977, while diving in the DSV Alvin, a research submersible operated by the Woods Hole Oceanographic Institution (WHOI). Other scientists on the research cruise included Richard (Dick) Von Herzen and Robert Ballard of WHOI, Jack Dymond and Louis Gordon of Oregon State University, John Edmond and Tanya Atwater of the Massachusetts Institute of Technology, Dave Williams of the U.S. Geological Survey, and Kathleen Crane of the Scripps Institution of Oceanography. This team published their observations of the vents, organisms, and the composition of the vent fluids in the journal Science. In 1979, a team of biologists led by J. Frederick Grassle, at the time at WHOI, returned to the same location to investigate the biological communities discovered two years earlier. High-temperature hydrothermal vents, the "black smokers", were discovered in spring 1979 by a team from the Scripps Institution of Oceanography using the submersible Alvin. The RISE expedition explored the East Pacific Rise at 21° N with the goals of testing geophysical mapping of the sea floor with the Alvin and finding another hydrothermal field beyond the Galápagos Rift vents. The expedition was led by Fred Spiess and Ken Macdonald and included participants from the U.S., Mexico and France. The dive region was selected based on the discovery of sea floor mounds of sulfide minerals by the French CYAMEX expedition in 1978.
Prior to dive operations, expedition member Robert Ballard located near-bottom water temperature anomalies using a deeply towed instrument package. The first dive was targeted at one of those anomalies. On Easter Sunday, April 15, 1979, during a dive of Alvin to 2,600 meters, Roger Larson and Bruce Luyendyk found a hydrothermal vent field with a biological community similar to the Galápagos vents. On a subsequent dive, on April 21, William Normark and Thierry Juteau discovered the high-temperature vents emitting jets of black mineral particles from chimneys: the black smokers. Following this, Macdonald and Jim Aiken rigged a temperature probe to Alvin to measure the water temperature at the black smoker vents. This recorded the highest temperatures then measured at deep sea hydrothermal vents (380 ± 30 °C). Analysis of the black smoker material and of the chimneys that fed the smokers revealed that iron sulfide precipitates are the common minerals in the "smoke" and in the walls of the chimneys.

In 2005, Neptune Resources NL, a mineral exploration company, applied for and was granted 35,000 km2 of exploration rights over the Kermadec Arc in New Zealand's Exclusive Economic Zone to explore for seafloor massive sulfide deposits, a potential new source of lead-zinc-copper sulfides formed from modern hydrothermal vent fields. The discovery of a vent in the Pacific Ocean offshore of Costa Rica, named the Medusa hydrothermal vent field (after the serpent-haired Medusa of Greek mythology), was announced in April 2007. The Ashadze hydrothermal field (13°N on the Mid-Atlantic Ridge, elevation −4200 m) was the deepest known high-temperature hydrothermal field until 2010, when a hydrothermal plume emanating from the Beebe site (18°33′N 81°43′W, elevation −5000 m) was detected by a group of scientists from the NASA Jet Propulsion Laboratory and the Woods Hole Oceanographic Institution. This site is located on the 110 km long, ultraslow-spreading Mid-Cayman Rise within the Cayman Trough. In early 2013, the deepest known hydrothermal vents were discovered in the Caribbean Sea at a depth of almost 5,000 metres (16,000 ft). Oceanographers are studying the volcanoes and hydrothermal vents of the Juan de Fuca mid-ocean ridge, where tectonic plates are moving away from each other. Hydrothermal vents and other geothermal manifestations are currently being explored in the Bahía de Concepción, Baja California Sur, Mexico.

Distribution

Hydrothermal vents are distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. As of 2009 there were approximately 500 known active submarine hydrothermal vent fields, with about half visually observed at the seafloor and the other half suspected from water column indicators and/or seafloor deposits. Rogers et al. (2012) recognized at least 11 biogeographic provinces of hydrothermal vent systems.

Exploitation

Hydrothermal vents, in some instances, have led to the formation of exploitable mineral resources via the deposition of seafloor massive sulfide deposits. The Mount Isa orebody, located in Queensland, Australia, is an excellent example. Many hydrothermal vents are rich in cobalt, gold, copper, and rare earth metals essential for electronic components. Hydrothermal venting on the Archean seafloor is considered to have formed Algoma-type banded iron formations, which have been a source of iron ore.
Recently, mineral exploration companies, driven by elevated base metal prices during the mid-2000s, have turned their attention to the extraction of mineral resources from hydrothermal fields on the seafloor. Significant cost reductions are, in theory, possible. In countries such as Japan, where mineral resources are primarily derived from international imports, there is a particular push for the extraction of seafloor mineral resources. The world's first "large-scale" mining of hydrothermal vent mineral deposits was carried out by Japan Oil, Gas and Metals National Corporation (JOGMEC) in August and September 2017, using the research vessel Hakurei. This mining was carried out at the 'Izena hole/cauldron' vent field within the hydrothermally active back-arc basin known as the Okinawa Trough, which contains 15 confirmed vent fields according to the InterRidge Vents Database.

Two companies are currently in the late stages of preparing to mine seafloor massive sulfides (SMS). Nautilus Minerals is in the advanced stages of commencing extraction from its Solwara deposit, in the Bismarck Archipelago, and Neptune Minerals is at an earlier stage with its Rumble II West deposit, located on the Kermadec Arc, near the Kermadec Islands. Both companies propose using modified existing technology. Nautilus Minerals, in partnership with Placer Dome (now part of Barrick Gold), succeeded in 2006 in returning over 10 metric tons of mined SMS to the surface using modified drum cutters mounted on an ROV, a world first. Neptune Minerals in 2007 succeeded in recovering SMS sediment samples using a modified oil industry suction pump mounted on an ROV, also a world first.

Potential seafloor mining has environmental impacts, including dust plumes from mining machinery affecting filter-feeding organisms, collapsing or reopening vents, methane clathrate release, and even sub-oceanic landslides. There are also potential environmental effects from the tools needed to mine these hydrothermal vent ecosystems, including noise pollution and anthropogenic light. Hydrothermal vent system mining would require both submerged mining tools on the seafloor, including remotely operated underwater vehicles (ROVs), and support vessels on the ocean surface. Inevitably, the operation of these machines will create some level of noise, which presents a problem for hydrothermal vent organisms: at up to 12,000 feet below the surface of the ocean, they normally experience very little noise. As a result, these organisms have evolved highly sensitive hearing organs, and a sudden increase in noise, such as that created by mining machinery, has the potential to damage these organs and harm the animals. Many studies have shown that a large percentage of benthic organisms communicate using very low-frequency sounds; increasing ambient noise levels on the seafloor could therefore mask communication between organisms and alter behavioral patterns. Just as deep-sea SMS mining tools create noise pollution, they also create anthropogenic light on the seafloor (from the mining tools themselves) and at the ocean surface (from the surface support vessels). Organisms at these hydrothermal vent systems live in the aphotic zone of the ocean and have adapted to very low light conditions.
Studies on deep sea shrimp have shown that floodlights used on the sea floor to study vent systems can cause permanent retinal damage, warranting further research into the potential risk to other vent organisms. Beyond the risk to deep-sea organisms, the surface support vessels use nocturnal anthropogenic lighting. Research has shown that this type of lighting on the ocean surface can disorient seabirds and cause fallout, in which the birds fly toward the anthropogenic light and become exhausted or collide with man-made objects, resulting in injury or death. Both aquatic and land organisms must therefore be considered when evaluating the environmental effects of hydrothermal vent mining.

Three mining waste processes, known as side cast sediment release, the dewatering process, and sediment shift or disturbance, would be expected with deep-sea mining and could result in the accumulation of a sediment plume or cloud, which can have substantial environmental implications. Side cast sediment release would occur at the seafloor and would involve the movement of material at the seafloor by the submerged ROVs; it would most likely contribute to the formation of sediment plumes at the seafloor. The idea of side cast release is that the ROVs would discard economically valueless material to the side of the mining site before transporting the sulfide material to the supporting vessel at the surface. The goal of this process is to reduce the amount of material being transferred to the surface and to minimize land-based processing of waste. The dewatering process is a mining waste process that would most likely contribute to the formation of sediment plumes from the surface. This method of mine waste disposal releases water from the ship that was taken up during the extraction and transport of the material from the seafloor to the surface. The third contribution to the formation of the sediment plume or cloud would be sediment disturbance and release. This contribution is mainly associated with the mining activity on the seafloor: the movement of the ROVs and the destructive disturbance of the seafloor by the mining process itself.

The two main environmental concerns arising from these waste mining processes are the release of heavy metals and the increased amount of sediment released. The release of heavy metals is mainly associated with the dewatering process that would take place on board the ship at the surface. The main problem with dewatering is that what is released is not just seawater re-entering the water column: heavy metals such as copper and cobalt, sourced from the material extracted on the seafloor, are mixed in with the water that is released. The first concern is that this has the potential to change ocean chemistry within that localized water column area. The second is that some of the released heavy metals can be toxic, not only to organisms inhabiting the area but also to organisms passing through the mining site. The concerns surrounding increased sediment release relate mainly to the other two mining waste processes, side cast sediment release and seafloor sediment disturbance.
The main environmental concern would be the smothering of organisms below, as large amounts of sediment are redistributed to other areas of the seafloor, which could threaten the population of organisms inhabiting the area. Redistribution of large quantities of sediment can also affect feeding and gas exchange between organisms, posing a serious threat to the population. Finally, these processes can increase the sedimentation rate on the seafloor, with a predicted minimum of 500 m per every 1–10 km. A large amount of work is currently being undertaken by both of the above-mentioned companies to ensure that the potential environmental impacts of seafloor mining are well understood and that control measures are implemented before exploitation commences. However, this process has arguably been hindered by the disproportionate distribution of research effort among vent ecosystems: the best studied and understood hydrothermal vent ecosystems are not representative of those targeted for mining.

Attempts have been made in the past to exploit minerals from the seafloor. The 1960s and 1970s saw a great deal of activity (and expenditure) in the recovery of manganese nodules from the abyssal plains, with varying degrees of success. This does demonstrate, however, that the recovery of minerals from the seafloor is possible and has been possible for some time. Mining of manganese nodules served as a cover story for the elaborate attempt in 1974 by the CIA to raise the sunken Soviet submarine K-129 using the Glomar Explorer, a ship purpose-built for the task by Howard Hughes. The operation was known as Project Azorian, and the cover story of seafloor mining of manganese nodules may have served as the impetus that propelled other companies to make the attempt.

Conservation

The conservation of hydrothermal vents has been the subject of sometimes heated discussion in the oceanographic community for the last 20 years. It has been pointed out that those causing the most damage to these fairly rare habitats may be the scientists themselves. There have been attempts to forge agreements over the behaviour of scientists investigating vent sites but, although there is an agreed code of practice, there is as yet no formal international and legally binding agreement. Conservation of hydrothermal vent ecosystems after the mining of an active system would depend on the recolonization of chemosynthetic bacteria, and therefore on the continuation of the hydrothermal vent fluid, which is the main source of energy. It is very difficult to gauge the effects of mining on the vent fluid because no large-scale studies have been done. However, there have been studies on the recolonization of vent ecosystems after volcanic destruction. From these we can develop insight into the potential effects of mining destruction: it took 3–5 years for bacteria to recolonize the area, and around 10 years for megafauna to return. There was also a shift in the composition of species in the ecosystem compared to before the destruction, along with the presence of immigrant species; further research into the effects of sustained seafloor SMS mining on species recolonization is still needed.

Geochronological dating

Common methods for determining the ages of hydrothermal vents are to date their sulfide minerals (e.g., pyrite) and sulfate minerals (e.g., baryte). Common dating methods include radiometric dating and electron spin resonance dating.
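The arithmetic behind the radiometric approach is the standard exponential decay law. As a minimal Python sketch, the following uses the decay of 226Ra as an example (its roughly 1,600-year half-life is a well-known physical constant, while the activity values and the assumption of a known initial activity are purely illustrative):

    # Minimal sketch of the decay arithmetic behind radiometric dating of
    # vent minerals. The half-life is a known physical constant; the
    # activity values below are invented for illustration.
    import math

    RA226_HALF_LIFE_YEARS = 1_600  # approximate half-life of radium-226

    def decay_age(initial_activity, measured_activity, half_life):
        """Solve A = A0 * exp(-lambda * t) for t, assuming a closed system
        and a known activity at the time of mineral formation."""
        decay_const = math.log(2) / half_life
        return math.log(initial_activity / measured_activity) / decay_const

    # Hypothetical chimney sample retaining a quarter of its initial
    # activity: two half-lives, i.e. about 3,200 years old.
    print(f"{decay_age(1.0, 0.25, RA226_HALF_LIFE_YEARS):,.0f} years")

The same relation also shows why mixed samples are troublesome: a measured activity that averages young and old mineral phases yields an apparent age that corresponds to neither phase.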
Different dating methods have their own limitations, assumptions, and challenges. General challenges include the high purity of extracted minerals required for dating, the limited age range of each dating method, the erasure of the ages of older minerals by heating above closure temperatures, and multiple episodes of mineral formation resulting in a mixture of ages. In environments with multiple phases of mineral formation, electron spin resonance dating generally gives the average age of the bulk mineral, while radiometric dates are biased toward the ages of the younger phases because of the decay of parent nuclei. This explains why different methods can give different ages for the same sample, and why the same hydrothermal chimney can yield samples with different ages.

History and formation of hydrothermal vents

Although some biogeochemists such as Rogers et al. (2012) have identified sites of hydrothermal vents, the locations of known hydrothermal vent formations in deep sea systems are not well understood. The ocean floor is not well explored, with less than 1% of it well known. Most of the hydrothermal vents scientists are currently aware of form along mid-ocean ridges. The location of these systems is important to understanding their formation, as most accepted theories revolve around seismic activity, particularly near volcanic regions. Seismic activity during Paleocene and Eocene continental rifting led to eruptions of gases, liquids, and sediments from deep within the Earth. This intrusive event created large craters sitting on top of sills, layers of igneous rock formed where magma intrudes between existing layers of stratified rock. These large craters on the seafloor are collections of hydrothermal vents. Distinct features of these vents include inward-dipping sedimentary strata, and sandstone dykes, pipes, and breccias. These features are categorized as subvolcanic intrusions, which lead to hydrothermal activity. A study used 2D seismic reflection data to characterize the structures of these systems, which sit sunken in craters with a funneled side profile; these structures are often referred to as chimneys, which form over the surface of the vents. The oceanic crust and the seawater interact to form these systems, altering the local chemistry and forming deposits that are rich in various metals. This distinctive deposition of metals and alteration of the local chemistry in turn create conditions that support thermophiles and other organisms.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hopscotch_(programming_language)] | [TOKENS: 867]
Hopscotch (programming language)

Hopscotch is a visual programming language developed by Hopscotch Technologies, designed to allow young or beginner programmers to develop simple projects. Its simple UI allows users to drag and drop blocks to create scripts that can be played when activated. The language is used through an iPad or iPhone that supports Hopscotch.

Software development

The idea sprang from an existing programming tool, Scratch, in which the user drags blocks to create a script. The developers of Hopscotch wanted to take a step back from Scratch, making the concepts slightly easier to grasp and use. Hopscotch's notion of events, and of rules combining conditions with actions, is similar to AgentSheets. Hopscotch includes basic programming blocks and functionality such as variables, sprites (called objects) and text objects, as well as features considered more advanced, such as self-variables, maths functions and more.

Editor

The Hopscotch app uses a block-based programming UI. Most code blocks can take numeric, text, or math inputs, allowing for both static and dynamic outputs. The editor work area is based on a grid divided into X and Y coordinates. The Hopscotch editor is available on iPhone and iPad. The iPhone version only supported viewing projects until early 2016, when an update supporting editing and account functionality was released. Hopscotch iPhone projects play in an iPhone format even on the iPad and in the web player. A version for Android is not planned for release (as of 2021). Event blocks are conditional triggers that activate when a specific set of parameters is met, triggering any associated Code blocks within the activated Event block. As of September 26, 2023, Hopscotch contains 40 Event blocks, including interactions, comparisons, and collision detection. Code blocks are individual actions triggered upon the activation of Event blocks, executed in descending order. Code blocks fall into six categories: Abilities, Movement, Looks & Sounds, Drawing, Variables, and Controls. Abilities are containers for Code blocks, creating a function which can be duplicated and reused within a project. Movement blocks control the positioning and rotation of objects. Looks & Sounds blocks control the scale and appearance of objects, text manipulation, sound playback, and the transparency of objects. Drawing blocks paint preset colors to the background layer of a project, with additional options for stroke width and RGB/HSB support for custom colors. Variable blocks handle data storage and modification, with support for strings and numerical inputs. Control blocks provide miscellaneous functionality, such as if/else conditionals, message passing, and waiting a set amount of time.

Player

The Hopscotch player activates the blocks in the scripts upon activation of their individual triggers. The player is also available on the web (known as the "Webplayer"), which brings Hopscotch projects to almost any browser. It is designed to work the same as the in-app player, though it has a different coding layout than the app. The web version of a project is only accessible via its unique link, formatted as https://c.gethopscotch.com/p/ followed by the project ID. Both the in-app and the web player are written in JavaScript. As of 2024, a version is also being developed for easier access to Hopscotch through a computer.
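As an illustration of the Event block/Code block model described above, the following is a minimal Python sketch, not Hopscotch's actual implementation (the real player is closed-source JavaScript); the class, event, and block names here are invented for this example.

    # Minimal sketch of the rule model described above: an Event block
    # gates a list of Code blocks, which execute top to bottom when the
    # event fires. All names are illustrative, not Hopscotch's real API.

    class HopscotchObject:
        def __init__(self, name, x=0, y=0):
            self.name, self.x, self.y = name, x, y
            self.rules = []                      # (event, [code blocks]) pairs

        def when(self, event, *code_blocks):
            """Attach Code blocks to an Event block, forming a rule."""
            self.rules.append((event, list(code_blocks)))

        def fire(self, event):
            """Run every rule whose Event block matches the event."""
            for trigger, blocks in self.rules:
                if trigger == event:
                    for block in blocks:         # Code blocks run in order
                        block(self)

    def move_right(obj):                         # a stand-in Movement block
        obj.x += 10
        print(f"{obj.name} moved to ({obj.x}, {obj.y})")

    star = HopscotchObject("Star")
    star.when("tapped", move_right, move_right)  # one rule, two Code blocks
    star.fire("tapped")                          # prints two moves

In these terms, an Ability would correspond to a named, reusable list of such block functions that can be attached to multiple rules.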
Subscription

Hopscotch currently offers a subscription, which costs $79.99 a year or $9.99 a month. The subscription allows access to adding photos or drawings, 30 "seeds" (the form of Hopscotch currency) a month, custom avatars, user variables, and more. An account requires the subscription in order to post projects or create drafts; teacher accounts do not need the subscription, nor do accounts created through the Webplayer.

Hopscotch Forum

The Hopscotch Forum is the official online forum for Hopscotch, where users discuss Hopscotch projects and programming, and view update information for changes made to the Hopscotch app. Users may also host or participate in competitions and events, and collaborate on projects. As of December 2024, the Hopscotch community has transitioned to a Discord server, and the forum has been placed in read-only mode.

Languages

Supported languages: English, Simplified Chinese, Spanish.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hume_(programming_language)] | [TOKENS: 544]
Hume (programming language)

Hume is a functionally based programming language developed at the University of St Andrews and Heriot-Watt University in Scotland since the year 2000. The language name is both an acronym meaning 'Higher-order Unified Meta-Environment' and an honorific to the 18th-century philosopher David Hume. It targets real-time embedded systems, aiming to produce a design that is both highly abstract and yet allows the precise extraction of time and space execution costs. This makes it possible to guarantee the bounded time and space demands of executing programs. Hume combines functional programming ideas with ideas from finite-state automata. Automata are used to structure communicating programs into a series of "boxes", where each box maps inputs to outputs in a purely functional way using high-level pattern matching. The language is structured as a series of levels, each of which exposes different machine properties.

Design model

The Hume language design attempts to maintain the essential properties and features required by the embedded systems domain (especially transparent time and space costing) whilst incorporating as high a level of program abstraction as possible. It aims to target applications ranging from simple microcontrollers to complex real-time systems such as smartphones. This ambitious goal requires incorporating both low-level notions such as interrupt handling and high-level ones such as data structure abstraction. Such systems are programmed in widely differing ways, but the language design should accommodate these varying requirements. Rather than attempting to apply cost modelling and correctness-proving technology to an existing language framework, either directly or by altering a more general language (as with, e.g., RTSJ), the approach taken by the Hume designers is to design Hume in such a way that formal models and proofs can definitely be constructed. Hume is structured as a series of overlapping language levels, where each level adds expressibility to the expression semantics, but either loses some desirable property or increases the technical difficulty of providing formal correctness/cost models.

Characteristics

The interpreter and compiler versions differ slightly. The coordination system wires boxes together in a dataflow programming style. The expression language is Haskell-like. The message-passing concurrency system is reminiscent of JoCaml's join-patterns or Polyphonic C Sharp's chords, but with all channels asynchronous. A built-in scheduler continuously attempts pattern matching on all boxes in turn, putting on hold any box that cannot copy its outputs to busy input destinations.
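The box-and-wire scheduling just described can be illustrated with a short Python sketch. This is not Hume syntax; the one-place buffers, the round-robin loop, and all names are simplifying assumptions made for illustration.

    # Sketch of the scheduler behaviour described above: each box is a
    # pure function wired to a destination through a one-place buffer.
    # The scheduler polls boxes in turn, skipping a box with no input
    # and holding its output while the destination buffer is occupied.

    class Box:
        def __init__(self, name, fn):
            self.name, self.fn = name, fn
            self.inbox = None       # one-place input buffer
            self.pending = None     # output held while the wire is busy
            self.out = None         # destination Box, if any

    def scheduler_round(boxes):
        for box in boxes:
            if box.pending is None and box.inbox is not None:
                box.pending, box.inbox = box.fn(box.inbox), None   # pure step
            if (box.pending is not None and box.out is not None
                    and box.out.inbox is None):
                box.out.inbox, box.pending = box.pending, None     # copy output

    double = Box("double", lambda x: x * 2)
    show = Box("show", lambda x: print(f"show: {x}"))
    double.out = show

    double.inbox = 21
    for _ in range(3):              # a few scheduler rounds; prints "show: 42"
        scheduler_round([double, show])

A real Hume box matches its inputs against patterns rather than accepting any value, but the hold-and-retry behaviour of the scheduler is the point of the sketch.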
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTERaskin199291-101] | [TOKENS: 8460]
Joke

A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punchline, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition:

A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry.

It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1]

History in print

Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punchline) has not survived intact, though legible fragments suggest it was bawdy in nature.

Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)" or "with Tertia putting out."

The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and a number of different authors have been attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch".

During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons.

Telling jokes

Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why does he tell them when he does? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking.

Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience of being exposed to an off-colour joke: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content of the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline remains the same; however, it is more or less appropriate depending on the current context.

The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience, as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting.

The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, moving from general to topical to explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh? What do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships

The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship.

Electronic

The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially.

Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation.

Joke cycles

A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or the logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture.

Many joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns".

The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target of the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it."

A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes that goes beyond the simple collection and documentation previously undertaken by folklorists and ethnologists.
Classification systems

As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., the numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke.

A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index.

Several difficulties have been identified with these systems of classifying oral narratives according to either tale types or story elements. The first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour.
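To make the concatenated label concrete, the following is a hedged Python sketch of a GTVH-style label as a data structure. The six KR names come from Attardo and Raskin's published theory; the example joke values and the shared-value similarity count are invented for illustration.

    # Sketch of a GTVH-style classification label. The six Knowledge
    # Resources are those of the published theory; the example values
    # and the similarity measure are illustrative only.
    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass
    class GTVHLabel:
        script_opposition: str             # SO, highest in the hierarchy
        logical_mechanism: Optional[str]   # LM, may be empty
        situation: str                     # SI
        target: Optional[str]              # TA, may be empty
        narrative_strategy: str            # NS
        language: str                      # LA, lowest in the hierarchy

    def similarity(a: GTVHLabel, b: GTVHLabel) -> int:
        """Count shared KR values; more shared KRs suggests more similar jokes."""
        return sum(getattr(a, f.name) == getattr(b, f.name) for f in fields(a))

    bulb_a = GTVHLabel("dumb/smart", "faulty logic", "lightbulb", "Poles",
                       "riddle", "neutral English")
    bulb_b = GTVHLabel("dumb/smart", "faulty logic", "lightbulb", "blondes",
                       "riddle", "neutral English")
    print(similarity(bulb_a, bulb_b))      # 5 of 6 KRs shared

Note how both example labels pair the lightbulb Situation with the riddle Narrative Strategy, mirroring the hierarchy restriction mentioned above.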
Joke and humour research

Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6]

Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?

A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased.

A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions.

The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools.

"The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of the joke's meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study, Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build (a toy example of this template approach is sketched at the end of this passage). More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
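To illustrate the template approach, and its limits, here is a toy punning program. This is a minimal sketch only; the template and the small table of pairings are invented for illustration and are not drawn from any actual computational-humour system:

```python
import random

# A toy punning program in the template style described above:
# no semantic scripts, just one fixed template plus a finite,
# hand-written table of pre-defined punning options (all invented).
PUN_TABLE = [
    # (who, what they lost, sound-alike pun word)
    ("banker", "interest", "disinterested"),
    ("baker", "dough", "kneadless"),
    ("electrician", "current", "shocking"),
]

TEMPLATE = "Why did the {who} quit? They lost all their {what}. How {pun}!"

def make_pun() -> str:
    """Fill the single fixed template with one pre-defined word triple."""
    who, what, pun = random.choice(PUN_TABLE)
    return TEMPLATE.format(who=who, what=what, pun=pun)

if __name__ == "__main__":
    print(make_pun())
```

The program can only ever emit its hand-coded combinations; it has no access to the wide-ranging semantic scripts the GTVH requires, which is exactly the limitation of the "dummy punning programs" noted above.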
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-newin-2.0-52] | [TOKENS: 4314]
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on search results from 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. All older supported 3.x versions received security updates, down to Python 3.9.24 and then 3.9.25, the final release in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable Android build is available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python strives for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use; all three are shown in the sketch below. Alex Martelli is a Fellow at the Python Software Foundation and a Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
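For instance, the three common ways to format a string literal mentioned above can be shown side by side; this is standard Python, with variable names of our choosing:

```python
name, version = "Python", 3.14

# 1. printf-style %-formatting (the oldest mechanism)
s1 = "%s %s" % (name, version)

# 2. str.format(), added in Python 2.6/3.0
s2 = "{} {}".format(name, version)

# 3. f-strings (formatted string literals), added in Python 3.6
s3 = f"{name} {version}"

assert s1 == s2 == s3 == "Python 3.14"
```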
For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, because Python is a very dynamic language, or compiles only a restricted subset of Python (with potentially minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels (a short generator example follows below). Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby.
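As a short illustration of passing data back into a generator (the coroutine-like functionality mentioned above), here is a minimal running-total generator; the example itself is ours, but the generator send() protocol is standard Python:

```python
def running_total():
    """A generator that receives numbers via send() and yields the total so far."""
    total = 0
    while True:
        value = yield total   # send() delivers its argument as the value of `yield`
        if value is not None:
            total += value

gen = running_total()
next(gen)             # prime the generator; runs to the first `yield`, producing 0
print(gen.send(10))   # 10
print(gen.send(5))    # 15
```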
This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a module, typing, that provides several type names for use in type annotations. Also, the mypy project includes a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. These behaviours are demonstrated in the examples below. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics.
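The division, modulo, exponentiation, and rounding behaviours just described can be verified directly in the interpreter; the following assertions reflect standard Python 3 semantics:

```python
# True division always yields a float; floor division rounds toward negative infinity.
assert 7 / 2 == 3.5
assert 7 // 2 == 3
assert -7 // 2 == -4          # floor, not truncation
assert 4 % -3 == -2           # the remainder takes the sign of the divisor

# The invariant b*(a//b) + a%b == a holds for positive and negative values.
for a in (7, -7):
    for b in (3, -3):
        assert b * (a // b) + a % b == a

# Exponentiation, and round-half-to-even tie-breaking in Python 3.
assert 5 ** 3 == 125
assert 9 ** 0.5 == 3.0
assert round(1.5) == 2 and round(2.5) == 2
```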
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples A function that prints its inputs (with and without a default parameter value), the "Hello, World!" program, and a program to calculate the factorial of a non-negative integer are all shown in the sketch below. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, doing arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions; e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
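The sketch below gives the examples referred to above in standard textbook form; the exact code in the original article may have differed:

```python
# "Hello, World!" program:
print("Hello, World!")

# A function that prints its inputs; `greeting` has a default value,
# used when no actual value is provided at run time.
def greet(name, greeting="Hello"):
    print(greeting, name)

greet("World")            # Hello World
greet("World", "Howdy")   # Howdy World

# Program to calculate the factorial of a non-negative integer:
def factorial(n: int) -> int:
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))       # 120
```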
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported, and there was unofficial support for VMS; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; far fewer operating systems are supported than in the past, with many outdated platforms dropped. All alternative implementations have at least slightly different semantics. For example, an alternative implementation may use unordered dictionaries, in contrast to current CPython, where dictionaries preserve insertion order. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries very large even for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, where the source language is unrestricted Python, a subset of Python, or a language similar to Python; there are also specialized compilers, as well as some older projects and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several strategies and tools for optimizing Python performance, despite the inherent slowness of an interpreted language. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: backward-incompatible versions (which increment the first number), feature releases (which increment the second number), and bugfix releases (which increment the third number). Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-twitterzip2-58] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package for Musk worth $1 trillion was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared his dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, however, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk rejected the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with its competitor Confinity to avoid further competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to the resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk, the largest shareholder with 11.72% of shares, received $175.8 million (equivalent to $320,000,000 in 2025).
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials, which have caused the deaths of some monkeys, have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Following a Twitter poll, Musk promised to step down as CEO; five months later, he did so and transitioned to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, most of whom avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and guessed that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned it. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role remained unclear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge later ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his work through DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025. 
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, most notably when Musk alleged on X (formerly Twitter) on June 5, 2025, that Trump had ties to the sex offender Jeffrey Epstein. Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024. 
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently criticized for spreading misinformation and amplifying the far right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked at a TED2022 conference in Vancouver about his experience growing up with Asperger's syndrome, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who had become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. 
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the ongoing surrogate pregnancy, Musk confirmed reports in September 2021 that the couple were "semi-separated"; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; as of November 2020, around 75% of his wealth derived from Tesla stock, although he described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, in contrast to other billionaires, who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn denunciation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like the British survivors of grooming gangs. 
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Edward Felsenthal, then Time's editor-in-chief, wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Linguistics] | [TOKENS: 6513]
Linguistics Linguistics is the scientific study of language. The areas of linguistic analysis are syntax (rules governing the structure of sentences), semantics (meaning), morphology (structure of words), phonetics (speech sounds and equivalent gestures in sign languages), phonology (the abstract sound system of a particular language, and analogous systems of sign languages), and pragmatics (how the context of use contributes to meaning). Subdisciplines such as biolinguistics (the study of the biological variables and evolution of language) and psycholinguistics (the study of psychological factors in human language) bridge many of these divisions. Linguistics encompasses many branches and subfields that span both theoretical and practical applications. Theoretical linguistics is concerned with understanding the universal and fundamental nature of language and developing a general theoretical framework for describing it. Applied linguistics seeks to utilize the scientific findings of the study of language for practical purposes, such as developing methods of improving language education and literacy. Mathematical linguistics is the application of mathematics to model phenomena and solve problems in general linguistics and theoretical linguistics. Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. Linguistic features may be studied through a variety of perspectives: synchronically (by describing the structure of a language at a specific point in time) or diachronically (through the historical development of a language over a period of time), in monolinguals or in multilinguals, among children or among adults, in terms of how it is being learnt or how it was acquired, as abstract objects or as cognitive structures, through written texts or through oral elicitation, and finally through mechanical data collection or practical fieldwork. Linguistics emerged from the field of philology, of which some branches are more qualitative and holistic in approach. Today, philology and linguistics are variably described as related fields, subdisciplines, or separate fields of language study, but, by and large, linguistics can be seen as an umbrella term. Linguistics is also related to the philosophy of language, stylistics, rhetoric, semiotics, lexicography, and translation. Major subdisciplines Historical linguistics is the study of how language changes over history, particularly with regard to a specific language or a group of languages. Western trends in historical linguistics date back to roughly the late 18th century, when the discipline grew out of philology, the study of ancient texts and oral traditions. Historical linguistics emerged as one of the first few sub-disciplines in the field, and was most widely practised during the late 19th century. Despite a shift in focus in the 20th century towards formalism and generative grammar, which studies the universal properties of language, historical research today still remains a significant field of linguistic inquiry. Subfields of the discipline include language change and grammaticalization. Historical linguistics studies language change either diachronically (through a comparison of different time periods in the past and present) or in a synchronic manner (by observing developments between different variations that exist within the current linguistic stage of a language). 
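Much of the comparative work described below rests on identifying regular sound correspondences across cognate sets in related languages. As a minimal illustrative sketch of that idea (not a research tool: the cognate list and the naive word-initial alignment are simplifications chosen for this example), the following Python fragment tallies word-initial correspondences in a small Latin-English cognate list, where the recurring p- ~ f- match reflects the regular shift described by Grimm's law:

```python
from collections import Counter

# Toy cognate pairs (Latin, English); real comparative work uses large,
# carefully assembled cognate sets and full segment-by-segment alignment.
COGNATES = [
    ("pater", "father"), ("pes", "foot"), ("piscis", "fish"),
    ("tres", "three"), ("tu", "thou"),
]

def initial_correspondences(pairs):
    """Count how often each pair of word-initial segments co-occurs."""
    return Counter((a[0], b[0]) for a, b in pairs)

for (seg_a, seg_b), n in initial_correspondences(COGNATES).most_common():
    print(f"Latin {seg_a}- ~ English {seg_b}-: {n} pairs")
# Recurring matches such as p- ~ f- (3 pairs here) are the kind of regular
# correspondence the comparative method treats as evidence of common descent.
```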
At first, historical linguistics was the cornerstone of comparative linguistics, which involves a study of the relationship between different languages. At that time, scholars of historical linguistics were only concerned with creating different categories of language families, and reconstructing prehistoric proto-languages by using both the comparative method and the method of internal reconstruction. Internal reconstruction is the method by which an element that contains a certain meaning is re-used in different contexts or environments where there is a variation in either sound or analogy.[better source needed] The reason for this was to describe well-known Indo-European languages, many of which had detailed documentation and long written histories. Scholars of historical linguistics also studied Uralic languages, another European language family for which very little written material existed at the time. After that, there also followed significant work on the corpora of other languages, such as the Austronesian languages and the Native American language families. In historical work, the uniformitarian principle is generally the underlying working hypothesis, and it is occasionally also clearly expressed. The principle was expressed early by William Dwight Whitney, who considered it imperative, a "must", of historical linguistics to "look to find the same principle operative also in the very outset of that [language] history." The above approach of comparativism in linguistics is now, however, only a small part of the much broader discipline called historical linguistics. The comparative study of specific Indo-European languages is considered a highly specialized field today, while comparative research is carried out over the subsequent internal developments in a language: in particular, over the development of modern standard varieties of languages, and over the development of a language from its standardized form to its varieties.[citation needed] For instance, some scholars have also tried to establish super-families, linking, for example, Indo-European, Uralic, and other language families to a hypothetical Nostratic language group. While these attempts are still not widely accepted as credible methods, they provide necessary information to establish relatedness in language change. Such relatedness is generally hard to establish for events long ago, due to the occurrence of chance word resemblances and variations between language groups. A limit of around 10,000 years is often assumed for the functional purpose of conducting research. It is also hard to date various proto-languages; even though several methods are available, these languages can be dated only approximately. Modern historical linguistics examines how languages change over time, focusing on the relationships between dialects within a specific period. This includes studying morphological, syntactical, and phonetic shifts. Connections between dialects in the past and present are also explored. Syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, constituency, agreement, the nature of crosslinguistic variation, and the relationship between form and meaning. There are numerous approaches to syntax that differ in their central assumptions and goals. Morphology is the study of words, including the principles by which they are formed, and how they relate to one another within a language. 
Most approaches to morphology investigate the structure of words in terms of morphemes, which are the smallest units in a language with some independent meaning. Morphemes include roots that can exist as words by themselves, but also categories such as affixes that can only appear as part of a larger word. For example, in English the root catch and the suffix -ing are both morphemes; catch may appear as its own word, or it may be combined with -ing to form the new word catching. Morphology also analyzes how words behave as parts of speech, and how they may be inflected to express grammatical categories including number, tense, and aspect. Concepts such as productivity are concerned with how speakers create words in specific contexts, which evolves over the history of a language. The discipline that deals specifically with the sound changes occurring within morphemes is morphophonology. Semantics and pragmatics are branches of linguistics concerned with meaning. These subfields have traditionally been divided according to aspects of meaning: "semantics" refers to grammatical and lexical meanings, while "pragmatics" is concerned with meaning in context. Within linguistics, the subfield of formal semantics studies the denotations of sentences and how they are composed from the meanings of their constituent expressions. Formal semantics draws heavily on philosophy of language and uses formal tools from logic and computer science. On the other hand, cognitive semantics explains linguistic meaning via aspects of general cognition, drawing on ideas from cognitive science such as prototype theory. Pragmatics focuses on phenomena such as speech acts, implicature, and talk in interaction. Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on the structural and linguistic knowledge (grammar, lexicon, etc.) of the speaker and listener, but also on the context of the utterance, any pre-existing knowledge about those involved, the inferred intent of the speaker, and other factors. Phonetics and phonology are branches of linguistics concerned with sounds (or the equivalent aspects of sign languages). Phonetics is largely concerned with the physical aspects of sounds such as their articulation, acoustics, production, and perception. Phonology is concerned with the linguistic abstractions and categorizations of sounds: it describes which sounds occur in a language, how they may combine into words, and why certain phonetic features are important for identifying a word. Linguistic typology (or language typology) is a field of linguistics that studies and classifies languages according to their structural features to allow their comparison. Its aim is to describe and explain the structural diversity and the common properties of the world's languages. Its subdisciplines include, but are not limited to, phonological typology, which deals with sound features; syntactic typology, which deals with word order and form; lexical typology, which deals with language vocabulary; and theoretical typology, which aims to explain the universal tendencies. Structures Linguistic structures are pairings of meaning and form. Any particular pairing of meaning and form is a Saussurean linguistic sign. 
For instance, the meaning "cat" is represented worldwide with a wide variety of different sound patterns (in oral languages), movements of the hands and face (in sign languages), and written symbols (in written languages). Linguistic patterns have proven their importance for the knowledge engineering field, especially with the ever-increasing amount of available data. Linguists focusing on structure attempt to understand the rules regarding language use that native speakers know (not always consciously). All linguistic structures can be broken down into component parts that are combined according to (sub)conscious rules, over multiple levels of analysis. For instance, consider the structure of the word "tenth" on two different levels of analysis. On the level of internal word structure (known as morphology), the word "tenth" is made up of one linguistic form indicating a number and another form indicating ordinality. The rule governing the combination of these forms ensures that the ordinality marker "th" follows the number "ten." On the level of sound structure (known as phonology), structural analysis shows that the "n" sound in "tenth" is made differently from the "n" sound in "ten" spoken alone. Although most speakers of English are consciously aware of the rules governing the internal structure of the parts of "tenth", they are less often aware of the rule governing its sound structure. Linguists focused on structure find and analyze rules such as these, which govern how native speakers use language.[citation needed] Grammar is a system of rules that governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential subsets of rules, such as those pertaining to phonology (the organization of phonetic sound systems), morphology (the formation and composition of words), and syntax (the formation and composition of phrases and sentences). Modern frameworks that deal with the principles of grammar include structural and functional linguistics, and generative linguistics. Sub-fields that focus on a grammatical study of language include the following: Discourse is language as social practice (Baynham, 1995) and is a multilayered concept. As a social practice, discourse embodies different ideologies through written and spoken texts. Discourse analysis can examine or expose these ideologies. Discourse not only influences genre, which is selected based on specific contexts, but also, at a micro level, shapes language as text (spoken or written) down to the phonological and lexico-grammatical levels. Grammar and discourse are linked as parts of a system. A particular discourse becomes a language variety when it is used in this way for a particular purpose, and is referred to as a register. There may be certain lexical additions (new words) that are brought into play because of the expertise of the community of people within a certain domain of specialization. Thus, registers and discourses distinguish themselves not only through specialized vocabulary but also, in some cases, through distinct stylistic choices. People in the medical profession, for example, may use some medical terminology in their communication that is specialized to the field of medicine. This is often referred to as being part of the "medical discourse", and so on. The lexicon is a catalogue of words and terms that are stored in a speaker's mind. 
The lexicon consists of words and bound morphemes, which are parts of words that cannot stand alone, like affixes. In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered to be part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included. Lexicography, closely linked with the domain of semantics, is the science of mapping words into a dictionary or encyclopedia. The creation and addition of new words (into the lexicon) is called coining or neologization, and the new words are called neologisms. It is often believed that a speaker's capacity for language lies in the quantity of words stored in the lexicon. However, linguists generally consider this a myth. The capacity for the use of language is considered by many linguists to lie primarily in the domain of grammar, and to be linked with competence, rather than with the growth of vocabulary. Even a very small lexicon is theoretically capable of producing an infinite number of sentences. Vocabulary size is relevant as a measure of comprehension. There is general consensus that reading comprehension of a written text in English requires 98% coverage, meaning that the reader knows 98% of the words in the text. The question of how much vocabulary is needed is therefore related to which texts or conversations need to be understood. A common estimate is 6,000-7,000 word families to understand a wide range of conversations, and 8,000-9,000 word families to be able to read a wide range of written texts. Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media. It involves the study and interpretation of texts for aspects of their linguistic and tonal style. Stylistic analysis entails the analysis and description of particular dialects and registers used by speech communities. Stylistic features include rhetoric, diction, stress, satire, irony, dialogue, and other forms of phonetic variation. Stylistic analysis can also include the study of language in canonical works of literature, popular fiction, news, advertisements, and other forms of communication in popular culture as well. It is usually seen as a variation in communication that changes from speaker to speaker and community to community. In short, stylistics is the interpretation of text. In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself. Palaeography is therefore the discipline that studies the evolution of written scripts (as signs and symbols) in language. The formal study of language also led to the growth of fields like psycholinguistics, which explores the representation and function of language in the mind; neurolinguistics, which studies language processing in the brain; biolinguistics, which studies the biology and evolution of language; and language acquisition, which investigates how children and adults acquire the knowledge of one or more languages. Methodology Modern linguistics is primarily descriptive. Linguists describe and explain features of language without making subjective judgments on whether a particular feature or usage is "good" or "bad". 
This is analogous to practice in other sciences: a zoologist studies the animal kingdom without making subjective judgments on whether a particular species is "better" or "worse" than another. Prescription, on the other hand, is an attempt to promote particular linguistic usages over others, often favoring a particular dialect or "acrolect". This may have the aim of establishing a linguistic standard, which can aid communication over large geographical areas. It may also, however, be an attempt by speakers of one language or dialect to exert influence over speakers of other languages or dialects (see Linguistic imperialism). An extreme version of prescriptivism can be found among censors, who attempt to eradicate words and structures that they consider to be destructive to society. Prescription, however, may be practised appropriately in language instruction, as in English language teaching (ELT), where certain fundamental grammatical rules and lexical items need to be introduced to a second-language speaker who is attempting to acquire the language.[citation needed] Most contemporary linguists work under the assumption that spoken data and signed data are more fundamental than written data. This is because speech appears to be universal to all human communities, while many languages have never been written; speech evolved before writing was invented; and people learn to speak and to process spoken language more easily and earlier than they learn to write. Nonetheless, linguists agree that the study of written language can be worthwhile and valuable. For research that relies on corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. In addition, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry. The study of writing systems themselves, graphemics, is, in any case, considered a branch of linguistics. Before the 20th century, linguists analysed language on a diachronic plane, which was historical in focus. This meant that they would compare linguistic features and try to analyse language from the point of view of how it had changed over time. However, with the rise of Saussurean linguistics in the 20th century, the focus shifted to a more synchronic approach, where the study was geared towards analysis and comparison between different language variations, which existed at the same given point of time. At another level, the syntagmatic plane of linguistic analysis entails the comparison between the way words are sequenced, within the syntax of a sentence. For example, the article "the" is followed by a noun, because of the syntagmatic relation between the words. The paradigmatic plane, on the other hand, focuses on an analysis that is based on the paradigms or concepts that are embedded in a given text. In this case, words of the same type or class may be replaced in the text with each other to achieve the same conceptual understanding. History The earliest activities in the description of language have been attributed to the 6th-century BC Indian grammarian Pāṇini, who composed a formal description of the Sanskrit language in his Aṣṭādhyāyī. Today, modern-day theories on grammar employ many of the principles that were laid down then. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the study of language, which was then predominantly historical in focus. 
Since Ferdinand de Saussure's insistence on the importance of synchronic analysis, however, this focus has shifted and the term philology is now generally used for the "study of a language's grammar, history, and literary tradition", especially in the United States (where philology has never been very popularly considered as the "science of language"). Although the term linguist in the sense of "a student of language" dates from 1641, the term linguistics is first attested in 1847. It is now the usual term in English for the scientific study of language, though linguistic science is sometimes used. Linguistics is a multi-disciplinary field of research that combines tools from natural sciences, social sciences, formal sciences, and the humanities. Many linguists, such as David Crystal, conceptualize the field as being primarily scientific. The term linguist applies to someone who studies language or is a researcher within the field, or to someone who uses the tools of the discipline to describe and analyse specific languages. An early formal study of language was undertaken in India by the 6th-century BC grammarian Pāṇini, who formulated 3,959 rules of Sanskrit morphology. Pāṇini's systematic classification of the sounds of Sanskrit into consonants and vowels, and word classes, such as nouns and verbs, was the first known instance of its kind. In the Middle East, Sibawayh, a Persian, made a detailed description of Arabic in AD 760 in his monumental work, Al-kitab fii an-naħw (الكتاب في النحو, The Book on Grammar), the first known author to distinguish between sounds and phonemes (sounds as units of a linguistic system). Western interest in the study of languages began somewhat later than in the East, but the grammarians of the classical languages did not use the same methods or reach the same conclusions as their contemporaries in the Indic world. Early interest in language in the West was a part of philosophy, not of grammatical description. The first insights into semantic theory were made by Plato in his Cratylus dialogue, where he argues that words denote concepts that are eternal and exist in the world of ideas. This work is the first to use the word etymology to describe the history of a word's meaning. Around 280 BC, one of Alexander the Great's successors founded a university (see Musaeum) in Alexandria, where a school of philologists studied the ancient texts in Greek, and taught Greek to speakers of other languages. While this school was the first to use the word "grammar" in its modern sense, Plato had used the word in its original meaning as "téchnē grammatikḗ" (Τέχνη Γραμματική), the "art of writing", which is also the title of one of the most important works of the Alexandrine school by Dionysius Thrax. Throughout the Middle Ages, the study of language was subsumed under the topic of philology, the study of ancient languages and texts, practised by such educators as Roger Ascham, Wolfgang Ratke, and John Amos Comenius. In the 18th century, the first use of the comparative method by William Jones sparked the rise of comparative linguistics. Bloomfield attributes "the first great scientific linguistic work of the world" to Jacob Grimm, who wrote Deutsche Grammatik. It was soon followed by other authors writing similar comparative studies on other language groups of Europe. 
The study of language was broadened from Indo-European to language in general by Wilhelm von Humboldt, of whom Bloomfield asserts: This study received its foundation at the hands of the Prussian statesman and scholar Wilhelm von Humboldt (1767–1835), especially in the first volume of his work on Kavi, the literary language of Java, entitled Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluß auf die geistige Entwickelung des Menschengeschlechts (On the Variety of the Structure of Human Language and its Influence upon the Mental Development of the Human Race). There was a shift of focus from historical and comparative linguistics to synchronic analysis in the early 20th century. Structural analysis was advanced by Leonard Bloomfield, Louis Hjelmslev, and Zellig Harris, who also developed methods of discourse analysis. Functional analysis was developed by the Prague linguistic circle and André Martinet. As sound recording devices became commonplace in the 1960s, dialectal recordings were made and archived, and the audio-lingual method provided a technological solution to foreign language learning. The 1960s also saw a new rise of comparative linguistics: the study of language universals in linguistic typology. Towards the end of the century, the field of linguistics became divided into further areas of interest with the advent of language technology and digitalized corpora. Areas of research Sociolinguistics is the study of how language is shaped by social factors. This sub-discipline focuses on the synchronic approach of linguistics, and looks at how a language in general, or a set of languages, display variation and varieties at a given point in time. The study of language variation and the different varieties of language through dialects, registers, and idiolects can be tackled through a study of style, as well as through analysis of discourse. Sociolinguists research both style and discourse in language, as well as the theoretical factors that are at play between language and society. Developmental linguistics is the study of the development of linguistic ability in individuals, particularly the acquisition of language in childhood. Some of the questions that developmental linguistics looks into are how children acquire different languages, how adults can acquire a second language, and what the process of language acquisition is. Neurolinguistics is the study of the structures in the human brain that underlie grammar and communication. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modelling. Among the brain structures involved in the mechanisms of neurolinguistics, the cerebellum, which contains the highest number of neurons, plays a major role in the predictions required to produce language. Linguists are largely concerned with finding and describing the generalities and varieties both within particular languages and among all languages. 
Applied linguistics takes the results of those findings and "applies" them to other areas. Linguistic research is commonly applied to areas such as language education, lexicography, translation, language planning, which involves governmental policy implementation related to language use, and natural language processing. "Applied linguistics" has been argued to be something of a misnomer: applied linguists actually focus on making sense of and engineering solutions for real-world linguistic problems, not literally "applying" existing technical knowledge from linguistics. Moreover, they commonly apply technical knowledge from multiple sources, such as sociology (e.g., conversation analysis) and anthropology. (Constructed languages also fall under applied linguistics.) Today, computers are widely used in many areas of applied linguistics. Speech synthesis and speech recognition use phonetic and phonemic knowledge to provide voice interfaces to computers. Applications of computational linguistics in machine translation, computer-assisted translation, and natural language processing are areas of applied linguistics that have come to the forefront. Their influence has had an effect on theories of syntax and semantics, as modelling syntactic and semantic theories on computers constrains those theories. Linguistic analysis is a sub-discipline of applied linguistics used by many governments to verify the claimed nationality of people seeking asylum who do not hold the necessary documentation to prove their claim. This often takes the form of an interview by personnel in an immigration department. Depending on the country, this interview is conducted either in the asylum seeker's native language through an interpreter or in an international lingua franca like English. Australia uses the former method, while Germany employs the latter; the Netherlands uses either method depending on the languages involved. Tape recordings of the interview then undergo language analysis, which can be done either by private contractors or within a department of the government. In this analysis, linguistic features of the asylum seeker are used by analysts to make a determination about the speaker's nationality. The reported findings of the linguistic analysis can play a critical role in the government's decision on the refugee status of the asylum seeker. Language documentation combines anthropological inquiry (into the history and culture of language) with linguistic inquiry, in order to describe languages and their grammars. Lexicography involves the documentation of words that form a vocabulary. Such documentation of a linguistic vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education and teaching which are based on linguistic research.[citation needed] Since the inception of the discipline of linguistics, linguists have been concerned with describing and analysing previously undocumented languages. Starting with Franz Boas in the early 1900s, this became the main focus of American linguistics until the rise of formal linguistics in the mid-20th century. 
This focus on language documentation was partly motivated by a concern to document the rapidly disappearing languages of indigenous peoples. The ethnographic dimension of the Boasian approach to language description played a role in the development of disciplines such as sociolinguistics, anthropological linguistics, and linguistic anthropology, which investigate the relations between language, culture, and society.[citation needed] The emphasis on linguistic description and documentation has also gained prominence outside North America, with the documentation of rapidly dying indigenous languages becoming a focus in some university programs in linguistics. Language description is a work-intensive endeavour, usually requiring years of field work in the language concerned, so as to equip the linguist to write a sufficiently accurate reference grammar. Further, the task of documentation requires the linguist to collect a substantial corpus in the language in question, consisting of texts and recordings, both sound and video, which can be stored in an accessible format within open repositories, and used for further research. The sub-field of translation includes the translation of written and spoken texts across media, from digital to print and spoken. To translate literally means to transmute the meaning from one language into another. Translators are often employed by organizations such as travel agencies and governmental embassies to facilitate communication between two speakers who do not know each other's language. Translators are also employed to work within computational linguistics setups like Google Translate, which is an automated program to translate words and phrases between any two or more given languages. Translation is also conducted by publishing houses, which convert works of writing from one language to another in order to reach varied audiences. Cross-national and cross-cultural survey research studies employ translation to collect comparable data among multilingual populations. Academic translators specialize in or are familiar with various other disciplines such as technology, science, law, economics, etc. Clinical linguistics is the application of linguistic theory to the field of speech-language pathology. Speech-language pathologists work on corrective measures to treat communication and swallowing disorders. Computational linguistics is the study of linguistic issues in a way that is "computationally responsible", i.e., taking careful note of computational considerations such as algorithmic specification and computational complexity, so that the linguistic theories devised can be shown to exhibit certain desirable computational properties in their implementations. Computational linguists also work on computer language and software development. Evolutionary linguistics is a sociobiological approach to analyzing the emergence of the language faculty through human evolution, and also the application of evolutionary theory to the study of cultural evolution among different languages. It is also a study of the dispersal of various languages across the globe, through movements among ancient communities. Forensic linguistics is the application of linguistic analysis to forensics. Forensic analysis investigates the style, language, lexical use, and other linguistic and grammatical features used in the legal context to provide evidence in courts of law. Forensic linguists have also used their expertise in the framework of criminal cases. 
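As a toy illustration of the kind of quantitative comparison used in forensic stylometry (this is not a forensically validated method; the function-word list, the two texts, and all names here are invented for the example), one can compare relative function-word frequencies across two texts:

```python
from collections import Counter
import math

# A handful of English function words; real stylometric studies use
# hundreds of features plus statistical controls. Illustrative only.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(text_a, text_b):
    """Euclidean distance between function-word profiles; smaller
    values weakly suggest more similar style."""
    pa, pb = profile(text_a), profile(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

known = "the report states that the device was found in the vehicle"
query = "the letter claims that the package was left in the hallway"
print(f"style distance: {style_distance(known, query):.4f}")
```

Real forensic work relies on far richer feature sets and careful validation; the point here is only that stylistic comparison can be made quantitative.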
========================================
[SOURCE: https://en.wikipedia.org/wiki/Torpedo_Data_Computer] | [TOKENS: 2609]
Torpedo Data Computer
The Torpedo Data Computer (TDC) was an early electromechanical analog computer used for torpedo fire-control on American submarines during World War II. Britain, Germany, and Japan also developed automated torpedo fire control equipment, but none were as advanced as the US Navy's TDC, as it was able to automatically track the target rather than simply offering an instantaneous firing solution. This unique capability of the TDC set the standard for submarine torpedo fire control during World War II. Replacing the previously standard hand-held slide rule-type devices (known as the "banjo" and "Is/Was"), the TDC was designed to provide fire-control solutions for submarine torpedo firing against surface ships. The TDC was a rather bulky addition to the sub's conning tower and required two extra crewmen: one as an expert in its maintenance, the other as its actual operator. Despite these drawbacks, the use of the TDC was an important factor in the successful commerce raiding program conducted by American submarines during the Pacific campaign of World War II. Accounts of the American submarine campaign in the Pacific often cite the use of the TDC. Some officers became highly skilled in its use, and the Navy set up a training school for operation of the device. Two upgraded World War II-era U.S. Navy fleet submarines (USS Tusk and Cutlass) with their TDCs continue to serve with Taiwan's navy, and U.S. Nautical Museum staff are assisting them with maintaining their equipment. The museum also has a fully restored and functioning TDC from USS Pampanito, docked in San Francisco.

Background
The problem of aiming a torpedo has occupied military engineers since Robert Whitehead developed the modern torpedo in the 1860s. These early torpedoes ran at a preset depth on a straight course (consequently they are frequently referred to as "straight runners"). This was the state of the art in torpedo guidance until the development of the homing torpedo during the latter part of World War II. The vast majority of submarine torpedoes during World War II were straight running, and these continued in use for many years after World War II. In fact, two World War II-era straight running torpedoes — fired by the British nuclear-powered submarine HMS Conqueror — sank ARA General Belgrano in 1982. During World War I, computing a target intercept course for a torpedo was a manual process in which the fire control party was aided by mechanical calculator/sights or various slide rules – the U.S. examples were the Mark VIII Angle Solver (colloquially called the "banjo", for its shape) and the "Is/Was" circular slide rule (Nasmith Director), for predicting where a target will be based on where it is now and was. These were often "woefully inaccurate", which helps explain why torpedo spreads were advised.

During World War II, Germany, Japan, and the United States each developed analog computers to automate the process of computing the required torpedo course. In 1932, the Bureau of Ordnance (BuOrd) initiated development of the TDC with Arma Corporation and Ford Instruments. This culminated in the "very complicated" Mark 1 in 1938. This was retrofitted into older boats, beginning with Dolphin and up through the newest Salmons. The first submarine designed to use the TDC was Tambor, launched in 1940 with the Mark III, located in the conning tower. (This differed from earlier outfits.) It proved to be the best torpedo fire control system of World War II.
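The "Is/Was" idea — predicting where a target will be from where it is and where it was — is simple dead reckoning. The Python sketch below is only an illustration of that principle; the function name, units (yards, seconds), and example numbers are invented here and do not describe the actual device's procedure:

import math

# Illustrative dead reckoning: given where the target "was" and "is",
# estimate its course and speed, then predict where it will be.
def predict(was, is_, dt_obs, dt_ahead):
    # was, is_: (x, y) positions in yards; dt_obs: seconds between the two fixes.
    vx = (is_[0] - was[0]) / dt_obs
    vy = (is_[1] - was[1]) / dt_obs
    speed_kn = math.hypot(vx, vy) / 0.563            # 1 knot is about 0.563 yd/s
    course = math.degrees(math.atan2(vx, vy)) % 360  # clockwise from north
    future = (is_[0] + vx * dt_ahead, is_[1] + vy * dt_ahead)
    return course, speed_kn, future

course, speed, future = predict((0, 4000), (450, 3780), 60.0, 120.0)
print(f"course {course:.0f} deg, speed {speed:.1f} kn, position in 2 min: {future}")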
In 1943, the Torpedo Data Computer Mark IV was developed to support the Mark 18 torpedo. Both the Mk III and Mk IV TDC were developed by Arma Corporation (now American Bosch Arma).

A straight-running torpedo has a gyroscope-based control system that ensures that the torpedo will run a straight course. The torpedo can run on a course different from that of the submarine by adjusting a parameter called the gyro angle, which sets the course of the torpedo relative to the course of the submarine (see Figure 2). The primary role of the TDC is to determine the gyro angle setting required to ensure that the torpedo will strike the target. Determining the gyro angle required the real-time solution of a complex trigonometric equation (see Equation 1 for a simplified example). The TDC provided a continuous solution to this equation using data updates from the submarine's navigation sensors and the TDC's target tracker. The TDC was also able to automatically update all torpedo gyro angle settings simultaneously with a fire control solution, which improved the accuracy over systems that required manual updating of the torpedo's course. The TDC enables the submarine to launch the torpedo on a course different from that of the submarine, which is important tactically. Otherwise, the submarine would need to be pointed at the projected intercept point in order to launch a torpedo. Requiring the entire vessel to be pointed in order to launch a torpedo would be time consuming, require precise submarine course control, and would needlessly complicate the torpedo firing process. The TDC with target tracking gives the submarine the ability to maneuver independently of the required target intercept course for the torpedo.

As is shown in Figure 2, in general, the torpedo does not actually move in a straight path immediately after launch and it does not instantly accelerate to full speed; these effects are referred to as torpedo ballistic characteristics. The ballistic characteristics are described by three parameters: reach, turning radius, and corrected torpedo speed. Also, the target bearing angle is different from the point of view of the periscope versus the point of view of the torpedo, which is referred to as torpedo tube parallax. These factors are a significant complication in the calculation of the gyro angle, and the TDC must compensate for their effects.

Straight running torpedoes were usually launched in salvo (i.e., multiple launches in a short period of time) or a spread (i.e., multiple launches with slight angle offsets) to increase the probability of striking the target given the inaccuracies present in the measurement of angles, target range, target speed, torpedo track angle, and torpedo speed. Salvos and spreads were also launched to strike tough targets multiple times to ensure their destruction. The TDC supported the firing of torpedo salvos by allowing short time offsets between firings, and torpedo spreads by adding small angle offsets to each torpedo's gyro angle. Before the sinking of South Korea's ROKS Cheonan by North Korea in 2010, the last warship sunk by a submarine torpedo attack was ARA General Belgrano in 1982, struck by two torpedoes from a three-torpedo spread.

To accurately compute the gyro angle for a torpedo in a general engagement scenario, the target course, speed, range, and bearing must be accurately known. During World War II, target course, range, and bearing estimates often had to be generated using periscope observations, which were highly subjective and error prone.
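Stripped of ballistics and parallax, the underlying intercept geometry can be solved in closed form: the torpedo and the target must reach the same point at the same time. The following Python sketch is a simplified illustration under those idealized assumptions; the names, units, and example numbers are invented for this sketch and do not reproduce the TDC's mechanization:

import math

def torpedo_solution(range_yd, bearing_deg, target_course_deg, target_kn, torpedo_kn):
    """Idealized straight-runner intercept: find the torpedo course and run time
    so that torpedo and target arrive at the same point simultaneously.
    Assumes the torpedo is faster than the target and runs straight at constant speed."""
    # Target's initial position relative to the firing point (yards);
    # bearings and courses are measured clockwise from north.
    b = math.radians(bearing_deg)
    px, py = range_yd * math.sin(b), range_yd * math.cos(b)
    # Velocities in yd/s; 1 knot is about 0.563 yd/s.
    c = math.radians(target_course_deg)
    vt = target_kn * 0.563
    vx, vy = vt * math.sin(c), vt * math.cos(c)
    vw = torpedo_kn * 0.563
    # |p + v t| = vw t  ->  (vw^2 - vt^2) t^2 - 2 (p . v) t - |p|^2 = 0
    a = vw * vw - vt * vt
    bq = -2.0 * (px * vx + py * vy)
    cq = -(px * px + py * py)
    t = (-bq + math.sqrt(bq * bq - 4 * a * cq)) / (2 * a)  # positive root
    ix, iy = px + vx * t, py + vy * t                      # intercept point
    course = math.degrees(math.atan2(ix, iy)) % 360.0
    return course, t

# Example: target 2,000 yd away bearing 045, steaming due south at 16 kn; 46-kn torpedo.
course, run_time = torpedo_solution(2000, 45, 180, 16, 46)
print(f"torpedo course {course:.1f} deg, run time {run_time:.0f} s")

The resulting course would then be taken relative to the submarine's own heading to obtain a gyro angle; the real TDC additionally corrected for reach, turning radius, corrected torpedo speed, and tube parallax, as discussed below.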
The TDC was used to refine the estimates of the target's course, range, and bearing through a process of continuous observation and correction: the target position predicted by the TDC was compared against fresh periscope, radar, or sonar observations, and the estimated target parameters were adjusted until predictions and observations agreed. Estimating the target's course was generally considered the most difficult of the observation tasks. The accuracy of the result was highly dependent on the experience of the skipper. During combat, the actual course of the target was not usually determined; instead the skippers determined a related quantity called "angle on the bow." Angle on the bow is the angle formed by the target course and the line of sight to the submarine. Some skippers, like Richard O'Kane, practiced determining the angle on the bow by looking at Imperial Japanese Navy ship models mounted on a calibrated lazy Susan through an inverted binocular barrel.

To generate target position data versus time, the TDC needed to solve the equations of motion for the target relative to the submarine. The equations of motion are differential equations and the TDC used mechanical integrators to generate its solution. The TDC needed to be positioned near other fire control equipment to minimize the amount of electromechanical interconnect. Because submarine space within the pressure hull was limited, the TDC needed to be as small as possible. On World War II submarines, the TDC and other fire control equipment was mounted in the conning tower, which was a very small space. The packaging problem was severe and the performance of some early torpedo fire control equipment was hampered by the need to make it small. The TDC had an array of handcranks, dials, and switches for data input and display. To generate a fire control solution, it required inputs on the submarine's own course and speed, the target's estimated course, speed, range, and bearing, and the torpedo's type and speed.

The TDC performed the trigonometric calculations required to compute a target intercept course for the torpedo. It also had an electromechanical interface to the torpedoes, allowing it to automatically set courses while torpedoes were still in their tubes, ready to be fired. The TDC's target tracking capability was used by the fire control party to continuously update the fire control solution even while the submarine was maneuvering. The TDC's target tracking ability also allowed the submarine to accurately fire torpedoes even when the target was temporarily obscured by smoke or fog. Since the TDC performed two separate functions, generating target position estimates and computing torpedo firing angles, the TDC consisted of two types of analog computers: the angle solver, which computed the gyro angle setting for the torpedo, and the position keeper, which generated a continuously updated estimate of the target's position.

The equations implemented in the angle solver can be found in the Torpedo Data Computer manual. The Submarine Torpedo Fire Control Manual discusses the calculations in a general sense and a greatly abbreviated form of that discussion is presented here. The general torpedo fire control problem is illustrated in Figure 2. The problem is made more tractable if we assume that the torpedo travels in a straight path after launch, that it runs at a fixed, known speed, and that it departs from the position of the periscope, so that tube parallax can be ignored. As can be seen in Figure 2, these assumptions are not true in general because of the torpedo ballistic characteristics and torpedo tube parallax. Providing the details as to how to correct the torpedo gyro angle calculation for ballistics and parallax is complicated and beyond the scope of this article. Most discussions of gyro angle determination take the simpler approach of using Figure 3, which is called the torpedo fire control triangle. Figure 3 provides an accurate model for computing the gyro angle when the gyro angle is small, usually less than 30°. The effects of parallax and ballistics are minimal for small gyro angle launches because the course deviations they cause are usually small enough to be ignorable.
U.S. submarines during World War II preferred to fire their torpedoes at small gyro angles because the TDC's fire control solutions were most accurate for small angles. The problem of computing the gyro angle setting is a trigonometry problem that is simplified by first considering the calculation of the deflection angle, which ignores torpedo ballistics and parallax. For small gyro angles, θGyro ≈ θBearing − θDeflection. A direct application of the law of sines to Figure 3 produces Equation 1:

sin(θDeflection) / vTarget = sin(θBow) / vTorpedo    (Equation 1)

where vTarget is the speed of the target, vTorpedo is the speed of the torpedo, θBow is the angle on the bow, and θDeflection is the deflection angle. Range plays no role in Equation 1, which is true as long as the three assumptions are met. In fact, Equation 1 is the same equation solved by the mechanical sights of steerable torpedo tubes used on surface ships during World War I and World War II. Torpedo launches from steerable torpedo tubes meet the three stated assumptions well. However, an accurate torpedo launch from a submarine requires parallax and torpedo ballistic corrections when gyro angles are large. These corrections require knowing range accurately. When the target range was not known, torpedo launches requiring large gyro angles were not recommended.

Equation 1 is frequently modified to substitute track angle for deflection angle (track angle is defined in Figure 2, θTrack = θBow + θDeflection). This modification is illustrated with Equation 2:

sin(θDeflection) = (vTarget / vTorpedo) · sin(θTrack − θDeflection)    (Equation 2)

where θTrack is the angle between the target ship's course and the torpedo's course. A number of publications state the optimum torpedo track angle as 110° for a Mk 14 (46-knot weapon). Figure 4 shows a plot of the deflection angle versus track angle when the gyro angle is 0° (i.e., θDeflection = θBearing). Optimum track angle is defined as the point of minimum deflection angle sensitivity to track angle errors for a given target speed. This minimum occurs at the points of zero slope on the curves in Figure 4 (these points are marked by small triangles). The curves show the solutions of Equation 2 for deflection angle as a function of target speed and track angle. Figure 4 confirms that 110° is the optimum track angle for a 16-knot (30 km/h) target, which would be a common ship speed.

As with the angle solver, the equations implemented in the position keeper can be found in the Torpedo Data Computer manual. Similar functions were implemented in the rangekeepers for surface ship-based fire control systems. For a general discussion of the principles behind the position keeper, see Rangekeeper.
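The zero-slope (optimum) point can be checked numerically. The short Python sketch below is an illustration only, not the TDC's actual mechanization, and its function name is invented; it solves Equation 2 for the deflection angle by fixed-point iteration and scans track angles for the flattest point of the curve:

import math

def deflection_deg(track_deg: float, target_kn: float, torpedo_kn: float) -> float:
    """Solve Equation 2, sin(d) = (vTarget/vTorpedo) * sin(track - d), for d
    by fixed-point iteration (which converges because vTarget < vTorpedo)."""
    r = target_kn / torpedo_kn
    track = math.radians(track_deg)
    d = 0.0
    for _ in range(60):
        d = math.asin(r * math.sin(track - d))
    return math.degrees(d)

# Scan track angles for a 16-knot target and a 46-knot Mk 14: the deflection
# curve flattens (zero slope) at its maximum, which is the optimum track angle.
best = max(range(0, 181), key=lambda t: deflection_deg(t, 16, 46))
print(best, round(deflection_deg(best, 16, 46), 1))  # ~110 deg track, ~20.4 deg deflection

Setting the derivative of Equation 2 to zero shows the optimum occurs where θBow = 90°, giving θDeflection = arcsin(16/46) ≈ 20.4° and θTrack ≈ 110.4°, which matches the published 110° figure.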
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States_Navy] | [TOKENS: 11696]
United States Navy
Founded: October 13, 1775 (as the Continental Navy)

The United States Navy (USN) is the maritime service branch of the United States Armed Forces and is designated as the navy of the United States in the Constitution. It is the world's most powerful navy with the largest displacement, at 4.5 million tons in 2021. It has the world's largest aircraft carrier fleet, with eleven in service, one undergoing trials, two new carriers under construction, and six other carriers planned as of 2024. With 336,978 personnel on active duty and 101,583 in the Ready Reserve, the U.S. Navy is the third largest of the United States military service branches in terms of personnel. It has 299 deployable combat vessels and about 4,012 operational aircraft as of 18 July 2023. The U.S. Navy is a part of the United States Department of Defense and is one of six armed forces of the United States and one of eight uniformed services of the United States.

The United States Navy traces its origins to the Continental Navy, which was established during the American Revolutionary War and was effectively disbanded as a separate entity shortly thereafter. After suffering significant loss of goods and personnel at the hands of the Barbary pirates from Algiers, the United States Congress passed the Naval Act of 1794 for the construction of six heavy frigates, the first ships of the Navy. The United States Navy played a major role in the American Civil War by blockading the Confederacy and seizing control of its rivers. It played the central role in the World War II defeat of Imperial Japan. The United States Navy emerged from World War II as the most powerful navy in the world, succeeding the British Navy.

The modern United States Navy maintains a sizable global presence, deploying in strength in such areas as the Western Pacific, the Mediterranean, and the Indian Ocean. It is a blue-water navy with the ability to project force onto the littoral regions of the world, engage in forward deployments during peacetime and rapidly respond to regional crises, making it a frequent actor in American foreign and military policy. The United States Navy is part of the Department of the Navy, alongside the United States Marine Corps, which is its coequal sister service. The Department of the Navy is headed by the civilian secretary of the Navy. The Department of the Navy is itself a military department of the Department of Defense, which is headed by the secretary of defense. The chief of naval operations (CNO) is the most senior Navy officer serving in the Department of the Navy.

Mission
To recruit, train, equip, and organize to deliver combat ready Naval forces to win conflicts and wars while maintaining security and deterrence through sustained forward presence.
— Mission statement of the United States Navy.

The Navy has three primary areas of responsibility. U.S. Navy training manuals state that the overall mission of the armed forces is "to be prepared to conduct prompt and sustained combat operations in support of the national interest." The Navy's five enduring functions are: sea control, power projection, deterrence, maritime security, and sealift.

History
It follows then as certain as that night succeeds the day, that without a decisive naval force we can do nothing definitive, and with it, everything honorable and glorious.
— George Washington, 15 November 1781, to Marquis de Lafayette

Would to Heaven we had a navy able to reform those enemies to mankind or crush them into non-existence.
— George Washington, 15 August 1786, to Marquis de Lafayette

Naval power . . . is the natural defense of the United States.
— John Adams

The Navy was rooted in the colonial seafaring tradition, which produced a large community of sailors, captains, and shipbuilders. In the early stages of the American Revolutionary War, Massachusetts had its own Massachusetts Naval Militia. The rationale for establishing a national navy was debated in the Second Continental Congress. Supporters argued that a navy would protect shipping, defend the coast, and make it easier to seek support from foreign countries. Detractors countered that challenging the British Royal Navy, then the world's preeminent naval power, was a foolish undertaking. Commander in Chief George Washington resolved the debate when he commissioned the ocean-going schooner USS Hannah to interdict British merchantmen and reported the captures to the Congress. On 13 October 1775, the Continental Congress authorized the purchase of two vessels to be armed for a cruise against British merchantmen; this resolution created the Continental Navy and is considered the first establishment of the U.S. Navy. The Continental Navy achieved mixed results; it was successful in a number of engagements and raided many British merchant vessels, but it lost twenty-four of its vessels and at one point was reduced to two in active service. In August 1785, after the Revolutionary War had drawn to a close, Congress sold Alliance, the last ship remaining in the Continental Navy, due to a lack of funds to maintain the ship or support a navy. In 1972, the chief of naval operations, Admiral Elmo Zumwalt, authorized the Navy to celebrate its birthday on 13 October to honor the establishment of the Continental Navy in 1775.

The United States was without a navy for nearly a decade, a situation that exposed U.S. merchant ships to attacks by the Barbary pirates. The sole armed maritime presence between 1790 and the launching of the U.S. Navy's first warships in 1797 was the U.S. Revenue-Marine, the primary predecessor of the U.S. Coast Guard. Although the United States Revenue Cutter Service conducted operations against the pirates, the pirates' depredations far outstripped its abilities and Congress passed the Naval Act of 1794 that established a permanent standing navy on 27 March 1794. The Naval Act ordered the construction and manning of six frigates and, by October 1797, the first three were brought into service: USS United States, USS Constellation, and USS Constitution. Due to his strong advocacy of a standing navy during this period, John Adams is "often called the father of the American Navy". In 1798–99 the Navy was involved in an undeclared Quasi-War with France. From 1801 to 1805, in the First Barbary War, the U.S. Navy defended U.S. ships from the Barbary pirates, blockaded the Barbary ports and executed attacks against the Barbary fleets.

The U.S. Navy saw substantial action in the War of 1812, where it fought numerous engagements with the Royal Navy. It emerged victorious in the Battle of Lake Erie and prevented the region from becoming a threat to American operations in the area. The result was a major victory for the U.S. Army on the Niagara Frontier of the war, and the defeat of Tecumseh's confederacy at the Battle of the Thames.
Despite this, the U.S. Navy could not prevent the British from blockading its ports and landing troops. After the War of 1812 ended in 1815, the U.S. Navy primarily focused its attention on protecting American shipping assets, sending squadrons to the Caribbean, the Mediterranean (where it participated in the Second Barbary War that ended piracy in the region), South America, Africa, and the Pacific. From 1819 to the outbreak of the Civil War, the Africa Squadron operated to suppress the slave trade, seizing 36 slave ships, although its contribution was smaller than that of the much larger Royal Navy. After 1840 several secretaries of the navy were southerners who advocated for strengthening southern naval defenses, expanding the fleet, and making naval technological improvements.

During the Mexican–American War the U.S. Navy blockaded Mexican ports, capturing or burning the Mexican fleet in the Gulf of California and capturing all major cities of the Baja California peninsula. In 1846–1848 the Navy successfully used the Pacific Squadron under Commodore Robert F. Stockton and its marines and blue-jackets to facilitate the capture of California with large-scale land operations coordinated with the local militia organized in the California Battalion. The Navy conducted the U.S. military's first large-scale amphibious joint operation by successfully landing 12,000 army troops with their equipment in one day at Veracruz, Mexico. When larger guns were needed to bombard Veracruz, Navy volunteers landed large guns and manned them in the successful bombardment and capture of the city. This successful landing and capture of Veracruz opened the way for the capture of Mexico City and the end of the war. The U.S. Navy established itself as a player in United States foreign policy through the actions of Commodore Matthew C. Perry in Japan, which resulted in the Convention of Kanagawa in 1854.

Naval power played a significant role during the American Civil War, in which the Union had a distinct advantage over the Confederacy on the seas. A Union blockade on all major ports shut down exports and the coastal trade, but blockade runners provided a thin lifeline. The brown-water components of the U.S. Navy, through their control of the river systems, made internal travel difficult for Confederates and easy for the Union. The war saw ironclad warships in combat for the first time at the Battle of Hampton Roads in 1862, which pitted USS Monitor against CSS Virginia.

For two decades after the war, however, the U.S. Navy's fleet was neglected and became technologically obsolete. A modernization program began in the 1880s, when the first steel-hulled warships stimulated the American steel industry and "the new steel navy" was born. This rapid expansion of the U.S. Navy and its decisive victory over the outdated Spanish Navy in 1898 brought a new respect for American technical quality. Rapid building, at first of pre-dreadnoughts and then of dreadnoughts, brought the U.S. in line with the navies of countries such as Britain and Germany. In 1907, most of the Navy's battleships, with several support vessels, dubbed the Great White Fleet, were showcased in a 14-month circumnavigation of the world. Ordered by President Theodore Roosevelt, it was a mission designed to demonstrate the Navy's capability to extend to the global theater. By 1911, the U.S. had begun building the super-dreadnoughts at a pace to eventually become competitive with Britain.
The year 1911 also saw the Navy's first naval aircraft, which would lead to the informal establishment of the United States Naval Flying Corps to protect shore bases; it was not until 1921, however, that U.S. naval aviation truly commenced.

During World War I, the U.S. Navy spent much of its resources protecting and shipping hundreds of thousands of soldiers and marines of the American Expeditionary Force and war supplies across the Atlantic in U-boat-infested waters with the Cruiser and Transport Force. It also concentrated on laying the North Sea Mine Barrage. Hesitation by the senior command meant that naval forces were not contributed until late 1917. Battleship Division Nine was dispatched to Britain and served as the Sixth Battle Squadron of the British Grand Fleet. Its presence allowed the British to decommission some older ships and reuse the crews on smaller vessels. Destroyers and U.S. Naval Air Force units like the Northern Bombing Group contributed to the anti-submarine operations. The strength of the United States Navy grew under an ambitious ship building program associated with the Naval Act of 1916.

Naval construction, especially of battleships, was limited by the Washington Naval Conference of 1921–22, the first arms control conference in history. The aircraft carriers USS Saratoga (CV-3) and USS Lexington (CV-2) were built on the hulls of partially built battle cruisers that had been canceled by the treaty. The New Deal used Public Works Administration funds to build warships, such as USS Yorktown (CV-5) and USS Enterprise (CV-6). By 1936, with the completion of USS Wasp (CV-7), the U.S. Navy possessed a carrier fleet of 165,000 tonnes displacement, although this figure was nominally recorded as 135,000 tonnes to comply with treaty limitations. Franklin Roosevelt, the number two official in the Navy Department during World War I, appreciated the Navy and gave it strong support. In return, senior leaders were eager for innovation and experimented with new technologies, such as magnetic torpedoes, and developed a strategy called War Plan Orange for victory in the Pacific in a hypothetical war with Japan that would eventually become reality.

The U.S. Navy grew into a formidable force in the years prior to World War II, with battleship production being restarted in 1937, commencing with USS North Carolina (BB-55). Though ultimately unsuccessful, Japan tried to neutralize this strategic threat with the surprise attack on Pearl Harbor on 7 December 1941. Following American entry into the war, the U.S. Navy grew tremendously as the United States was faced with a two-front war on the seas. It achieved notable acclaim in the Pacific Theater, where it was instrumental to the Allies' successful "island hopping" campaign. The U.S. Navy participated in many significant battles, including the Battle of the Coral Sea, the Battle of Midway, the Solomon Islands Campaign, the Battle of the Philippine Sea, the Battle of Leyte Gulf, and the Battle of Okinawa. By 1943, the navy's size was larger than the combined fleets of all the other combatant nations in World War II. By war's end in 1945, the U.S. Navy had added hundreds of new ships, including 18 aircraft carriers and 8 battleships, and had over 70% of the world's total numbers and total tonnage of naval vessels of 1,000 tons or greater. At its peak, the U.S. Navy was operating 6,768 ships on V-J Day in August 1945. Doctrine had significantly shifted by the end of the war.
The U.S. Navy had followed in the footsteps of the navies of Great Britain and Germany, which favored concentrated groups of battleships as their main offensive naval weapons. The development of the aircraft carrier and its devastating use by the Japanese against the U.S. at Pearl Harbor, however, shifted U.S. thinking. The Pearl Harbor attack destroyed or took out of action a significant number of U.S. Navy battleships. This placed much of the burden of retaliating against the Japanese on the small number of aircraft carriers. During World War II some 4,000,000 Americans served in the United States Navy.

The potential for armed conflict with the Soviet Union during the Cold War pushed the U.S. Navy to continue its technological advancement by developing new weapons systems, ships, and aircraft. U.S. naval strategy changed to that of forward deployment in support of U.S. allies with an emphasis on carrier battle groups. The navy was a major participant in the Korean and Vietnam Wars, blockaded Cuba during the Cuban Missile Crisis, and, through the use of ballistic missile submarines, became an important aspect of the United States' nuclear strategic deterrence policy. The U.S. Navy conducted various combat operations in the Persian Gulf against Iran in 1987 and 1988, most notably Operation Praying Mantis. The Navy was extensively involved in Operation Urgent Fury, Operation Desert Shield, Operation Desert Storm, Operation Deliberate Force, Operation Allied Force, Operation Desert Fox, and Operation Southern Watch. The U.S. Navy has also been involved in search and rescue/search and salvage operations, sometimes in conjunction with vessels of other countries as well as with U.S. Coast Guard ships. Two examples are the 1966 Palomares B-52 crash incident and the subsequent search for missing hydrogen bombs, and Task Force 71 of the Seventh Fleet's operation in the search for Korean Air Lines Flight 007, shot down by the Soviets on 1 September 1983.

The U.S. Navy continues to be a major supporter of U.S. interests in the 21st century. Since the end of the Cold War, it has shifted its focus from preparations for large-scale war with the Soviet Union to special operations and strike missions in regional conflicts. The navy participated in Operation Enduring Freedom, Operation Iraqi Freedom, and is a major participant in the ongoing war on terror, largely in this capacity. Development continues on new ships and weapons, including the Gerald R. Ford-class aircraft carrier and the Littoral combat ship. Because of its size, weapons technology, and ability to project force far from U.S. shores, the current U.S. Navy remains an asset for the United States. Moreover, it is the principal means through which the U.S. maintains international global order, namely by safeguarding global trade and protecting allied nations.

In 2007, the U.S. Navy joined with the U.S. Marine Corps and U.S. Coast Guard to adopt a new maritime strategy called A Cooperative Strategy for 21st Century Seapower that raises the notion of prevention of war to the same philosophical level as the conduct of war. The strategy was presented by the Chief of Naval Operations, the Commandant of the Marine Corps, and the Commandant of the Coast Guard at the International Sea Power Symposium in Newport, Rhode Island on 17 October 2007. The strategy recognized the economic links of the global system and how any disruption due to regional crises (man-made or natural) can adversely impact the U.S. economy and quality of life.
This new strategy charts a course for the Navy, Coast Guard, and Marine Corps to work collectively with each other and international partners to prevent such crises from occurring or, should one occur, to react quickly to prevent negative impacts on the U.S. In 2010, Admiral Gary Roughead, Chief of Naval Operations, noted that demands on the Navy have grown as the fleet has shrunk and that in the face of declining budgets in the future, the U.S. Navy must rely even more on international partnerships.

In its 2013 budget request, the navy focused on retaining all eleven big-deck carriers, at the expense of cutting numbers of smaller ships and delaying the SSBN replacement. By the next year, the USN found itself unable to maintain eleven aircraft carriers in the face of the expiration of budget relief offered by the Bipartisan Budget Act of 2013, and CNO Jonathan Greenert said that a ten-ship carrier fleet would not be able to sustainably support military requirements. The British First Sea Lord George Zambellas said that the USN had switched from "outcome-led to resource-led" planning. One significant change in U.S. policymaking that is having a major effect on naval planning is the Pivot to East Asia. In response, the Secretary of the Navy Ray Mabus stated in 2015 that 60 percent of the total U.S. fleet will be deployed to the Pacific by 2020. The Navy's most recent 30-year shipbuilding plan, published in 2016, calls for a future fleet of 350 ships to meet the challenges of an increasingly competitive international environment. A provision of the 2018 National Defense Authorization Act called for expanding the naval fleet to 355 ships "as soon as practicable", but did not establish additional funding nor a timeline.

Organization
The U.S. Navy falls under the administration of the Department of the Navy, under civilian leadership of the Secretary of the Navy (SECNAV). The most senior naval officer is the Chief of Naval Operations (CNO), a four-star admiral who is immediately under and reports to the Secretary of the Navy. At the same time, the Chief of Naval Operations is a member of the Joint Chiefs of Staff (JCS), though the JCS plays only an advisory role to the President and does not nominally form part of the chain of command. The Secretary of the Navy and Chief of Naval Operations are responsible for organizing, recruiting, training, and equipping the Navy so that it is ready for operation under the commanders of the unified combatant commands. There are ten components in the operating forces of the U.S. Navy. Fleet Forces Command controls a number of unique capabilities, including the Naval Expeditionary Combat Command and the Naval Information Forces.

The United States Navy has seven active numbered fleets – the Second, Third, Fifth, Sixth, Seventh, and Tenth Fleets are each led by a vice admiral, and the Fourth Fleet is led by a rear admiral. These seven fleets are further grouped under Fleet Forces Command (the former Atlantic Fleet), Pacific Fleet, Naval Forces Europe-Africa, and Naval Forces Central Command, whose commander also doubles as Commander Fifth Fleet; the first three commands being led by four-star admirals. The United States First Fleet existed after World War II from 1947, but it was redesignated the Third Fleet in early 1973. The Second Fleet was deactivated in September 2011 but reestablished in August 2018 amid heightened tensions with Russia. It is headquartered in Norfolk, Virginia, with responsibility over the East Coast and North Atlantic.
In early 2008, the Navy reactivated the Fourth Fleet to control operations in the area of responsibility of Southern Command, which consists of US assets in and around Central and South America. Other numbered fleets were activated during World War II and later deactivated, renumbered, or merged.

Shore establishments exist to support the mission of the fleet through the use of facilities on land. Among the commands of the shore establishment, as of April 2011, are the Naval Education and Training Command, the U.S. Fleet Cyber Command, the Navy Space Command, the Navy Installations Command, the Naval Meteorology and Oceanography Command, the Naval Information Warfare Systems Command, the Naval Facilities Engineering Command, the Naval Supply Systems Command, the Naval Air Systems Command, the Naval Sea Systems Command, the Bureau of Medicine and Surgery, the Bureau of Naval Personnel, the Office of Naval Research, the Office of Naval Intelligence, the United States Naval Academy, the Naval Safety Command, the Naval Aviation Warfighting Development Center, and the United States Naval Observatory. Official Navy websites list the Office of the Chief of Naval Operations and the Chief of Naval Operations as part of the shore establishment, but these two entities effectively sit superior to the other organizations, playing a coordinating role.

In 1834, the United States Marine Corps came under the Department of the Navy. Historically, the Navy has had a unique relationship with the USMC, partly because they both specialize in seaborne operations. Together the Navy and Marine Corps form the Department of the Navy and report to the Secretary of the Navy. However, the Marine Corps is a distinct, separate service branch with its own uniformed service chief – the Commandant of the Marine Corps, a four-star general. The Marine Corps depends on the Navy for medical support (dentists, doctors, nurses, medical technicians known as corpsmen) and religious support (chaplains). Thus, Navy officers and enlisted sailors fulfill these roles. When attached to Marine Corps units deployed to an operational environment they generally wear Marine camouflage uniforms, but otherwise, they wear Navy dress uniforms unless they opt to conform to Marine Corps grooming standards.

In the operational environment, as an expeditionary force specializing in amphibious operations, Marines often embark on Navy ships to conduct operations from beyond territorial waters. Marine units deploying as part of a Marine Air-Ground Task Force (MAGTF) operate under the command of the existing Marine chain of command. Although Marine units routinely operate from amphibious assault ships, the relationship has evolved over the years, much as the Commander of the Carrier Air Group/Wing (CAG) does not work for the carrier commanding officer, but coordinates with the ship's CO and staff. Some Marine aviation squadrons, usually fixed-wing squadrons assigned to carrier air wings, train and operate alongside Navy squadrons; they fly similar missions and often fly sorties together under the cognizance of the CAG. Aviation is where the Navy and Marines share the most common ground since aircrews are guided in their use of aircraft by standard procedures outlined in a series of publications known as NATOPS manuals.

The United States Coast Guard, in its peacetime role with the Department of Homeland Security, fulfills its law enforcement and rescue role in the maritime environment.
It provides Law Enforcement Detachments (LEDETs) to Navy vessels, where they perform arrests and other law enforcement duties during naval boarding and interdiction missions. In times of war, the Coast Guard may be called upon to operate as a service within the Navy. At other times, Coast Guard Port Security Units are sent overseas to guard the security of ports and other assets. The Coast Guard also jointly staffs the Navy's naval coastal warfare groups and squadrons (the latter of which were known as harbor defense commands until late 2004), which oversee defense efforts in foreign littoral combat and inshore areas.

Personnel
The United States Navy has over 400,000 personnel, approximately a quarter of whom are in ready reserve. Of those on active duty, more than eighty percent are enlisted sailors and around fifteen percent are commissioned officers; the rest are midshipmen of the United States Naval Academy and midshipmen of the Naval Reserve Officer Training Corps at over 180 universities around the country, and officer candidates at the Navy's Officer Candidate School.

Enlisted sailors complete basic military training at boot camp and then are sent to complete training for their individual careers. Sailors prove they have mastered skills and deserve responsibilities by completing Personnel Qualification Standards (PQS) tasks and examinations. Among the most important is the "warfare qualification", which denotes a journeyman level of capability in Surface Warfare, Aviation Warfare, Information Dominance Warfare, Naval Aircrew, Special Warfare, Seabee Warfare, Submarine Warfare or Expeditionary Warfare. Many qualifications are denoted on a sailor's uniform with U.S. Navy badges and insignia. The uniforms of the U.S. Navy have evolved gradually since the first uniform regulations for officers were issued in 1802 on the formation of the Navy Department. The predominant colors of U.S. Navy uniforms are navy blue and white. U.S. Navy uniforms were based on Royal Navy uniforms of the time and have tended to follow that template.

Navy officers serve either as a line officer or as a staff corps officer. Line officers wear an embroidered gold star above the rank insignia of the naval service dress uniform, while staff corps officers and commissioned warrant officers wear unique designator insignia that denote their occupational specialty. Warrant and chief warrant officer ranks are held by technical specialists who direct specific activities essential to the proper operation of the ship, which also require commissioned officer authority. Navy warrant officers serve in 30 specialties covering five categories. Warrant officers should not be confused with the limited duty officer (LDO) in the Navy. Warrant officers perform duties that are directly related to their previous enlisted service and specialized training. This allows the Navy to capitalize on the experience of warrant officers without having to frequently transition them to other duty assignments for advancement. Most Navy warrant officers are accessed from the chief petty officer pay grades, E-7 through E-9, analogous to a senior non-commissioned officer in the other services, and must have a minimum of 14 years in service.

Sailors in pay grades E-1 through E-3 are considered to be in apprenticeships. They are divided into five definable groups, with colored group rate marks designating the group to which they belong: Seaman, Fireman, Airman, Constructionman, and Hospitalman.
E-4 to E-6 are non-commissioned officers (NCOs), specifically called petty officers in the Navy. Petty officers perform not only the duties of their specific career field but also serve as leaders to junior enlisted personnel. E-7 to E-9 are still considered petty officers, but are considered a separate community within the Navy. They have separate berthing and dining facilities (where feasible), wear separate uniforms, and perform separate duties. After attaining the rate of Master Chief Petty Officer, a service member may choose to further their career by becoming a Command Master Chief Petty Officer (CMC). A CMC is considered to be the senior-most enlisted service member within a command, and is the special assistant to the Commanding Officer in all matters pertaining to the health, welfare, job satisfaction, morale, use, advancement and training of the command's enlisted personnel. CMCs can be Command level (within a single unit, such as a ship or shore station), Fleet level (squadrons consisting of multiple operational units, headed by a flag officer or commodore), or Force level (consisting of a separate community within the Navy, such as Subsurface, Air, Reserves). CMC insignia are similar to the insignia for Master Chief, except that the rating symbol is replaced by an inverted five-point star, reflecting a change in their rating from their previous rating (i.e., MMCM) to CMDCM. The stars for Command Master Chief are silver, while the stars for Fleet and Force are gold. Additionally, CMCs wear a badge on their left breast pocket denoting their title (Command/Fleet/Force).

Insignia and badges of the United States Navy are military "badges" issued by the Department of the Navy to naval service members who achieve certain qualifications and accomplishments while serving on both active and reserve duty in the United States Navy. Most naval aviation insignia are also permitted for wear on uniforms of the United States Marine Corps. As described in Chapter 5 of U.S. Navy Uniform Regulations, "badges" are categorized as breast insignia (usually worn immediately above and below ribbons) and identification badges (usually worn at breast pocket level). Breast insignia are further divided between command and warfare and other qualification. Insignia come in the form of metal "pin-on devices" worn on formal uniforms and embroidered "tape strips" worn on work uniforms. For the purpose of this article, the general term "insignia" shall be used to describe both, as it is done in Navy Uniform Regulations. The term "badge", although used ambiguously in other military branches and in informal speech to describe any pin, patch, or tab, is exclusive to identification badges and authorized marksmanship awards according to the language in Navy Uniform Regulations, Chapter 5.

Bases
The size, complexity, and international presence of the United States Navy requires a large number of navy installations to support its operations. While the majority of bases are located inside the United States itself, the Navy maintains a significant number of facilities abroad, either in U.S.-controlled territories or in foreign countries under a Status of Forces Agreement (SOFA). The second largest concentration of installations is at Hampton Roads, Virginia, where the navy occupies over 36,000 acres (15,000 ha) of land.
Located at Hampton Roads are Naval Station Norfolk, homeport of the Atlantic Fleet; Naval Air Station Oceana, a master jet base; Naval Amphibious Base Little Creek; and Training Support Center Hampton Roads, as well as a number of Navy and commercial shipyards that service navy vessels. The Aegis Training and Readiness Center is located at the Naval Support Activity South Potomac in Dahlgren, Virginia. Maryland is home to NAS Patuxent River, which houses the Navy's Test Pilot School. Also located in Maryland is the United States Naval Academy, situated in Annapolis. NS Newport in Newport, Rhode Island is home to many schools and tenant commands, including the Officer Candidate School and the Naval Undersea Warfare Center, and also maintains inactive ships. There is also a naval base in Charleston, South Carolina, home to the Naval Nuclear Power Training Command, under which reside the Nuclear Field "A" Schools (for Machinist's Mates (Nuclear), Electrician's Mates (Nuclear), and Electronics Technicians (Nuclear)), Nuclear Power School (Officer and Enlisted), and one of two Nuclear Power Training Unit "Prototype" schools.

The state of Florida is the location of three major bases: NS Mayport, the Navy's fourth largest, in Jacksonville, Florida; NAS Jacksonville, a Master Air Anti-submarine Warfare base; and NAS Pensacola, home of the Naval Education and Training Command and of the Naval Air Technical Training Center, which provides specialty training for enlisted aviation personnel, and the primary flight training base for Navy and Marine Corps Naval Flight Officers and enlisted Naval Aircrewmen. There is also NSA Panama City, Florida, which is home to the Center for Explosive Ordnance Disposal and Diving (CENEODIVE) and the Navy Diving and Salvage Training Center, and NSA Orlando, Florida, which is home to the Naval Air Warfare Center Training Systems Division (NAWCTSD). The main U.S. Navy submarine bases on the east coast are Naval Submarine Base New London in Groton, Connecticut, and NSB Kings Bay in Kings Bay, Georgia. The Portsmouth Naval Shipyard near Portsmouth, New Hampshire, repairs naval submarines. NS Great Lakes, north of Chicago, Illinois, is the home of the Navy's boot camp for enlisted sailors. The Washington Navy Yard in Washington, D.C., is the Navy's oldest shore establishment and serves as a ceremonial and administrative center for the U.S. Navy, home to the chief of naval operations and numerous commands.

The U.S. Navy's largest complex is Naval Air Weapons Station China Lake, California, which covers 1.1 million acres (4,500 km2) of land, or approximately one-third of the U.S. Navy's total land holdings. Naval Base San Diego, California is the main homeport of the Pacific Fleet, although its headquarters is located in Pearl Harbor, Hawaii. NAS North Island is located on the north side of Coronado, California, and is home to Headquarters for Naval Air Forces and Naval Air Force Pacific, the bulk of the Pacific Fleet's helicopter squadrons, and part of the West Coast aircraft carrier fleet. NAB Coronado is located on the southern end of Coronado Island and is home to the navy's west coast SEAL teams and special boat units. NAB Coronado is also home to the Naval Special Warfare Center, the primary training center for SEALs. The other major collection of naval bases on the west coast is in Puget Sound, Washington. Among them, NS Everett is one of the newer bases and the navy states that it is its most modern facility.
NAS Fallon, Nevada serves as the primary training ground for navy strike aircrews and is home to the Naval Strike Air Warfare Center. Master jet bases are also located at NAS Lemoore, California, and NAS Whidbey Island, Washington, while the carrier-based airborne early warning aircraft community and major air test activities are located at NAS Point Mugu, California. The naval presence in Hawaii is centered on NS Pearl Harbor, which hosts the headquarters of the Pacific Fleet and many of its subordinate commands. Guam, an island strategically located in the Western Pacific Ocean, maintains a sizable U.S. Navy presence, including NB Guam. The westernmost U.S. territory, it contains a natural deepwater harbor capable of harboring aircraft carriers in emergencies. Its naval air station was deactivated in 1995 and its flight activities transferred to nearby Andersen Air Force Base. Puerto Rico in the Caribbean formerly housed NS Roosevelt Roads, which was shut down in 2004 shortly after the controversial closure of the live ordnance training area on nearby Vieques Island.

The largest overseas base is the United States Fleet Activities Yokosuka, Japan, which serves as the home port for the navy's largest forward-deployed fleet and is a significant base of operations in the Western Pacific. European operations revolve around facilities in Italy (NAS Sigonella and Naval Computer and Telecommunications Station Naples), with NSA Naples as the homeport for the Sixth Fleet and Command Naval Region Europe, Africa, Southwest Asia (CNREURAFSWA), and additional facilities in nearby Gaeta. There is also NS Rota in Spain and NSA Souda Bay in Greece. In the Middle East, naval facilities are located almost exclusively in countries bordering the Persian Gulf, with NSA Bahrain serving as the headquarters of U.S. Naval Forces Central Command and U.S. Fifth Fleet. NS Guantanamo Bay in Cuba is the oldest overseas facility and has become known in recent years as the location of a detention camp for suspected al-Qaeda operatives.

Equipment
As of 2018, the navy operates over 460 ships (including vessels operated by the Military Sealift Command), 3,650+ aircraft, and 50,000 non-combat vehicles, and owns 75,200 buildings on 3,300,000 acres (13,000 km2). The names of commissioned ships of the U.S. Navy are prefixed with the letters "USS", designating "United States Ship". Non-commissioned, civilian-manned vessels of the navy have names that begin with "USNS", standing for "United States Naval Ship". The names of ships are officially selected by the secretary of the navy, often to honor important people or places. Additionally, each ship is given a letter-based hull classification symbol (for example, CVN or DDG) to indicate the vessel's type and number. All ships in the navy inventory are placed in the Naval Vessel Register, which is part of "the Navy List" (required by article 29 of the United Nations Convention on the Law of the Sea). The register tracks data such as the current status of a ship, the date of its commissioning, and the date of its decommissioning. Vessels that are removed from the register prior to disposal are said to be stricken from the register. The navy also maintains a reserve fleet of inactive vessels that are maintained for reactivation in times of need. The U.S. Navy was one of the first to install nuclear reactors aboard naval vessels. Today, nuclear energy powers all active U.S. aircraft carriers and submarines.
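The hull classification scheme lends itself to a simple illustration. The Python sketch below is illustrative only; it covers a small, hand-picked subset of the many symbols in use, and the function name is invented. It splits a designation such as "CVN-68" into its ship type and hull number:

# Illustrative subset of hull classification symbols; the full list is much longer.
HULL_TYPES = {
    "CVN": "aircraft carrier, nuclear-powered",
    "DDG": "guided missile destroyer",
    "CG": "guided missile cruiser",
    "LHA": "amphibious assault ship",
    "SSN": "attack submarine, nuclear-powered",
    "SSBN": "ballistic missile submarine, nuclear-powered",
}

def parse_hull_symbol(designation: str) -> tuple[str, int]:
    """Split a designation like 'CVN-68' into its type and hull number."""
    symbol, number = designation.split("-")
    if symbol not in HULL_TYPES:
        raise ValueError(f"unknown hull classification symbol: {symbol}")
    return HULL_TYPES[symbol], int(number)

print(parse_hull_symbol("CVN-68"))  # ('aircraft carrier, nuclear-powered', 68)
print(parse_hull_symbol("DDG-51"))  # ('guided missile destroyer', 51)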
In early 2010, the U.S. Navy had identified a need for 313 combat ships but could only afford 232 to 243 ships. In March 2014, the Navy started counting self-deployable support ships such as minesweepers, surveillance craft, and tugs in the "battle fleet" to reach a count of 272 as of October 2016, and it includes ships that have been put in "shrink wrap". The number of ships generally ranged between 270 and 300 throughout the late 2010s. As of February 2022, the Navy has 296 battle force ships; however, analyses state the Navy needs a fleet of more than 500 to meet its commitments.

Aircraft carriers act as airbases for carrier-based aircraft. They are the largest vessels in the Navy fleet and all are nuclear-powered. An aircraft carrier is typically deployed along with a host of additional vessels, forming a carrier strike group. The supporting ships, which usually include three or four Aegis-equipped cruisers and destroyers, a frigate, and two attack submarines, are tasked with protecting the carrier from air, missile, sea, and undersea threats as well as providing additional strike capabilities themselves. Ready logistics support for the group is provided by a combined ammunition, oiler, and supply ship. Modern carriers are named after American admirals and politicians, usually presidents. The Navy has a statutory requirement for a minimum of 11 aircraft carriers. All 11 carriers are currently active, ten Nimitz-class and one Gerald R. Ford-class. An aircraft carrier can house some 5,000 people, the population of a small town, and can embark up to 90 aircraft at one time.

Amphibious assault ships are the centerpieces of US amphibious warfare and fulfill the same power projection role as aircraft carriers except that their striking force centers on land forces instead of aircraft. They deliver, command, coordinate, and fully support all elements of a 2,200-strong Marine Expeditionary Unit in an amphibious assault using both air and amphibious vehicles. Resembling small aircraft carriers, amphibious assault ships are capable of V/STOL, STOVL, VTOL, tiltrotor, and rotary wing aircraft operations. They also contain a well deck to support the use of Landing Craft Air Cushion (LCAC) and other amphibious assault watercraft. Recently, amphibious assault ships have begun to be deployed as the core of an expeditionary strike group, which usually consists of an additional amphibious transport dock and dock landing ship for amphibious warfare and an Aegis-equipped cruiser and destroyer, frigate, and attack submarine for group defense. Amphibious assault ships are typically named after World War II aircraft carriers.

Amphibious transport docks are warships that embark, transport, and land Marines, supplies, and equipment in a supporting role during amphibious warfare missions. With a landing platform, amphibious transport docks also have the capability to serve as secondary aviation support for an expeditionary group. All amphibious transport docks can operate helicopters, LCACs, and other conventional amphibious vehicles, while the newer San Antonio class of ships has been explicitly designed to operate all three elements of the Marines' "mobility triad": Expeditionary Fighting Vehicles (EFVs), the V-22 Osprey tiltrotor aircraft, and LCACs. Amphibious transport docks are typically named after U.S. cities.
The dock landing ship is a medium amphibious transport that is designed specifically to support and operate LCACs, though it is able to operate other amphibious assault vehicles in the United States inventory as well. Dock landing ships are normally deployed as a component of an expeditionary strike group's amphibious assault contingent, operating as a secondary launch platform for LCACs. All dock landing ships are named after cities or important places in U.S. and U.S. Naval history. The Navy operates 32 amphibious warfare ships: eight Wasp class and two America class amphibious assault ships, four Harpers Ferry class and six Whidbey Island class dock landing ships, and 12 San Antonio class amphibious transport dock ships.

Cruisers are large surface combat vessels that conduct anti-air/anti-missile warfare, surface warfare, anti-submarine warfare, and strike operations independently or as members of a larger task force. Modern guided missile cruisers were developed out of a need to counter the anti-ship missile threat facing the United States Navy. This led to the development of the AN/SPY-1 phased array radar and the RIM-67 Standard missile, with the Aegis combat system coordinating the two. Ticonderoga-class cruisers were the first to be equipped with Aegis and were put to use primarily as anti-air and anti-missile defense in a battle force protection role. Later developments of vertical launch systems and the Tomahawk missile gave cruisers additional long-range land and sea strike capability, making them capable of both offensive and defensive battle operations. The Ticonderoga class is the only active class of cruiser. All cruisers in this class are named after battles.

Destroyers are multi-mission medium surface ships capable of sustained performance in anti-air, anti-submarine, anti-ship, and offensive strike operations. Like cruisers, guided missile destroyers are primarily focused on surface strikes using Tomahawk missiles and fleet defense through Aegis and the Standard missile. Destroyers additionally specialize in anti-submarine warfare and are equipped with VLA rockets and LAMPS Mk III Sea Hawk helicopters to deal with underwater threats. When deployed with a carrier strike group or expeditionary strike group, destroyers and their fellow Aegis-equipped cruisers are primarily tasked with defending the fleet while providing secondary strike capabilities. With very few exceptions, destroyers are named after U.S. Navy, Marine Corps, and Coast Guard heroes. The U.S. Navy currently has 75 destroyers: 73 Arleigh Burke-class destroyers and two Zumwalt-class stealth destroyers, with a third (the USS Lyndon B. Johnson) expected to enter service sometime in 2024.

Modern U.S. frigates mainly perform anti-submarine warfare for carrier and expeditionary strike groups and provide armed escort for supply convoys and merchant shipping. They are designed to protect friendly ships against hostile submarines in low to medium threat environments, using torpedoes and LAMPS helicopters. Independently, frigates are able to conduct counterdrug missions and other maritime interception operations. As in the case of destroyers, frigates are named after U.S. Navy, Marine Corps, and Coast Guard heroes. In late 2015, the U.S. Navy retired its most recent class of traditional frigates in favor of the littoral combat ship (LCS), relatively small vessels designed for near-shore operations that were expected to assume many of the duties the frigate had performed with the fleet.
The LCS was "envisioned to be a networked, agile, stealthy surface combatant capable of defeating anti-access and asymmetric threats in the littorals", although the ships' ability to perform these missions in practice has been called into question. The Navy has announced it plans to reduce procurement of the LCS and retire early examples of the type. The Navy had planned to purchase up to 20 Constellation-class frigates, but 18 were cancelled, with the two already under construction to be completed. The Constellation class is based on the FREMM multipurpose frigate, already in service with European navies. As of 2025, the U.S. Navy has 23 littoral combat ships: eight Freedom-class and 15 Independence-class ships. Mine countermeasures (MCM) vessels combine the roles of minehunters, which actively detect and destroy individual naval mines, and minesweepers, which clear mined areas as a whole without prior detection of the mines. MCM vessels have mostly legacy names of previous US Navy ships, especially World War II-era minesweepers. The Navy operates eight Avenger-class mine countermeasures ships, with four expected to be retired in 2024. The U.S. Navy operates three types of submarines: attack submarines, ballistic missile submarines and guided missile submarines. All current and planned U.S. Navy submarines are nuclear-powered, as nuclear propulsion allows for a combination of stealth and long-duration, high-speed, sustained underwater movement. Attack submarines typically operate as part of a carrier battle group, while guided missile submarines generally operate independently and carry larger quantities of cruise missiles. Both types have several tactical missions, including sinking ships and other subs, launching cruise missiles, gathering intelligence, and assisting in special operations. Ballistic missile submarines operate independently with only one mission: to carry and, if called upon, to launch the Trident nuclear missile. The Navy operates 69 submarines: 29 Los Angeles class attack submarines (with two more in reserve), 18 Ohio class submarines with 14 configured as ballistic missile submarines and four configured as guided missile submarines, three Seawolf class attack submarines, and 19 Virginia class attack submarines. A special case is the USS Constitution, commissioned in 1797 as one of the original six frigates of the United States Navy, which remains in commission at the Charlestown Navy Yard in Boston. She occasionally sails for commemorative events such as Independence Day. The Navy operates a class of small tugboats called barrier boats. Carrier-based aircraft are able to strike air, sea, and land targets far from a carrier strike group while protecting friendly forces from enemy aircraft, ships, and submarines. In peacetime, the ability of carrier aircraft to project the threat of sustained attack from a mobile platform at sea gives United States leaders significant diplomatic and crisis-management options. Aircraft additionally provide logistics support to maintain the navy's readiness and, through helicopters, supply platforms with which to conduct search and rescue, special operations, anti-submarine warfare (ASW), and anti-surface warfare, including the U.S. Navy's premier maritime strike and only organic ASW aircraft, the venerable Sikorsky MH-60R, operated by the Helicopter Maritime Strike Wing. The U.S. Navy began to research the use of aircraft at sea in the 1910s, with Lieutenant Theodore G.
"Spuds" Ellyson becoming the first naval aviator on 28 January 1911, and commissioned its first aircraft carrier, USS Langley (CV-1), in 1922. United States naval aviation fully came of age in World War II, when it became clear following the attack on Pearl Harbor, the Battle of the Coral Sea, and the Battle of Midway that aircraft carriers and the planes that they carried had replaced the battleship as the greatest weapon on the seas. Leading navy aircraft in World War II included the Grumman F4F Wildcat, the Grumman F6F Hellcat, the Chance Vought F4U Corsair, the Douglas SBD Dauntless, and the Grumman TBF Avenger. Navy aircraft also played a significant role in conflicts during the following Cold War years, with the F-4 Phantom II and the F-14 Tomcat becoming military icons of the era. The navy's current primary fighter-attack airplane is the multi-mission F/A-18E/F Super Hornet. The F-35C entered service in 2019. The Navy is also looking to eventually replace its F/A-18E/F Super Hornets with the F/A-XX program. The Aircraft Investment Plan sees naval aviation growing from 30 percent of current aviation forces to half of all procurement funding over the next three decades. Current U.S. Navy shipboard weapons systems are almost entirely focused on missiles, both as a weapon and as a threat. In an offensive role, missiles are intended to strike targets at long distances with accuracy and precision. Because they are unmanned weapons, missiles allow for attacks on heavily defended targets without risk to human pilots. Land strikes are the domain of the BGM-109 Tomahawk, which was first deployed in the 1980s and is continually being updated to increase its capabilities. For anti-ship strikes, the navy's dedicated missile is the Harpoon Missile. To defend against enemy missile attack, the navy operates a number of systems that are all coordinated by the Aegis combat system. Medium-long range defense is provided by the Standard Missile 2, which has been deployed since the 1980s. The Standard missile doubles as the primary shipboard anti-aircraft weapon and is undergoing development for use in theater ballistic missile defense. Short range defense against missiles is provided by the Phalanx CIWS and the more recently developed RIM-162 Evolved Sea Sparrow Missile. In addition to missiles, the navy employs Mark 46, Mark 48, and Mark 50 torpedoes and various types of naval mines. Naval fixed-wing aircraft employ much of the same weapons as the United States Air Force for both air-to-air and air-to-surface combat. Air engagements are handled by the heat-seeking Sidewinder and the radar guided AMRAAM missiles along with the M61 Vulcan cannon for close range dogfighting. For surface strikes, navy aircraft use a combination of missiles, smart bombs, and dumb bombs. On the list of available missiles are the Maverick, SLAM-ER and JSOW. Smart bombs include the GPS-guided JDAM and the laser-guided Paveway series. Unguided munitions such as dumb bombs and cluster bombs make up the rest of the weapons deployed by fixed-wing aircraft. Rotary aircraft weapons are focused on anti-submarine warfare (ASW) and light to medium surface engagements. To combat submarines, helicopters use Mark 46 and Mark 50 torpedoes. Against small watercraft, they use Hellfire and Penguin air to surface missiles. Helicopters also employ various types of mounted anti-personnel machine guns, including the M60, M240, GAU-16/A, and GAU-17/A. Nuclear weapons in the U.S. 
Navy arsenal are deployed through ballistic missile submarines and aircraft. The Ohio-class submarine carries the latest iteration of the Trident missile, a three-stage, submarine-launched ballistic missile (SLBM) with MIRV capability; the current Trident II (D5) version is expected to be in service past 2020. The navy's other nuclear weapon is the air-deployed B61 nuclear bomb. The B61 is a thermonuclear device that can be dropped by strike aircraft such as the F/A-18 Hornet and Super Hornet at high speed from a large range of altitudes. It can be released through free-fall or parachute and can be set to detonate in the air or on the ground. Naval jack The current naval jack of the United States is the Union Jack, a small blue flag emblazoned with the stars of the 50 states. The Union Jack was not flown for the duration of the war on terror, during which Secretary of the Navy Gordon R. England directed all U.S. naval ships to fly the First Navy Jack. While Secretary England directed the change on 31 May 2002, many ships chose to shift colors later that year in remembrance of the first anniversary of the September 11, 2001 attacks. The Union Jack, however, remained in use with vessels of the U.S. Coast Guard and National Oceanic and Atmospheric Administration. A jack of similar design to the Union Jack was used in 1794, with 13 stars arranged in a 3–2–3–2–3 pattern. When a ship is moored or anchored, the jack is flown from the bow of the ship while the ensign is flown from the stern. When underway, the ensign is raised on the mainmast. Before the decision for all ships to fly the First Navy Jack, it was flown only on the oldest ship in the active American fleet, which is currently USS Blue Ridge. U.S. Navy ships and craft returned to flying the Union Jack effective 4 June 2019. The date for reintroduction of the jack commemorates the Battle of Midway, which began on 4 June 1942. Notable sailors Many past and present United States historical figures have served in the U.S. Navy. Notable officers include six future presidents: the first American President who served in the U.S. Navy was John F. Kennedy (who commanded the famous PT-109 in World War II); he was then followed by Lyndon B. Johnson, Richard Nixon, Gerald Ford, Jimmy Carter, and George H. W. Bush. Some notable former members of the Navy include U.S. Senators Bob Kerrey, John McCain, and John Kerry, along with Ron DeSantis, Governor of Florida, and Jesse Ventura, Governor of Minnesota. Notable former members of the U.S. Navy include astronauts (Alan B. Shepard, Walter M. Schirra, Neil Armstrong, John Young, Michael J. Smith, Scott Kelly), entertainers (Johnny Carson, Mike Douglas, Paul Newman, Robert Stack, Humphrey Bogart, Tony Curtis, Jack Lemmon, Jack Benny, Don Rickles, Ernest Borgnine, Harry Belafonte, Henry Fonda, Fred Gwynne), authors (Robert Heinlein, Marcus Luttrell, Thomas Pynchon, Brandon Webb), musicians (John Philip Sousa, MC Hammer, John Coltrane, Zach Bryan, Fred Durst), professional athletes (David Robinson, Bill Sharman, Roger Staubach, Joe Bellino, Bob Kuberski, Nile Kinnick, Bob Feller, Yogi Berra, Larry Doby, Stan Musial, Pee Wee Reese, Phil Rizzuto, Jack Taylor), business people (John S. Barry, Jack C. Taylor, Paul A. Sperry), and computer scientists (Grace Hopper). Naval post offices During World War I, the first U.S. government post offices were established aboard Navy ships, managed by a Navy postal clerk.
Prior to this, mail from crew members was collected and, at the first opportunity, dropped off at a port of call, where it was processed by a U.S. Post Office. Before the arrival of email and the internet, hand-stamped mail was the only way Navy crew members at sea could communicate with their family, friends and others. Mail was considered almost as valuable to crew members as food and ammunition. Sometimes mail from various crew members (referred to by historians and collectors as postal history) is directly associated with naval history. Letters and other correspondence sent by commanders, officers and crew members can include names, ranks, signatures, addresses, and ship's postmarks, which can often confirm dates and locations of naval ships and crew members during various battles or other naval operations. As such, naval mail can serve as a source of information to naval historians and biographers. Among the more notable examples of naval postal history are letters sent from the USS Arizona before and on 7 December 1941.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Biomass] | [TOKENS: 562]
Contents Biomass Biomass is material produced by the growth of microorganisms, plants or animals. Beyond this general definition, there are differences in how the term is used and applied depending on industry or subject-matter norms. For example, it may be more narrowly defined as just plant matter, or as a combination of plant and animal matter. Composition The composition of a specific source of biomass depends on whether it is derived from plants, animals, microorganisms, or some mixture of all biological matter. Biomass may also contain material of non-biological origin, due to contamination from anthropogenic activities. The table below summarizes the main types of biomass and their typical sources. The composition of biomass on a chemical level is determined by whether it is plant or animal matter. Biomass in energy and conversion processes Biomass contains large amounts of renewable energy, making it a viable source of fuel and a range of refined products, either as-is or after a series of conversion steps. One notable example is the production of bioethanol. There is a general classification of biomass that is produced or sourced for conversion processes. Biofuels such as bioethanol and biodiesel, as well as bioplastics, are typically derived from primary or “first-generation” sources: energy-dense plants and oils such as rapeseed, sugarcane, or corn. Their high content of sugars and oils makes them ideal as feedstocks; however, there are drawbacks to their use. As well as inflating the price of the chosen crop due to increased demand, arable land that would otherwise be used to grow food for human and animal consumption is rendered unavailable. Secondary or “second-generation” biomass encompasses a much wider variety of plant and animal matter. It may be derived from a relatively pure source, such as wood chippings or grass, or it may be a less well-defined solid waste stream. This type of biomass is far more challenging to work with, as it contains a more varied mixture of compounds that cannot be easily converted into useful products. Despite this, there continues to be intensive research and industry interest in second-generation biomass conversion processes, due to their potential to recover valuable products and derivatives that would otherwise be lost to incineration. Biomass in ecology In ecological studies, biomass refers to the total mass of biological organisms present in a given environment or ecosystem. It may encompass the entirety of biological matter, or a subset of species or individuals. It is typically expressed as the total mass of carbon contained within the chosen group of organisms. A 2017 estimate put the total biomass of the biosphere at approximately 550 gigatons of carbon (Gt C), with a significant majority of this being terrestrial plants (approx. 450 Gt C).
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-39] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first automatic digital calculating machines were developed during World War II, some electromechanical and others using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law observed that counts doubled roughly every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system of Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in Berlin in 1941 as the first company devoted solely to developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. As with the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs; changing a machine's function required rewiring and restructuring the machine. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits, as in the sketch following this passage. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include keyboards, mice, and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
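The gate arrangement just described can be made concrete with a short sketch. The following Python fragment is purely illustrative (the gate functions and the textbook adder construction are chosen for demonstration, not specified by the article); it shows how two or three gate types combine into an adder, the kind of building block an ALU is made of:

    # Basic gates modeled as functions on single bits (0 or 1).
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        # XOR produces the sum bit, AND produces the carry bit.
        return XOR(a, b), AND(a, b)

    def full_adder(a, b, carry_in):
        # Two half adders plus an OR gate add three bits.
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, OR(c1, c2)

    def add_bits(x_bits, y_bits):
        # Ripple-carry addition; bit lists are least-significant bit first.
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    # 5 (1010, LSB first) + 3 (1100, LSB first) = 8 (0001, LSB first)
    print(add_bits([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 0, 0, 1], 0)

Every arithmetic operation a real ALU performs is ultimately built from gate networks of this kind, realized as circuits rather than as functions.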
Examples include monitors and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): fetch the next instruction from the cell indicated by the program counter, decode it, increment the program counter, and issue the control signals that cause the rest of the computer to carry the instruction out. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
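The cell-based view of memory and the program counter arithmetic described above can be illustrated with a toy stored-program machine in Python. This is an invented sketch, not a real instruction set; the three instructions and the cell numbers are arbitrary:

    # Toy stored-program machine: memory is a set of numbered cells, and
    # the program counter (pc) names the cell holding the next instruction.
    # Code and data share the same memory.
    memory = {
        0: ("add", 100, 101, 102),    # cell[102] = cell[100] + cell[101]
        1: ("jump_if_zero", 102, 0),  # if cell[102] == 0, set pc back to 0
        2: ("halt",),
        100: 2, 101: 3, 102: 0,       # data cells
    }

    pc = 0
    while True:
        instruction = memory[pc]      # fetch the cell the program counter names
        pc += 1                       # by default, step to the next cell
        op = instruction[0]           # decode
        if op == "add":               # execute
            _, a, b, dest = instruction
            memory[dest] = memory[a] + memory[b]
        elif op == "jump_if_zero":    # a jump is just a write to the pc
            _, cond, target = instruction
            if memory[cond] == 0:
                pc = target
        elif op == "halt":
            break

    print(memory[102])  # 5

A real CPU runs the same fetch, decode, execute cycle in hardware, billions of times per second.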
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
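The byte and two's complement conventions described above are easy to check directly. The following Python sketch is illustrative only; the struct module is used here just to reinterpret a bit pattern both ways:

    import struct

    # One byte holds 2^8 = 256 bit patterns. The same pattern can be read
    # as an unsigned number (0 to 255) or as a signed two's complement
    # number (-128 to +127).
    pattern = 0b10000001                 # the bit pattern 1000 0001
    as_unsigned = pattern                # 129
    as_signed = struct.unpack("b", struct.pack("B", pattern))[0]
    print(as_unsigned, as_signed)        # 129 -127

    # Two's complement negation: invert the bits and add one.
    def negate(value, bits=8):
        return ((~value) + 1) & ((1 << bits) - 1)

    print(negate(5))                     # 251, the unsigned reading of -5

    # Larger numbers occupy several consecutive bytes, e.g. four bytes
    # for a 32-bit signed integer (shown little-endian here):
    print(struct.pack("<i", -5).hex())   # fbffffff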
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
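The time-sharing scheme described above can be sketched with Python generators standing in for programs. This is an invented illustration: the fixed budget of steps per slice plays the role of the periodic interrupt, and each generator's saved internal state plays the role of the program's remembered position:

    # Each "program" is a generator; each yield represents one unit of work.
    def count_up(name, steps):
        for i in range(steps):
            yield f"{name}: step {i}"

    def run(programs, slice_size=2):
        # Round-robin scheduler: run each program for a fixed time slice,
        # then "interrupt" it and move it to the back of the queue.
        queue = list(programs)
        while queue:
            program = queue.pop(0)
            for _ in range(slice_size):   # the time slice
                try:
                    print(next(program))
                except StopIteration:     # the program finished early
                    break
            else:
                queue.append(program)     # interrupted; it resumes later

    run([count_up("A", 3), count_up("B", 3)])
    # A: step 0, A: step 1, B: step 0, B: step 1, A: step 2, B: step 2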
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
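For example, the following program, written in the MIPS assembly language, adds the numbers from 1 to 1,000. It is a representative sketch of such a program: the register choices and label names are illustrative, not prescribed.

    begin:
          addi $8, $0, 0        # initialize the running sum (register 8) to 0
          addi $9, $0, 1        # set the current number (register 9) to 1
    loop:
          slti $10, $9, 1001    # register 10 = 1 while the current number <= 1000
          beq  $10, $0, finish  # once the current number passes 1000, exit the loop
          add  $8, $8, $9       # add the current number to the running sum
          addi $9, $9, 1        # move on to the next number
          j    loop             # repeat
    finish:
          add  $2, $8, $0       # copy the result (500500) into the output register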
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second.

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture; a toy sketch of this idea is given below. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.

A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter; sometimes programs are executed by a hybrid of the two techniques. There are thousands of programming languages: some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510, in addition to the Zilog Z80.
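The stored-program idea can be illustrated with a toy machine. Everything here is invented for the sketch: the opcodes, the memory layout and the single accumulator belong to no real instruction set.

    # A toy fetch-decode-execute loop: instructions are just numbers, so
    # the program sits in the same memory as its data.
    # Invented opcodes: 1 = load, 2 = add, 3 = print, 0 = halt.
    memory = [
        1, 7,   # LOAD the value at address 7 into the accumulator
        2, 8,   # ADD the value at address 8 to the accumulator
        3,      # PRINT the accumulator
        0,      # HALT
        0,      # (unused padding)
        5, 37,  # data: the two numbers to be added
    ]

    acc, pc = 0, 0              # accumulator and program counter
    while memory[pc] != 0:      # opcode 0 means halt
        op = memory[pc]
        if op == 1:             # load
            acc = memory[memory[pc + 1]]
            pc += 2
        elif op == 2:           # add
            acc += memory[memory[pc + 1]]
            pc += 2
        elif op == 3:           # print
            print(acc)          # prints 42
            pc += 1

Because the program is itself a list of numbers in memory, it could be read, copied or modified by another program, which is precisely what assemblers and compilers do.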
Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures, such as personal computers and various video game consoles; a one-line example of this contrast with assembly is given at the end of this section.

Designing small programs is relatively simple: it involves analyzing the problem, collecting inputs, using the programming constructs of the language, devising or reusing established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to fail completely, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
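For contrast with the assembly sketch earlier, the same 1-to-1,000 summation can be a single statement in a high-level language. Python is used here purely for illustration; the source does not prescribe a language:

    # One high-level statement replaces the explicit loop, registers and
    # jumps of the assembly version; the interpreter does that work instead.
    print(sum(range(1, 1001)))   # 500500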
Networking and the Internet

Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, and it led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.

Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data; a minimal sketch of this idea appears at the end of this section. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.

Professions and organizations

As the use of computers has spread throughout society, there is an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
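The phrase "parameters adjusted throughout training" can be made concrete with a toy example. The data, the single-parameter model and the learning rate below are all invented for illustration; real machine learning models have millions or billions of parameters, but the adjustment principle is the same.

    # Fit a single weight w so that y ≈ w * x, by nudging w after each
    # example to reduce the squared error (stochastic gradient descent).
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (x, y) pairs; true w is 2

    w = 0.0                   # the model's only parameter, before training
    learning_rate = 0.05
    for epoch in range(100):
        for x, y in data:
            error = w * x - y                 # prediction error on this example
            w -= learning_rate * error * x    # adjust the parameter slightly
    print(round(w, 3))        # converges toward 2.0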
========================================
[SOURCE: https://en.wikipedia.org/wiki/Croatian_language] | [TOKENS: 3015]
Croatian language

Croatian[a] is the standard variety of the Serbo-Croatian language mainly used by Croats. It is the national official language and literary standard of Croatia; one of the official languages of Bosnia and Herzegovina, Montenegro, the Serbian province of Vojvodina and the European Union; and a recognized minority language elsewhere in Serbia and in other neighbouring countries. In the mid-18th century, the first attempts to provide a Croatian literary standard began on the basis of the Neo-Shtokavian dialect that served as a supraregional lingua franca, pushing back regional Chakavian, Kajkavian, and Shtokavian vernaculars. The decisive role was played by the Croatian Vukovians, who cemented the usage of Ijekavian Neo-Shtokavian as the literary standard in the late 19th and the beginning of the 20th century, in addition to designing a phonological orthography. Croatian is written in Gaj's Latin alphabet.

Besides the Shtokavian dialect, on which Standard Croatian is based, there are two other main supradialects spoken in Croatia, Chakavian and Kajkavian. These supradialects, and the four national standards – Bosnian, Croatian, Montenegrin and Serbian – are usually subsumed under the term "Serbo-Croatian" in English; this term is controversial for native speakers, and names such as "Bosnian-Croatian-Montenegrin-Serbian" (BCMS) are used by linguists and philologists in the 21st century. In 1997, the Croatian Parliament established the Days of the Croatian Language, from March 11 to 17. Since 2013, the Institute of Croatian Language has been celebrating the Month of the Croatian Language, from February 21 (International Mother Language Day) to March 17 (the day of the signing of the Declaration on the Name and Status of the Croatian Literary Language).

History

In the late medieval period up to the 17th century, the majority of semi-autonomous Croatia was ruled by two domestic dynasties of princes (banovi), the Zrinski and the Frankopan, which were linked by inter-marriage. Toward the 17th century, both of them attempted to unify Croatia both culturally and linguistically, writing in a mixture of all three principal dialects (Chakavian, Kajkavian and Shtokavian), and calling it "Croatian", "Dalmatian", or "Slavonian". Historically, several other names were used as synonyms for Croatian, in addition to Dalmatian and Slavonian: Illyrian (ilirski) and Slavic (slovinski). The latter name is still used in parts of Istria, which became a crossroads of various mixtures of Chakavian with Ekavian, Ijekavian and Ikavian isoglosses. The most standardised form (Kajkavian–Ikavian) became the cultivated language of administration and intellectuals from the Istrian peninsula along the Croatian coast, across central Croatia up into the northern valleys of the Drava and the Mura. The cultural apex of this 17th-century idiom is represented by the editions of "Adrianskoga mora sirena" ("The Siren of the Adriatic Sea") by Petar Zrinski and "Putni tovaruš" ("Traveling escort") by Katarina Zrinska. However, this first linguistic renaissance in Croatia was halted by the political execution of Petar Zrinski and Fran Krsto Frankopan by the Holy Roman Emperor Leopold I in Vienna in 1671. Subsequently, the Croatian elite in the 18th century gradually abandoned this combined Croatian standard.
The Illyrian movement was a 19th-century pan-South-Slavic political and cultural movement in Croatia whose goal was to standardise the regionally differentiated and orthographically inconsistent literary languages in Croatia, and finally to merge them into a common South Slavic literary language. Specifically, three major groups of dialects were spoken on Croatian territory, and there had been several literary languages over four centuries. The leader of the Illyrian movement, Ljudevit Gaj, standardized the Latin alphabet in 1830–1850 and worked to bring about a standardized orthography. Although based in Kajkavian-speaking Zagreb, Gaj supported using the more populous Neo-Shtokavian – a version of Shtokavian that eventually became the predominant dialectal basis of both the Croatian and the Serbian literary language from the 19th century on. Supported by various South Slavic proponents, Neo-Shtokavian was adopted after an Austrian initiative at the Vienna Literary Agreement of 1850, laying the foundation for the unified Serbo-Croatian literary language. The uniform Neo-Shtokavian then became common among the Croatian elite.

In the 1860s, the Zagreb Philological School dominated Croatian cultural life, drawing upon the linguistic and ideological conceptions advocated by the members of the Illyrian movement. While it was dominant over the rival Rijeka and Zadar Philological Schools, its influence waned with the rise of the Croatian Vukovians at the end of the 19th century.

Distinguishing features and differences between standards

Croatian is commonly characterized by the Ijekavian pronunciation (see an explanation of yat reflexes), the sole use of the Latin alphabet, and a number of lexical differences in common words that set it apart from standard Serbian. Some differences are absolute, while some appear mainly in the frequency of use. However, as professor John F. Bailyn states, "an examination of all the major 'levels' of language shows that BCS is clearly a single language with a single grammatical system."

Sociopolitical standpoints

Croatian, although technically a form of Serbo-Croatian, is sometimes considered a distinct language in its own right. This is at odds with purely linguistic classifications of languages based on mutual intelligibility (abstand and ausbau languages), which do not allow varieties that are mutually intelligible to be considered separate languages. "There is no doubt of the near 100% mutual intelligibility of (standard) Croatian and (standard) Serbian, as is obvious from the ability of all groups to enjoy each others' films, TV and sports broadcasts, newspapers, rock lyrics etc.", writes Bailyn. Differences between the various standard forms of Serbo-Croatian are often exaggerated for political reasons. Most Croatian linguists regard Croatian as a separate language that is considered key to national identity, in the sense that the term Croatian language includes all language forms from the earliest times to the present, in all areas where Croats live, as realized in the speeches of Croatian dialects, in city speeches and jargons, and in the Croatian standard language. The issue is sensitive in Croatia, as the notion of a separate language being the most important characteristic of a nation is widely accepted, stemming from the 19th-century history of Europe.
The 1967 Declaration on the Status and Name of the Croatian Literary Language, in which a group of Croatian authors and linguists demanded greater autonomy for Croatian, is viewed in Croatia as a linguistic-policy milestone that was also a general milestone in national politics. On the 50th anniversary of the Declaration, at the beginning of 2017, a two-day meeting of linguists, writers, journalists and artists from Croatia, Bosnia-Herzegovina, Serbia and Montenegro was organized in Zagreb, at which the text of the Declaration on the Common Language of Croats, Bosniaks, Serbs and Montenegrins was drafted. The new Declaration has received more than ten thousand signatures, not only from intellectuals but also from the general public. It stated that in Croatia, Serbia, Bosnia-Herzegovina and Montenegro a common polycentric standard language is used, consisting of several standard varieties, similar to the existing varieties of German, English or Spanish. The aim of the new Declaration was to "stimulate discussion on language without the nationalistic baggage" and to "counter nationalistic divisions".

The terms "Serbo-Croatian", "Serbo-Croat" and "Croato-Serbian" are still used as cover terms for all these forms by foreign scholars, even though the speakers themselves largely do not use them. Within Southeastern Europe, the term has largely been replaced by the ethnopolitical terms Bosnian, Croatian, Montenegrin, and Serbian.

The use of the name "Croatian" for the language is historically attested, though not always distinctively. The first printed Croatian literary work is a vernacular Chakavian poem written in 1501 by Marko Marulić, titled "The History of the Holy Widow Judith Composed in Croatian Verses". The Croatian–Hungarian Agreement designated Croatian as one of its official languages. Croatian became an official EU language upon the accession of Croatia to the European Union on 1 July 2013, and in the same year the EU started publishing a Croatian-language version of its official gazette.

Official status

Standard Croatian is the official language of the Republic of Croatia and, along with Standard Bosnian and Standard Serbian, one of three official languages of Bosnia and Herzegovina. It is also official in the regions of Burgenland (Austria), Molise (Italy) and Vojvodina (Serbia). Additionally, it has co-official status alongside Romanian in the communes of Carașova and Lupac, Romania. In these localities, Croats or Krashovani make up the majority of the population, and education, signage and access to public administration and the justice system are provided in Croatian, alongside Romanian.

Croatian is officially used and taught at all universities in Croatia and at the University of Mostar in Bosnia and Herzegovina. Studies of the Croatian language are held in Hungary (Institute of Philosophy at the ELTE Faculty of Humanities in Budapest), Slovakia (Faculty of Philosophy of the Comenius University in Bratislava), Poland (University of Warsaw, Jagiellonian University, University of Silesia in Katowice, University of Wroclaw, Adam Mickiewicz University in Poznan), Germany (University of Regensburg), Australia (Center for Croatian Studies at Macquarie University), New Zealand (University of Auckland), North Macedonia (Faculty of Philology in Skopje), etc.
The procedure for selecting Croatian language tutors (lektori), in accordance with signed interstate agreements, is carried out by the Ministry of Science, Education and Youth. In addition to teaching, lecturers of the Croatian language and literature also organize lectures by guest lecturers, professors from Croatian universities, writers, directors and other cultural and public figures; to promote the Croatian language and culture, they organize theatrical performances, Croatian film evenings, cultural days, literary meetings, translation and magazine publishing, and other activities that stimulate students' interest in learning the Croatian language. The Ministry is responsible for 34 official exchange teaching centers for Croatian language and literature and three centers for Croatian studies, in Australia, Canada and the United Kingdom, which it co-finances. In addition to these centers, which together include more than 2,000 students in 25 countries, the Ministry fully or partially supports another 40 independent teaching centers that are not under its jurisdiction. The Ministry awards one-semester scholarships to students and teaching staff for improving their Croatian language skills at Croaticum of the Faculty of Philosophy in Zagreb, at the Center for Croatian Studies in the World at the Faculty of Philosophy in Split, and at the Faculty of Philosophy in Rijeka. In addition to one-semester scholarships, the Ministry also awards scholarships for shorter scientific stays for the purpose of studying literature, conducting research, or consulting with professors in connection with the preparation of scientific papers in the field of Croatian studies. Croatian embassies hold courses for learning Croatian in Poland, the United Kingdom and a few other countries. Extracurricular education in Croatian is held in Germany in Baden-Württemberg, Berlin, Hamburg and Saarland, as well as in North Macedonia in Skopje, Bitola, Štip and Kumanovo. Some Croatian Catholic Missions also hold Croatian language courses (e.g., the CCM in Buenos Aires).

There is no regulatory body that determines the proper usage of Croatian. However, in January 2023, the Croatian Parliament passed a law that prescribes the official use of the Croatian language and regulates the establishment of the Council for the Croatian Language, a coordinating advisory body whose work is focused on the protection and development of the Croatian language. State authorities and local and regional self-government entities are obliged to use the Croatian language. The current standard language is generally laid out in the grammar books and dictionaries used in education, such as the school curriculum prescribed by the Ministry of Education and the university programmes of the Faculty of Philosophy at the four main universities. In 2013, a Hrvatski pravopis by the Institute of Croatian Language and Linguistics received an official sole seal of approval from the Ministry of Education.

The most prominent recent editions describing the Croatian standard language are:

Also notable are the recommendations of Matica hrvatska, the national publisher and promoter of Croatian heritage, and the Miroslav Krleža Institute of Lexicography, as well as the Croatian Academy of Sciences and Arts. Numerous representative Croatian linguistic works have been published since the independence of Croatia, among them three voluminous monolingual dictionaries of contemporary Croatian.
In 2021, Croatia introduced a new model of linguistic categorisation of the Bunjevac dialect (as part of the New-Shtokavian Ikavian dialects of the Shtokavian dialect of the Croatian language) in three sub-branches: Dalmatian (also called Bosnian-Dalmatian), Danubian (also called Bunjevac), and Littoral-Lika. Its speakers largely use the Latin alphabet and live in parts of Bosnia and Herzegovina, in various parts of Croatia, in southern parts of Hungary (including Budapest), and in the autonomous province of Vojvodina in Serbia. The Institute of Croatian Language and Linguistics added the Bunjevac dialect to the List of Protected Intangible Cultural Heritage of the Republic of Croatia on 8 October 2021.

Sample text

Article 1 of the Universal Declaration of Human Rights in Croatian (2009 Croatian government official translation):

Article 1 of the Universal Declaration of Human Rights in English: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Swedish_language] | [TOKENS: 8173]
Swedish language

Swedish (endonym: svenska [ˈsvɛ̂nːska]) is a North Germanic language of the Indo-European language family, spoken predominantly in Sweden and parts of Finland. It has at least 10 million native speakers, making it the fourth most spoken Germanic language and the most spoken language in the Nordic countries. Swedish, like the other Nordic languages, is a descendant of Old Norse, the common language of the Germanic peoples living in Scandinavia during the Viking Age. It is largely mutually intelligible with Norwegian and Danish, although the degree of mutual intelligibility depends on the dialect and accent of the speaker. Standard Swedish, spoken by most Swedes, is the national language that evolved from the Central Swedish dialects in the 19th century and was well established by the beginning of the 20th century. While distinct regional varieties and rural dialects still exist, the written language is uniform and standardized. In addition to being the native language of the Finland-Swedish minority, Swedish is the most widely spoken second language in Finland and has co-official language status there. Swedish was long spoken in parts of Estonia, although the Estonian Swedish variety is now nearly extinct. It is also used in the Swedish diaspora, most notably in Oslo, Norway, which has more than 50,000 Swedish residents.

Classification

Swedish is an Indo-European language belonging to the North Germanic branch of the Germanic languages. In the established classification, it belongs to the East Scandinavian languages, together with Danish, separating it from the West Scandinavian languages, consisting of Faroese, Icelandic, and Norwegian. However, more recent analyses divide the North Germanic languages into two groups: Insular Scandinavian (Faroese and Icelandic) and Continental Scandinavian (Danish, Norwegian, and Swedish), based on mutual intelligibility, due to the heavy influence of East Scandinavian (particularly Danish) on Norwegian during the last millennium and its divergence from both Faroese and Icelandic. By many general criteria of mutual intelligibility, the Continental Scandinavian languages could very well be considered dialects of a common Scandinavian language. However, because of several hundred years of sometimes quite intense rivalry between Denmark and Sweden, including a long series of wars from the 16th to 18th centuries, and the nationalist ideas that emerged during the late 19th and early 20th centuries, the languages have separate orthographies, dictionaries, grammars, and regulatory bodies. From a linguistic perspective, Danish, Norwegian, and Swedish are thus more accurately described as a dialect continuum of Scandinavian (North Germanic), and some of the dialects, such as those on the border between Norway and Sweden, especially parts of Bohuslän, Dalsland, western Värmland, western Dalarna, Härjedalen, and Jämtland, could be described as intermediate dialects of the national standard languages. Swedish pronunciation also varies greatly from one region to another, a legacy of the vast geographic distances and historical isolation. Even so, the vocabulary is standardized to a level that makes most dialects within Sweden virtually fully mutually intelligible.
History

In the 8th century, the common Germanic language of Scandinavia, Proto-Norse, evolved into Old Norse. This language underwent further changes that did not spread to all of Scandinavia, which resulted in the appearance of two similar dialects: Old West Norse (Norway, the Faroe Islands and Iceland) and Old East Norse (Denmark and Sweden). The dialects of Old East Norse spoken in Sweden are called Runic Swedish, while the dialects of Denmark are referred to as Runic Danish. The dialects are described as "runic" because the main body of text appears in the runic alphabet. Unlike Proto-Norse, which was written with the Elder Futhark alphabet, Old Norse was written with the Younger Futhark alphabet, which had only 16 letters. Because the number of runes was limited, some runes were used for a range of phonemes, such as the rune for the vowel u, which was also used for the vowels o, ø and y, and the rune for i, which was also used for e.

From 1200 onwards, the dialects in Denmark began to diverge from those of Sweden. The innovations spread unevenly from Denmark, creating a series of minor dialectal boundaries, or isoglosses, ranging from Zealand in the south to Norrland, Österbotten and northwestern Finland in the north. An early change that separated Runic Danish from the other dialects of Old East Norse was the change of the diphthong æi to the monophthong é, as in stæinn to sténn "stone". This is reflected in runic inscriptions, where the older read stain and the later stin. There was also a change of au, as in dauðr, into a long open ø, as in døðr "dead". This change is shown in runic inscriptions as a change from tauþr into tuþr. Moreover, the øy diphthong changed into a long, close ø, as in the Old Norse word for "island". By the end of the period, these innovations had affected most of the Runic Swedish-speaking area as well, with the exception of the dialects spoken north and east of Mälardalen, where the diphthongs still exist in remote areas.

Old Swedish (Swedish: fornsvenska) is the term used for the medieval Swedish language. The start date is usually set to 1225, since this is the year that Västgötalagen ("the Västgöta Law") is believed to have been compiled for the first time. It is among the most important documents of the period written in Latin script and the oldest of the Swedish law codes. Old Swedish is divided into äldre fornsvenska (1225–1375) and yngre fornsvenska (1375–1526), "older" and "younger" Old Swedish. Important outside influences during this time came with the firm establishment of the Christian church and various monastic orders, introducing many Greek and Latin loanwords. With the rise of Hanseatic power in the late 13th and early 14th century, Middle Low German became very influential. The Hanseatic League provided Swedish commerce and administration with a large number of Low German-speaking immigrants. Many became quite influential members of Swedish medieval society and brought terms from their native languages into the vocabulary. Besides a great number of loanwords for such areas as warfare, trade and administration, general grammatical suffixes and even conjunctions were imported. The League also brought a certain measure of influence from Danish (at the time Swedish and Danish were much more similar than they are today). Early Old Swedish was markedly different from the modern language in that it had a more complex case structure and also retained the original Germanic three-gender system.
Nouns, adjectives, pronouns and certain numerals were inflected in four cases; besides the extant nominative, there were also the genitive (later possessive), dative and accusative. The gender system resembled that of modern German, having masculine, feminine and neuter genders. The masculine and feminine genders were later merged into a common gender with the definite suffix -en and the definite article den, in contrast with the neuter gender equivalents -et and det. The verb system was also more complex: it included subjunctive and imperative moods, and verbs were conjugated according to person as well as number. By the 16th century, the case and gender systems of the colloquial spoken language and the profane literature had been largely reduced to the two cases and two genders of modern Swedish. A transitional change in the Latin script in the Nordic countries was to spell the letter combination "ae" as æ – and sometimes as an a with a small e written above it – though this varied between persons and regions. The combination "ao" was similarly rendered as an a with a small o above it, and "oe" became an o with a small e above it. These three were later to evolve into the separate letters ä, å and ö. The first time the new letters were used in print was in Aff dyäffwlsens frästilse ("By the Devil's temptation"), published by Johan Gerson in 1495.

Modern Swedish (Swedish: nysvenska) begins with the advent of the printing press and the European Reformation. After assuming power, the new monarch Gustav Vasa ordered a Swedish translation of the Bible. The New Testament was published in 1526, followed by a full Bible translation in 1541, usually referred to as the Gustav Vasa Bible, a translation deemed so successful and influential that, with revisions incorporated in successive editions, it remained the most common Bible translation until 1917. The main translators were Laurentius Andreæ and the brothers Laurentius and Olaus Petri. The Vasa Bible is often considered to be a reasonable compromise between old and new; while not adhering to the colloquial spoken language of its day, it was not overly conservative in its use of archaic forms. It was a major step towards a more consistent Swedish orthography. It established the use of the vowels "å", "ä", and "ö", and the spelling "ck" in place of "kk", distinguishing it clearly from the Danish Bible, perhaps intentionally, given the ongoing rivalry between the countries. All three translators came from central Sweden, which is generally seen as adding specific Central Swedish features to the new Bible.

Though it might seem as if the Bible translation set a very powerful precedent for orthographic standards, spelling actually became more inconsistent during the remainder of the century. It was not until the 17th century that spelling began to be discussed, around the time when the first grammars were written. Capitalization during this time was not standardized; it depended on the authors and their background. Those influenced by German capitalized all nouns, while others capitalized more sparsely. It is also not always apparent which letters are capitalized, owing to the Gothic or blackletter typeface that was used to print the Bible. This typeface was in use until the mid-18th century, when it was gradually replaced with a Latin typeface (often Antiqua). Some important changes in sound during the Modern Swedish period were the gradual assimilation of several different consonant clusters into the fricative [ʃ] and later into [ɧ], and the gradual softening of [ɡ] and [k] into [j] and the fricative [ɕ] before front vowels.
The velar fricative [ɣ] was also transformed into the corresponding plosive [ɡ].

The period that includes Swedish as it is spoken today is termed nusvenska (lit., "Now-Swedish") in linguistics, and started in the last decades of the 19th century. It saw a democratization of the language, with a less formal written form that approached the spoken one. The growth of a state school system also led to the evolution of so-called boksvenska (literally, "Book Swedish"), especially among the working classes, where spelling to some extent influenced pronunciation, particularly in official contexts. With the industrialization and urbanization of Sweden well under way by the last decades of the 19th century, a new breed of authors made their mark on Swedish literature. Many scholars, politicians and other public figures had a great influence on the emerging national language, among them prolific authors like the poet Gustaf Fröding, Nobel laureate Selma Lagerlöf and radical writer and playwright August Strindberg. In Finland, Finland-Swedish literature emerged as a separate branch.

It was during the 20th century that a common, standardized national language became available to all Swedes. The orthography finally stabilized and became almost completely uniform, with some minor deviations, by the time of the spelling reform of 1906. With the exception of plural forms of verbs and a slightly different syntax, particularly in the written language, the language was the same as the Swedish of today. The plural verb forms appeared decreasingly in formal writing into the 1950s, when their use was removed from all official recommendations.

A very significant change in Swedish occurred in the late 1960s with the so-called du-reformen. Previously, the proper way to address people of the same or higher social status had been by title and surname. The use of herr ("Mr" or "Sir"), fru ("Mrs" or "Ma'am") or fröken ("Miss") was considered the only acceptable way to begin conversation with strangers of unknown occupation, academic title or military rank. The fact that the listener should preferably be referred to in the third person tended to further complicate spoken communication between members of society. In the early 20th century, an unsuccessful attempt was made to replace the insistence on titles with ni (the standard second person plural pronoun), analogous to the French vous (see T-V distinction). Ni wound up being used as a slightly less familiar form of du, the second person singular pronoun, used to address people of lower social status. With the liberalization and radicalization of Swedish society in the 1950s and 1960s, these class distinctions became less important, and du became the standard, even in formal and official contexts. Though the reform was not an act of any centralized political decree but rather the result of a sweeping change in social attitudes, it was completed in just a few years, from the late 1960s to the early 1970s. The use of ni as a polite form of address is sometimes encountered today in both the written and the spoken language, particularly among older speakers.

Geographic distribution

Swedish is the sole official national language of Sweden, and one of two in Finland (alongside Finnish). As of 2006, it was the sole native language of 83% of Swedish residents. In 2007, around 5.5% (c. 290,000) of the population of Finland were native speakers of Swedish, partially due to a decline following the Russian annexation of Finland after the Finnish War of 1808–1809.
The Fenno-Swedish-speaking minority is concentrated in the coastal areas and archipelagos of southern and western Finland. In some of these areas, Swedish is the predominant language; in 19 municipalities, 16 of which are located in Åland, Swedish is the sole official language. Åland county is an autonomous region of Finland. According to a rough estimate, as of 2010 there were up to 300,000 Swedish speakers living outside Sweden and Finland. The largest populations were in the United States (up to 100,000), the UK, Spain and Germany (c. 30,000 each), with a large proportion of the remaining 100,000 in the Scandinavian countries, France, Switzerland, Belgium, the Netherlands, Canada and Australia. Over three million people speak Swedish as a second language, with about 2,410,000 of those in Finland. According to a survey by the European Commission, 44% of respondents from Finland who did not have Swedish as a native language considered themselves proficient enough in Swedish to hold a conversation. Due to the close relation between the Scandinavian languages, a considerable proportion of speakers of Danish and especially Norwegian are able to understand Swedish. There is considerable migration between the Nordic countries, but owing to the similarity between the cultures and languages (with the exception of Finnish), expatriates generally assimilate quickly and do not stand out as a group. According to the 2000 United States Census, some 67,000 people over the age of five were reported as Swedish speakers, though without any information on the degree of language proficiency. Similarly, there were 16,915 reported Swedish speakers in Canada in the 2001 census. Although there are no certain numbers, some 40,000 Swedes are estimated to live in the London area in the United Kingdom. Outside Sweden and Finland, there are about 40,000 active learners enrolled in Swedish language courses. In the United States, particularly during the 19th and early 20th centuries, there was a significant Swedish-speaking immigrant population. This was notably true in states like Minnesota, where many Swedish immigrants settled; by 1940, approximately 6% of Minnesota's population spoke Swedish. Although the use of Swedish has significantly declined, it is not uncommon to find older generations and communities that still retain some use and knowledge of the language, particularly in rural communities like Lindström and Scandia.

Swedish is the official main language of Sweden and one of two official languages of Finland. In Sweden, it has long been used in local and state government and most of the educational system, but remained only a de facto primary language with no official status in law until 2009. A bill was proposed in 2005 that would have made Swedish an official language, but it failed to pass by the narrowest possible margin (145–147) due to a pairing-off failure. A proposal for a broader language law, designating Swedish as the main language of the country and bolstering the status of the minority languages, was submitted by an expert committee to the Swedish Ministry of Culture in March 2008. It was subsequently enacted by the Riksdag and entered into effect on 1 July 2009. Swedish is the sole official language of Åland (an autonomous province under the sovereignty of Finland), where the vast majority of the 26,000 inhabitants speak Swedish as a first language.
In Finland as a whole, Swedish is one of the two "national" languages, with the same official status as Finnish (spoken by the majority) at the state level, and an official language in some municipalities. Swedish is one of the official languages of the European Union, and one of the working languages of the Nordic Council. Under the Nordic Language Convention, citizens of the Nordic countries speaking Swedish have the opportunity to use their native language when interacting with official bodies in other Nordic countries without being liable for interpretation or translation costs.

The Swedish Language Council (Språkrådet) is the regulator of Swedish in Sweden but does not attempt to enforce control of the language, as, for instance, the Académie française does for French. However, many organizations and agencies require the use of the council's publication Svenska skrivregler in official contexts, and it is otherwise regarded as a de facto orthographic standard. Among the many organizations that make up the Swedish Language Council, the Swedish Academy (established 1786) is arguably the most influential. Its primary instruments are the spelling dictionary Svenska Akademiens ordlista (SAOL, currently in its 14th edition) and the dictionary Svenska Akademiens Ordbok, in addition to various books on grammar, spelling and manuals of style. Although the dictionaries have a prescriptive element, they mainly describe current usage. In Finland, a special branch of the Research Institute for the Languages of Finland has official status as the regulatory body for Swedish in Finland. Among its highest priorities is maintaining intelligibility with the language spoken in Sweden. It has published Finlandssvensk ordbok, a dictionary on the differences between the Swedish of Finland and that of Sweden.

From the 13th to the 20th century, there were Swedish-speaking communities in Estonia, particularly on the islands (e.g., Hiiumaa, Vormsi and Ruhnu, known in Swedish as Dagö, Ormsö and Runö, respectively) along the coast of the Baltic, communities that have today all disappeared. The Swedish-speaking minority was represented in parliament and entitled to use their native language in parliamentary debates. After the loss of Estonia to the Russian Empire in the early 18th century, around 1,000 Estonian Swedish speakers were forced to march to southern Ukraine, where they founded a village, Gammalsvenskby ("Old Swedish Village"). A few elderly people in the village still speak a Swedish dialect and observe the holidays of the Swedish calendar, although their dialect is most likely facing extinction. From 1918 to 1940, when Estonia was independent, the small Swedish community was well treated. Municipalities with a Swedish majority, mainly found along the coast, used Swedish as the administrative language, and Swedish-Estonian culture saw an upswing. However, most Swedish-speaking people fled to Sweden before the end of World War II, that is, before the invasion of Estonia by the Soviet army in 1944. Only a handful of speakers remain.

Phonology

Swedish dialects have either 17 or 18 vowel phonemes, 9 long and 9 short. As in the other Germanic languages, including English, most long vowels are phonetically paired with one of the short vowels, and the pairs are such that the two vowels are of similar quality, but with the short vowel being slightly lower and slightly centralized. In contrast to, for example, Danish, which has only tense vowels, the short vowels are slightly more lax, but the tense vs.
lax contrast is not nearly as pronounced as in English, German or Dutch. In many dialects, the short vowel sound pronounced [ɛ] or [æ] has merged with the short /e/ (transcribed ⟨ɛ⟩ in the chart below). There are 18 consonant phonemes, two of which, /ɧ/ and /r/, vary considerably in pronunciation depending on the dialect and social status of the speaker. In many dialects, sequences of /r/ (pronounced alveolarly) with a dental consonant result in retroflex consonants; alveolarity of the pronunciation of /r/ is a precondition for this retroflexion. /r/ has a guttural or "French R" pronunciation in the South Swedish dialects; consequently, these dialects lack retroflex consonants. Swedish is a stress-timed language, in which the time intervals between stressed syllables are equal; however, when casually spoken, it tends to be syllable-timed. Any stressed syllable carries one of two tones, which gives Swedish much of its characteristic sound. Prosody is often one of the most noticeable differences between dialects.

Grammar

The standard word order is, as in most Germanic languages, V2, which means that the finite verb (V) appears in the second position (2) of a declarative main clause. Swedish morphology is similar to English; that is, words have comparatively few inflections. Swedish has two genders and is generally seen to have two grammatical cases – nominative and genitive (except for pronouns that, as in English, are also inflected in the object form) – although it is debated whether the genitive in Swedish should be seen as a genitive case or just the nominative plus the so-called genitive s, then seen as a clitic. Swedish has two grammatical numbers – plural and singular. Adjectives have discrete comparative and superlative forms and are also inflected according to gender, number and definiteness. The definiteness of nouns is marked primarily through suffixes (endings), complemented with separate definite and indefinite articles. The prosody features both stress and, in most dialects, tonal qualities. The language has a comparatively large vowel inventory. Swedish is also notable for the voiceless dorso-palatal velar fricative, a highly variable consonant phoneme.

Swedish nouns and adjectives are declined in gender as well as number. Nouns are of common gender (en form) or neuter gender (ett form). The gender determines the declension of the adjectives. For example, the word fisk ("fish") is a noun of common gender (en fisk) and can have the following forms: The definite singular form of a noun is created by adding a suffix (-en, -n, -et or -t), depending on its gender and on whether the noun ends in a vowel or not. The definite articles den, det, and de are used to mark variations in the definiteness of a noun. They can double as demonstrative pronouns or demonstrative determiners when used with adverbs such as här ("here") or där ("there") to form den/det här (can also be "denna/detta") ("this"), de här (can also be "dessa") ("these"), den/det där ("that"), and de där ("those"). For example, den där fisken means "that fish" and refers to a specific fish; den fisken is less definite and means "that fish" in a more abstract sense, such as that set of fish; while fisken means "the fish". In certain cases, the definite form indicates possession, e.g., jag måste tvätta håret ("I must wash my hair"). Adjectives are inflected in two declensions – indefinite and definite – and they must match the noun they modify in gender and number.
The indefinite neuter and plural forms of an adjective are usually created by adding a suffix (-t or -a) to the common form of the adjective, e.g., en grön stol ("a green chair"), ett grönt hus ("a green house"), and gröna stolar ("green chairs"). The definite form of an adjective is identical to the indefinite plural form, e.g., den gröna stolen ("the green chair"), det gröna huset ("the green house"), and de gröna stolarna ("the green chairs").

Swedish pronouns are similar to those of English. Besides the two natural genders han and hon ("he" and "she"), there are also the two grammatical genders den and det, usually termed common and neuter. In recent years, a gender-neutral pronoun, hen, has been introduced, particularly in literary Swedish. Unlike the nouns, pronouns have an additional object form, derived from the old dative form. Hon, for example, has the following nominative, possessive, and object forms: Swedish also uses third-person possessive reflexive pronouns that refer to the subject in a clause, a trait that is restricted to North Germanic languages:

Swedish used to have a genitive that was placed at the end of the head of a noun phrase. In modern Swedish, it has become an enclitic -s, which attaches to the end of the noun phrase, rather than to the noun itself. In formal written language, it used to be considered correct to place the genitive -s after the head of the noun phrase (hästen), though this is today considered dated, and different grammatical constructions are often used instead.

Verbs are conjugated according to tense. One group of verbs (the ones ending in -er in the present tense) has a special imperative form (generally the verb stem), but with most verbs the imperative is identical to the infinitive form. Perfect and present participles as adjectival verbs are very common: In contrast to English and many other languages, Swedish does not use the perfect participle to form the present perfect and past perfect. Rather, the auxiliary verb har ("have") or hade ("had") is followed by a special form, called the supine, used solely for this purpose (although often identical to the neuter form of the perfect participle): When building the compound passive voice using the verb att bli, the past participle is used: There also exists an inflected passive voice, formed by adding -s and replacing the final r in the present tense: In a subordinate clause, the auxiliary har is optional and often omitted, particularly in written Swedish. The subjunctive mood is occasionally used for some verbs, but its use is in sharp decline, and few speakers perceive the handful of commonly used verbs (for instance vore and månne) as separate conjugations; most of them remain only in a set of idiomatic expressions.

Where other languages may use grammatical cases, Swedish uses numerous prepositions, similar to those found in English. As in modern German, prepositions formerly determined case in Swedish, but this feature can only be found in certain idiomatic expressions like till fots ("on foot", genitive). As Swedish is a Germanic language, the syntax shows similarities to both English and German. Like English, Swedish has a subject–verb–object basic word order, but like German it utilizes verb-second word order in main clauses, for instance after adverbs and adverbial phrases, and in dependent clauses. (Adverbial phrases denoting time are usually placed at the beginning of a main clause that is at the head of a sentence.) Prepositional phrases are placed in a place–manner–time order, as in English (but not German).
Adjectives precede the noun they modify. Verb-second (inverted) word order is also used for questions.

Vocabulary

The vocabulary of Swedish is mainly Germanic, either through common Germanic heritage or through loans from German, Middle Low German, and to some extent, English. Examples of Germanic words in Swedish are mus ("mouse"), kung ("king"), and gås ("goose"). A significant part of the religious and scientific vocabulary is of Latin or Greek origin, often borrowed from French and, lately, English. Some 100–200 words are also borrowed from Scandoromani or Romani, often as slang varieties; a commonly used word from Romani is tjej ("girl"). A large number of French words were imported into Sweden around the 18th century. These words have been transcribed into the Swedish spelling system and are therefore pronounced in a way still recognizable to a French speaker. Most of them are distinguished by a "French accent", characterized by emphasis on the last syllable, for example nivå (fr. niveau, "level"), fåtölj (fr. fauteuil, "armchair") and affär ("shop; affair"). Cross-borrowing from other Germanic languages has also been common, at first from Middle Low German, the lingua franca of the Hanseatic League, and later from Standard German. Some compounds are translations of the elements (calques) of German original compounds into Swedish, like bomull from German Baumwolle ("cotton"; literally, tree-wool). As with many Germanic languages, new words can be formed by compounding, e.g., nouns like nagellackborttagningsmedel ("nail polish remover") or verbs like smyglyssna ("to eavesdrop"). Compound nouns take their gender from the head, which in Swedish is always the last morpheme. New words can also be coined by derivation from other established words, such as the verbification of nouns by adding the suffix -a, as in bil ("car") and bila ("travel (recreationally) by car"). The opposite, making nouns of verbs, is also possible, as in tänk ("way of thinking; concept") from tänka ("to think").

Writing system

The Swedish alphabet is a 29-letter alphabet, using the 26-letter ISO basic Latin alphabet plus the three additional letters ⟨å⟩, ⟨ä⟩, and ⟨ö⟩, constructed in the 16th century by writing ⟨o⟩ and ⟨e⟩ on top of an ⟨a⟩, and an ⟨e⟩ on top of an ⟨o⟩. Though these combinations are historically modified versions of ⟨a⟩ and ⟨o⟩ according to the English range of usage for the term diacritic, these three characters are not considered to be diacritics within the Swedish application, but rather separate letters, and they are independent letters following ⟨z⟩; a toy sorting example is given below. Before the release of the 13th edition of Svenska Akademiens ordlista in April 2006, ⟨w⟩ was treated as merely a variant of ⟨v⟩ used only in names (such as "Wallenberg") and foreign words ("bowling"), and so was both sorted and pronounced as a ⟨v⟩. Other diacritics (to use the broader English term usage referenced here) are unusual in Swedish; ⟨é⟩ is sometimes used to indicate that the stress falls on a terminal syllable containing ⟨e⟩, especially when the stress changes the meaning (ide vs. idé, "winter lair" vs. "idea"), as well as in some names, like Kastrén; occasionally other acute accents and, less often, grave accents can be seen in names and some foreign words. The letter ⟨à⟩ is used to refer to unit cost (a loan from the French), equivalent to the at sign (⟨@⟩) in English. The German ⟨ü⟩ is treated as a variant of ⟨y⟩ and is sometimes retained in foreign names and words, e.g., müsli ("muesli/granola").
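The placement of ⟨å⟩, ⟨ä⟩ and ⟨ö⟩ after ⟨z⟩ can be illustrated with a toy sort key. This is a simplification invented for illustration only; it ignores the historical ⟨w⟩/⟨v⟩ merger, accented letters and everything else a real collation library handles.

    # Naive Swedish alphabetical order: å, ä and ö are separate letters
    # that sort after z, not accented variants of a and o.
    SWEDISH_ALPHABET = "abcdefghijklmnopqrstuvwxyzåäö"
    RANK = {letter: i for i, letter in enumerate(SWEDISH_ALPHABET)}

    def swedish_key(word):
        # Characters outside the alphabet sort last in this toy version.
        return [RANK.get(ch, len(SWEDISH_ALPHABET)) for ch in word.lower()]

    words = ["örn", "apa", "ås", "zebra", "äpple"]
    print(sorted(words, key=swedish_key))
    # ['apa', 'zebra', 'ås', 'äpple', 'örn']

Note that a plain byte- or Unicode-codepoint sort would misplace these words, which is why locale-aware collation matters for Swedish text.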
A proper diaeresis may very exceptionally be seen in elaborated style (for instance, "Aïda"). The German convention of writing ⟨ä⟩ and ⟨ö⟩ as ⟨ae⟩ and ⟨oe⟩ when the characters are unavailable is an unusual convention for speakers of modern Swedish. Despite the availability of all these characters in the Swedish national top-level Internet domain and other such domains, Swedish sites are frequently labelled using ⟨a⟩ and ⟨o⟩, based on visual similarity, though Swedish domains could be registered using the characters ⟨å⟩, ⟨ä⟩, and ⟨ö⟩ from 2003. In Swedish orthography, the colon is used in a similar manner as in English, with some exceptions: the colon is used for some abbreviations, such as 3:e for tredje ("third") and S:t for Sankt ("Saint"), and for all types of endings that can be added to numbers, letters and abbreviations, such as a:et ("the a") and CD:n ("the CD"), or the genitive form USA:s ("USA's").

Dialects

According to a traditional division of Swedish dialects, there are six main groups of dialects:

The traditional definition of a Swedish dialect has been a local variant that has not been heavily influenced by the standard language and that can trace a separate development all the way back to Old Norse. Many of the genuine rural dialects, such as those of Orsa in Dalarna or Närpes in Österbotten, have very distinct phonetic and grammatical features, such as plural forms of verbs or archaic case inflections. These dialects can be near-incomprehensible to a majority of Swedes, and most of their speakers are also fluent in Standard Swedish. The different dialects are often so localized that they are limited to individual parishes, and they are referred to by Swedish linguists as sockenmål (lit., "parish speech"). They are generally separated into six major groups, with common characteristics of prosody, grammar and vocabulary. One or several examples from each group are given here. Though each example is intended to be representative of the nearby dialects as well, the actual number of dialects is several hundred if each individual community is considered separately.

This type of classification, however, is based on a somewhat romanticized nationalist view of ethnicity and language. The idea that only rural variants of Swedish should be considered "genuine" is not generally accepted by modern scholars. No dialects, no matter how remote or obscure, remained unchanged or undisturbed by a minimum of influences from surrounding dialects or the standard language, especially not from the late 19th century onwards, with the advent of mass media and advanced forms of transport. The differences are today more accurately described by a scale that runs from "standard language" to "rural dialect", where the speech even of the same person may vary from one extreme to the other depending on the situation. All Swedish dialects, with the exception of the highly diverging forms of speech in Dalarna, Norrbotten and, to some extent, Gotland, can be considered part of a common, mutually intelligible dialect continuum. This continuum may also include Norwegian and some Danish dialects.

Standard Swedish is the language used by virtually all Swedes and most Swedish-speaking Finns. It is called rikssvenska or standardsvenska ("Standard Swedish") in Sweden. In Finland, högsvenska ("High Swedish") is used for the Finnish variant of standard Swedish, and rikssvenska refers to Swedish as spoken in Sweden in general.
A poll conducted in 2005 by the Swedish Retail Institute (Handelns Utredningsinstitut) on the attitudes of Swedes to the use of certain dialects by salesmen found that 54% believed that rikssvenska was the variety they would prefer to hear when speaking with salesmen over the phone, even though dialects such as gotländska or skånska were provided as alternatives in the poll. Finland was a part of Sweden from the 13th century until the loss of the Finnish territories to Russia in 1809. Swedish was the sole administrative language until 1902, as well as the dominant language of culture and education until Finnish independence in 1917. The percentage of Swedish speakers in Finland has steadily decreased since then. The Swedish-speaking population is mainly concentrated in the coastal areas of Ostrobothnia, Southwest Finland and Uusimaa, where the percentage of Finland Swedes is high, with Swedish being spoken by more than 90% of the population in several municipalities, and on Åland, where Swedish is spoken by a vast majority of the population and is the only official language. Swedish is, however, also an official language in the rest of Finland, with the same official status as Finnish. The country's public broadcaster, Yle, provides two Swedish-language radio stations, Yle Vega and Yle X3M, as well as a TV channel, Yle Fem. Rinkeby Swedish (after Rinkeby, a suburb of northern Stockholm with a large immigrant population) is a common name among linguists for varieties of Swedish spoken by young people of foreign heritage in certain suburbs and urban districts in the major cities of Stockholm, Gothenburg and Malmö. These varieties could alternatively be classified as sociolects, because the immigrant dialects share common traits independent of their geographical spread or the native country of the speakers. However, some studies have found distinctive features and led to terms such as Rosengård Swedish (after Rosengård in Malmö), a variant of Scanian. A survey by the Swedish linguist Ulla-Britt Kotsinas showed that foreign learners had difficulties in guessing the origins of Rinkeby Swedish speakers in Stockholm. The greatest difficulty proved to be identifying the speech of a boy speaking Rinkeby Swedish whose parents were both Swedish; only 1.8% guessed his native language correctly. New linguistic practices that go beyond traditional socio-linguistic domains have emerged in multilingual urban contexts, in fiction, and in hip-hop culture and rap lyrics. See also Källström (Chapter 12) and Knudsen (Chapter 13). Sample Article 1 of the Universal Declaration of Human Rights in Swedish: Alla människor är födda fria och lika i värdighet och rättigheter. De har utrustats med förnuft och samvete och bör handla gentemot varandra i en anda av gemenskap. Article 1 of the Universal Declaration of Human Rights in English: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-auto_278-1] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Only a few blocks are affected by gravity; all others maintain their voxel position even when unsupported in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villager non-player characters (NPCs) by trading emeralds for different goods and vice versa.
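The voxel-grid and mine/place mechanics described above can be made concrete with a short sketch. The following Java example (Java being the language Minecraft was originally written in) is purely illustrative: the class name, the flat-array layout, and the string-based block types are assumptions chosen for exposition, not Mojang's actual implementation, which streams the world in 16×16 chunk columns.

```java
import java.util.Arrays;

// Illustrative sketch only: a fixed-size voxel grid supporting the
// break ("mine") and place operations described above. The real game
// streams 16x16 chunk columns rather than using one flat array.
public class VoxelGrid {
    public static final String AIR = "air";

    private final int sizeX, sizeY, sizeZ;
    private final String[] blocks;

    public VoxelGrid(int sizeX, int sizeY, int sizeZ) {
        this.sizeX = sizeX;
        this.sizeY = sizeY;
        this.sizeZ = sizeZ;
        this.blocks = new String[sizeX * sizeY * sizeZ];
        Arrays.fill(blocks, AIR);
    }

    // Flatten 3D coordinates into one array index.
    private int index(int x, int y, int z) {
        return (y * sizeZ + z) * sizeX + x;
    }

    public String getBlock(int x, int y, int z) {
        return blocks[index(x, y, z)];
    }

    // Mining removes the block and hands it back (to an inventory, say).
    public String mine(int x, int y, int z) {
        String removed = blocks[index(x, y, z)];
        blocks[index(x, y, z)] = AIR;
        return removed;
    }

    // Placing only succeeds in an empty (air) cell.
    public boolean place(int x, int y, int z, String type) {
        if (!AIR.equals(blocks[index(x, y, z)])) return false;
        blocks[index(x, y, z)] = type;
        return true;
    }
}
```

Note how a placed block simply keeps its cell until it is mined again, mirroring the way most blocks in the game ignore gravity.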
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.
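A toy example helps illustrate how a map seed drives procedural generation: if every terrain column's height is derived deterministically from the world seed and the column's coordinates, the world can be generated lazily as players explore it, and two worlds created from the same seed always match. The sketch below is a simplification under stated assumptions (arbitrary mixing constants, a single height value per column), not Mojang's actual noise-based generator.

```java
import java.util.Random;

// Toy seed-driven terrain: the same (seed, x, z) always yields the same
// column height, so terrain can be generated on demand as it is explored.
// Real Minecraft layers multiple noise functions; this is a simplification.
public class SeededTerrain {
    private final long worldSeed;

    public SeededTerrain(long worldSeed) {
        this.worldSeed = worldSeed;
    }

    public int heightAt(int x, int z) {
        // Mix the world seed with the column coordinates so every column
        // gets its own reproducible random stream (constants are arbitrary).
        long mixed = worldSeed ^ (x * 341873128712L) ^ (z * 132897987541L);
        Random columnRandom = new Random(mixed);
        return 60 + columnRandom.nextInt(8); // gently rolling terrain
    }

    public static void main(String[] args) {
        SeededTerrain a = new SeededTerrain(42L);
        SeededTerrain b = new SeededTerrain(42L);
        // Identical seeds reproduce identical terrain, however far out we sample.
        System.out.println(a.heightAt(1_000_000, -7) == b.heightAt(1_000_000, -7)); // true
    }
}
```

Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.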
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on Peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on Peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience a map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
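The interplay of the Survival-mode health and hunger bars described above can be summarized in a few lines of code. This is a deliberately simplified sketch: both bars do run from 0 to 20 in the game, but the tick rate, the ordering of effects, and the unconditional starvation damage here are illustrative assumptions (on lower difficulties, starvation actually stops draining health above a minimum).

```java
// Simplified sketch of the Survival-mode health/hunger interaction
// described above. Both bars run 0-20, as in the game (ten icons of
// two points each); the rates and ordering here are illustrative only.
public class SurvivalPlayer {
    private int health = 20;
    private int hunger = 20;

    public void eat(int foodPoints) {
        hunger = Math.min(20, hunger + foodPoints);
    }

    public void takeDamage(int amount) {
        health = Math.max(0, health - amount);
        if (health == 0) respawn();
    }

    // One simplified update step of the survival loop.
    public void tick() {
        if (hunger >= 20) {
            health = Math.min(20, health + 1); // full bar regenerates health
        } else if (hunger == 0) {
            takeDamage(1);                     // empty bar starves the player
        }
        if (hunger > 0) hunger--;              // activity slowly drains hunger
    }

    private void respawn() {
        // By default the inventory is dropped on death and the player
        // returns to their spawn point with full bars.
        health = 20;
        hunger = 20;
    }
}
```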
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers offer a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced Realms support for cross-platform play between Windows 10, iOS, and Android platforms starting in June 2016, with Xbox One and Nintendo Switch support, along with support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
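Much of the server-side behavior described above is controlled by a plain-text server.properties file that the standard Java Edition server reads at startup, with day-to-day administration handled through operator commands. The key names below follow that standard file, but the values shown are only an example configuration:

```properties
# server.properties - selected common settings (values are examples)
# Default game mode and difficulty for new players
gamemode=survival
difficulty=hard
# Allow player-versus-player combat
pvp=true
max-players=20
# Only whitelisted usernames may join
white-list=true
# Leave level-seed blank for a random world seed
level-seed=
motd=A Minecraft Server
```

Operators then administer the running server with slash commands such as /time set day, /tp Alice Bob (teleporting one player to another), and /whitelist add Alice, where Alice and Bob stand in for real usernames.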
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specifically for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and, by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013 and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.
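Concretely, a data pack is mostly a folder of JSON files inside the world save. As a rough illustration, a shaped crafting recipe in the 1.13-era format looked approximately like the following (the torch recipe shown is a standard one, but the exact schema and folder names have shifted across versions, so treat this as a sketch rather than a current reference):

```json
{
  "type": "minecraft:crafting_shaped",
  "pattern": [
    "X",
    "#"
  ],
  "key": {
    "X": { "item": "minecraft:coal" },
    "#": { "item": "minecraft:stick" }
  },
  "result": {
    "item": "minecraft:torch",
    "count": 4
  }
}
```

Saved under a path such as data/<namespace>/recipes/torch.json alongside a pack.mcmeta descriptor, a file like this is loaded when the world starts, which is what lets map makers add recipes, loot tables, and the other elements listed above without writing any Java code.

Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.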
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's style, including the return of the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who have purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009,[k] and this earliest phase ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and was renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native Bedrock Edition version for PlayStation 5 was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta of the Education Edition was made available to compatible Google Play Store Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows 10 Edition of Bedrock is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced that they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the necessary processes, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the plug-in package from Ableton Live, along with several additional plug-ins. Speaking of them, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game of the Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first mob vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales before its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch by foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding: "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors who have been censored and arrested, such as Jamal Khashoggi, from countries including Egypt, Mexico, Russia, Saudi Arabia and Vietnam. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition, with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for their similarities to Minecraft, and some were described as "clones", often due to direct inspiration from Minecraft or superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, which were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears ultimately proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is generated by AI in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed Minecraft's copyright. Some reports suggested that the takedown may have been filed through an automated AI copyright-claiming service. The takedown was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time; the following years instead featured annual "Minecon Earth" livestreams on minecraft.net and YouTube. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bulgarian_language] | [TOKENS: 9576]
Contents Bulgarian language Bulgarian[a] is an Eastern South Slavic language spoken in Southeast Europe, primarily in Bulgaria. It is the language of the Bulgarians. Along with the closely related Macedonian language (collectively forming the East South Slavic languages), it is a member of the Balkan sprachbund and the South Slavic dialect continuum of the Indo-European language family. The two languages have several characteristics that set them apart from all other Slavic languages, including the elimination of case declension, the development of a suffixed definite article, and the lack of a verb infinitive. They retain and have further developed the Proto-Slavic verb system (albeit analytically). One such major development is the innovation of evidential verb forms to encode the source of information: witnessed, inferred, or reported. It is the official language of Bulgaria, and since 2007 has been among the official languages of the European Union. It is also spoken by the Bulgarian communities in Ukraine, North Macedonia, Moldova, Serbia, Romania, Hungary, Albania, Greece and Turkey. History The development of the Bulgarian language can be divided into several periods. Bulgarian was the first Slavic language attested in writing. As Slavic linguistic unity lasted into late antiquity, the oldest manuscripts initially referred to this language as ѧзꙑкъ словѣньскъ, "the Slavic language". In the Middle Bulgarian period this name was gradually replaced by ѧзꙑкъ блъгарьскъ, the "Bulgarian language". In some cases, this name was used not only with regard to the contemporary Middle Bulgarian language of the copyist but also to the period of Old Bulgarian. A notable example of this anachronism is the Service of Saint Cyril from Skopje (Скопски миней), a 13th-century Middle Bulgarian manuscript from northern Macedonia, according to which St. Cyril preached with "Bulgarian" books among the Moravian Slavs. The first mention of the language as the "Bulgarian language" instead of the "Slavonic language" comes in the work of the Greek clergy of the Archbishopric of Ohrid in the 11th century, for example in the Greek hagiography of Clement of Ohrid by Theophylact of Ohrid (late 11th century). During the Middle Bulgarian period, the language underwent dramatic changes, losing the Slavonic case system but preserving the rich verb system (while the development was exactly the opposite in other Slavic languages) and developing a definite article. It was influenced, mostly grammatically, by its non-Slavic neighbors in the Balkan language area and later, mostly lexically, by Ottoman Turkish, the official language of the Ottoman Empire. The damaskin texts mark the transition from Middle Bulgarian to New Bulgarian, which was standardized in the 19th century. As a national revival occurred toward the end of the period of Ottoman rule (mostly during the 19th century), a modern Bulgarian literary language gradually emerged that drew heavily on Church Slavonic/Old Bulgarian (and to some extent on literary Russian, which had preserved many lexical items from Church Slavonic) and later reduced the number of Turkish and other Balkan loans. Today one difference between Bulgarian dialects in the country and literary spoken Bulgarian is the significant presence of Old Bulgarian words and even word forms in the latter.
Russian loans are distinguished from Old Bulgarian ones on the basis of the presence of specifically Russian phonetic changes, as in оборот (turnover, rev), непонятен (incomprehensible), ядро (nucleus) and others. Many other loans from French, English and the classical languages have subsequently entered the language as well. Modern Bulgarian was based essentially on the Eastern dialects of the language, but its pronunciation is in many respects a compromise between East and West Bulgarian (see especially the phonetic sections below). Following the efforts of some figures of the National awakening of Bulgaria (most notably Neofit Rilski and Ivan Bogorov), there were many attempts to codify a standard Bulgarian language; however, there was much argument surrounding the choice of norms. Between 1835 and 1878 more than 25 proposals were put forward and "linguistic chaos" ensued. Eventually the eastern dialects prevailed, and in 1899 the Bulgarian Ministry of Education officially codified a standard Bulgarian language based on the Drinov-Ivanchev orthography. Geographic distribution Bulgarian is the official language of Bulgaria, where it is used in all spheres of public life. As of 2011, it is spoken as a first language by about 6 million people in the country, or about four out of every five Bulgarian citizens. There is also a significant Bulgarian diaspora abroad. One of the main historically established communities is that of the Bessarabian Bulgarians, whose settlement in the Bessarabia region of present-day Moldova and Ukraine dates mostly to the early 19th century. There were 134,000 Bulgarian speakers in Ukraine at the 2001 census, 41,800 in Moldova as of the 2014 census (of which 15,300 were habitual users of the language), and presumably a significant proportion of the 13,200 ethnic Bulgarians residing in neighbouring Transnistria in 2016. Another community abroad are the Banat Bulgarians, who migrated in the 17th century to the Banat region, now split between Romania, Serbia and Hungary. They speak the Banat Bulgarian dialect, which has had its own written standard and a historically important literary tradition. There are Bulgarian speakers in neighbouring countries as well. The regional dialects of Bulgarian and Macedonian form a dialect continuum, and there is no well-defined boundary where one language ends and the other begins. Within the limits of the Republic of North Macedonia, a strong separate Macedonian identity has emerged since the Second World War, even though a small number of citizens still identify their language as Bulgarian. Beyond the borders of North Macedonia, the situation is more fluid, and the pockets of speakers of the related regional dialects in Albania and in Greece variously identify their language as Macedonian or as Bulgarian. In Serbia, there were 7,939 speakers as per the 2022 census, mainly concentrated in the so-called Western Outlands along the border with Bulgaria. Bulgarian is also spoken in Turkey: natively by Pomaks, and as a second language by many Bulgarian Turks who emigrated from Bulgaria, mostly during the "Big Excursion" of 1989. The language is also represented among the diaspora in Western Europe and North America, which has been steadily growing since the 1990s. Countries with significant numbers of speakers include Germany, Spain, Italy, the United Kingdom (38,500 speakers in England and Wales as of 2011), France, the United States, and Canada (19,100 in 2011).
Dialects The language is mainly split into two broad dialect areas, based on the different reflexes of the Proto-Slavic yat vowel (Ѣ). This split, which occurred at some point during the Middle Ages, led to the development of Bulgaria's Eastern dialects, with an alternating reflex of yat, and Western dialects, with a fixed reflex. The literary language norm, which is generally based on the Eastern dialects, also has the Eastern alternating reflex of yat. However, it has not incorporated the general Eastern umlaut of all synchronic or even historic "ya" sounds into "e" before front vowels – e.g. поляна (polyana) vs. полени (poleni) "meadow – meadows" or even жаба (zhaba) vs. жеби (zhebi) "frog – frogs", even though it co-occurs with the yat alternation in almost all Eastern dialects that have it (except a few dialects along the yat border, e.g. in the Pleven region). The literary language contains further examples of the yat umlaut. Until 1945, Bulgarian orthography did not reveal this alternation and used the original Old Slavic Cyrillic letter yat (Ѣ), which was commonly called двойно е (dvoyno e) at the time, to express the historical yat vowel or at least root vowels displaying the ya – e alternation. The letter was used in each occurrence of such a root, regardless of the actual pronunciation of the vowel: thus, both мляко (mlyako, "milk") and млекар (mlekar, "milkman") were spelled with yat. Among other things, this was seen as a way to "reconcile" the Western and the Eastern dialects and maintain language unity at a time when much of Bulgaria's Western dialect area was controlled by Serbia and Greece, but there were still hopes and occasional attempts to recover it. With the 1945 orthographic reform, this letter was abolished and the present spelling was introduced, reflecting the alternation in pronunciation. This had implications for some grammatical constructions, and in some cases words came to be spelled identically to other words with different meanings. In spite of the literary norm regarding the yat vowel, many people living in Western Bulgaria, including the capital Sofia, fail to observe its rules. While the norm requires the realizations vidyal vs. videli (he has seen; they have seen), some natives of Western Bulgaria will preserve their local dialect pronunciation with "e" for all instances of "yat" (e.g. videl, videli). Others, attempting to adhere to the norm, will actually use the "ya" sound even in cases where the standard language has "e" (e.g. vidyal, vidyali). The latter hypercorrection is called свръхякане (svrah-yakane, roughly "over-ya-ing"). Bulgarian is the only Slavic language whose literary standard does not naturally contain the iotated e /jɛ/ (or its variant, e after a palatalized consonant /ʲɛ/, except in non-Slavic loanwords). This sound combination is common in all modern Slavic languages (e.g. Czech medvěd /ˈmɛdvjɛt/ "bear", Polish pięć /pjɛɲt͡ɕ/ "five", Serbo-Croatian jelen /jělen/ "deer", Ukrainian немає /neˈmajɛ/ "there is not", Macedonian пишување /piˈʃuvaɲʲɛ/ "writing", etc.), as well as in some Western Bulgarian dialectal forms – e.g. ора̀н'е /oˈraɲʲɛ/ (standard Bulgarian: оране /oˈranɛ/, "ploughing") – but it is not represented in standard Bulgarian speech or writing. Even where /jɛ/ occurs in other Slavic words, in Standard Bulgarian it is usually transcribed and pronounced as pure /ɛ/ – e.g. Boris Yeltsin is "Eltsin" (Борис Елцин), Yekaterinburg is "Ekaterinburg" (Екатеринбург) and Sarajevo is "Saraevo" (Сараево) – although, because the sound is stressed and word-initial, Jelena Janković is "Yelena Yankovich" (Йелена Янкович).
Relationship to Macedonian Until the period immediately following the Second World War, all Bulgarian and the majority of foreign linguists referred to the South Slavic dialect continuum spanning the area of modern Bulgaria, North Macedonia and parts of Northern Greece as a group of Bulgarian dialects. In contrast, Serbian sources tended to label them "south Serbian" dialects. Some local naming conventions included bolgárski, bugárski and so forth. The codifiers of the standard Bulgarian language, however, did not wish to make any allowances for a pluricentric "Bulgaro-Macedonian" compromise. In 1870 Marin Drinov, who played a decisive role in the standardization of the Bulgarian language, rejected the proposal of Parteniy Zografski and Kuzman Shapkarev for a mixed eastern and western Bulgarian/Macedonian foundation of the standard Bulgarian language, stating in his article in the newspaper Makedoniya: "Such an artificial assembly of written language is something impossible, unattainable and never heard of." After 1944 the People's Republic of Bulgaria and the Socialist Federal Republic of Yugoslavia began a policy of making Macedonia into the connecting link for the establishment of a new Balkan Federative Republic and of stimulating the development of a distinct Macedonian consciousness there. With the proclamation of the Socialist Republic of Macedonia as part of the Yugoslav federation, the new authorities also started measures to overcome the pro-Bulgarian feeling among parts of its population, and in 1945 a separate Macedonian language was codified. After 1958, when the pressure from Moscow decreased, Sofia reverted to the view that the Macedonian language did not exist as a separate language. Nowadays, Bulgarian and Greek linguists, as well as some linguists from other countries, still consider the various Macedonian dialects as part of the broader Bulgarian pluricentric dialectal continuum. Outside Bulgaria and Greece, Macedonian is generally considered an autonomous language within the South Slavic dialect continuum. Sociolinguists agree that the question whether Macedonian is a dialect of Bulgarian or a language is a political one and cannot be resolved on a purely linguistic basis, because dialect continua do not allow for either/or judgements. Phonology Bulgarian possesses a phonology similar to that of the rest of the South Slavic languages, notably lacking Serbo-Croatian's phonemic vowel length and tones as well as alveolo-palatal affricates. There is a general dichotomy between Eastern and Western dialects, with Eastern ones featuring consonant palatalization before front vowels (/ɛ/ and /i/) and substantial vowel reduction of the low vowels /ɛ/, /ɔ/ and /a/ in unstressed position, sometimes leading to neutralisation between /ɛ/ and /i/, /ɔ/ and /u/, and /a/ and /ɤ/. Both patterns have partial parallels in Russian, leading to partially similar sounds. In turn, the Western dialects generally do not have any allophonic palatalization and exhibit minor, if any, vowel reduction. Standard Bulgarian keeps a middle ground between the two macrodialects. It allows palatalization only before central and back vowels and only partial reduction of /a/ and /ɔ/. Reduction of /ɛ/, consonant palatalization before front vowels and depalatalization of palatalized consonants before central and back vowels are strongly discouraged and labelled as provincial. Bulgarian has six vowel phonemes, but at least eight distinct phones can be distinguished when reduced allophones are taken into consideration.
There is currently no consensus on the number of Bulgarian consonants, with one school of thought advocating the existence of only 22 consonant phonemes and another claiming that there are not fewer than 39 consonant phonemes. The main bone of contention is how to treat palatalized consonants: as separate phonemes or as allophones of their respective plain counterparts. The 22-consonant model is based on a general consensus reached by all major Bulgarian linguists in the 1930s and 1940s. In turn, the 39-consonant model was launched in the early 1950s under the influence of the ideas of the Russian linguist Nikolai Trubetzkoy. Despite frequent objections, the support of the Bulgarian Academy of Sciences has ensured Trubetzkoy's model a virtual monopoly in state-issued phonologies and grammars since the 1960s. However, its reception abroad has been lukewarm, with a number of authors either calling the model into question or rejecting it outright. Thus, the Handbook of the International Phonetic Association lists only 22 consonants in Bulgarian's consonant inventory. Alphabet In 886 AD, the Bulgarian Empire introduced the Glagolitic alphabet, which was devised by Saints Cyril and Methodius in the 850s. The Glagolitic alphabet was gradually superseded in later centuries by the Cyrillic script, developed around the Preslav Literary School in Bulgaria in the late 9th century. Several Cyrillic alphabets with 28 to 44 letters were used in the beginning and the middle of the 19th century, during the efforts on the codification of Modern Bulgarian, until an alphabet with 32 letters, proposed by Marin Drinov, gained prominence in the 1870s. The alphabet of Marin Drinov was used until the orthographic reform of 1945, when the letters yat (uppercase Ѣ, lowercase ѣ) and big yus (uppercase Ѫ, lowercase ѫ) were removed from the alphabet, reducing the number of letters to 30. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek scripts. Grammar The parts of speech in Bulgarian are divided into ten types, which are categorized in two broad classes: mutable and immutable. The difference is that mutable parts of speech vary grammatically, whereas the immutable ones do not change, regardless of their use. The five classes of mutables are: nouns, adjectives, numerals, pronouns and verbs. Syntactically, the first four of these form the group of the noun or the nominal group. The immutables are: adverbs, prepositions, conjunctions, particles and interjections. Verbs and adverbs form the group of the verb or the verbal group. Nouns and adjectives have the categories of grammatical gender, number, case (only vocative) and definiteness in Bulgarian. Adjectives and adjectival pronouns agree with nouns in number and gender. Pronouns have gender and number and retain (as in nearly all Indo-European languages) a more significant part of the case system. There are three grammatical genders in Bulgarian: masculine, feminine and neuter.
The gender of the noun can largely be inferred from its ending: nouns ending in a consonant ("zero ending") are generally masculine (for example, град /ɡrat/ 'city', син /sin/ 'son', мъж /mɤʃ/ 'man'); those ending in –а/–я (-a/-ya) (жена /ʒɛˈna/ 'woman', дъщеря /dɐʃtɛrˈja/ 'daughter', улица /ˈulitsɐ/ 'street') are normally feminine; and nouns ending in –е, –о are almost always neuter (дете /dɛˈtɛ/ 'child', езеро /ˈɛzɛro/ 'lake'), as are those rare words (usually loanwords) that end in –и, –у, and –ю (цунами /tsuˈnami/ 'tsunami', табу /tɐˈbu/ 'taboo', меню /mɛˈnju/ 'menu'). Perhaps the most significant exception to the above are the relatively numerous nouns that end in a consonant and yet are feminine: these comprise, firstly, a large group of nouns with zero ending expressing quality, degree or an abstraction, including all nouns ending in –ост/–ест (-ost/-est) (мъдрост /ˈmɤdrost/ 'wisdom', низост /ˈnizost/ 'vileness', прелест /ˈprɛlɛst/ 'loveliness', болест /ˈbɔlɛst/ 'sickness', любов /ljuˈbɔf/ 'love'), and secondly, a much smaller group of irregular nouns with zero ending which denote tangible objects or concepts (кръв /krɤf/ 'blood', кост /kɔst/ 'bone', вечер /ˈvɛtʃɛr/ 'evening', нощ /nɔʃt/ 'night'). There are also some commonly used words that end in a vowel and yet are masculine: баща 'father', дядо 'grandfather', чичо / вуйчо 'uncle', and others. These ending-based rules are sketched in code at the end of this passage. The plural forms of the nouns do not express their gender as clearly as the singular ones, but may also provide some clues to it: the ending –и (-i) is more likely to be used with a masculine or feminine noun (факти /ˈfakti/ 'facts', болести /ˈbɔlɛsti/ 'sicknesses'), while one in –а/–я belongs more often to a neuter noun (езера /ɛzɛˈra/ 'lakes'). Also, the plural ending –ове /ovɛ/ occurs only in masculine nouns. Two numbers are distinguished in Bulgarian: singular and plural. A variety of plural suffixes is used, and the choice between them is partly determined by the noun's ending in the singular and partly influenced by gender; in addition, irregular declension and alternative plural forms are common. Words ending in –а/–я (which are usually feminine) generally have the plural ending –и, upon dropping of the singular ending. Of nouns ending in a consonant, the feminine ones also use –и, whereas the masculine ones usually have –и for polysyllables and –ове for monosyllables (however, exceptions are especially common in this group). Nouns ending in –о/–е (most of which are neuter) mostly use the suffixes –а, –я (both of which require the dropping of the singular endings) and –та. With cardinal numbers and related words such as няколко ('several'), masculine nouns use a special count form in –а/–я, which stems from the Proto-Slavonic dual: два/три стола ('two/three chairs') versus тези столове ('these chairs'); cf. feminine две/три/тези книги ('two/three/these books') and neuter две/три/тези легла ('two/three/these beds'). However, a recently developed language norm requires that count forms should only be used with masculine nouns that do not denote persons. Thus, двама/трима ученици ('two/three students') is perceived as more correct than двама/трима ученика, while the distinction is retained in cases such as два/три молива ('two/three pencils') versus тези моливи ('these pencils'). Cases exist only in the personal and some other pronouns (as they do in many other modern Indo-European languages), with nominative, accusative, dative and vocative forms. Vestiges are present in a number of phraseological units and sayings.
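The ending-based gender rules described above are regular enough to sketch mechanically. The following Python fragment is an illustrative sketch, not part of the source article: it encodes only the heuristics and the example exceptions listed above, and any real morphological tool would need a full lexicon, since the exception classes are open-ended.

    # Heuristic gender guesser for Bulgarian nouns, encoding only the
    # ending-based rules described above. Illustrative sketch; a real
    # analyzer needs a lexicon, as the exception lists are open-ended.

    FEMININE_EXCEPTIONS = {"мъдрост", "низост", "прелест", "болест", "любов",
                           "кръв", "кост", "вечер", "нощ"}    # consonant-final, yet feminine
    MASCULINE_EXCEPTIONS = {"баща", "дядо", "чичо", "вуйчо"}  # vowel-final, yet masculine

    def guess_gender(noun: str) -> str:
        if noun in FEMININE_EXCEPTIONS:
            return "feminine"
        if noun in MASCULINE_EXCEPTIONS:
            return "masculine"
        if noun.endswith(("ост", "ест")):             # abstract nouns in -ost/-est
            return "feminine"
        if noun.endswith(("а", "я")):                 # -a/-ya endings
            return "feminine"
        if noun.endswith(("е", "о", "и", "у", "ю")):  # incl. rare loanwords
            return "neuter"
        return "masculine"                            # zero (consonant) ending

    assert guess_gender("град") == "masculine"   # 'city'
    assert guess_gender("жена") == "feminine"    # 'woman'
    assert guess_gender("дете") == "neuter"      # 'child'
    assert guess_gender("болест") == "feminine"  # 'sickness'
    assert guess_gender("баща") == "masculine"   # 'father'

The order of the checks matters: the -ост/-ест test must run before the default consonant-final rule, since those abstract nouns would otherwise be misclassified as masculine.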
The major exception to the loss of noun case is the vocative, whose forms are still in use for masculine nouns (with the endings -е, -о and -ю) and feminine nouns (-[ь/й]о and -е) in the singular. In modern Bulgarian, definiteness is expressed by a definite article which is postfixed to the noun, much as in the Scandinavian languages or Romanian (indefinite: човек, 'person'; definite: човекът, 'the person'), or to the first nominal constituent of a definite noun phrase (indefinite: добър човек, 'a good person'; definite: добрият човек, 'the good person'). There are four singular definite articles. Again, the choice between them is largely determined by the noun's ending in the singular. Nouns that end in a consonant and are masculine use –ът/–ят when they are grammatical subjects, and –а/–я elsewhere. Nouns that end in a consonant and are feminine, as well as nouns that end in –а/–я (most of which are feminine, too), use –та. Nouns that end in –е/–о use –то. The plural definite article is –те for all nouns except for plural forms that end in –а/–я, which take –та instead. When postfixed to adjectives the definite articles are –ят/–я for masculine gender (again, with the longer form being reserved for grammatical subjects), –та for feminine gender, –то for neuter gender, and –те for the plural. These article rules are likewise sketched in code at the end of this passage. Both groups agree in gender and number with the noun they are appended to, and they may also take the definite article as explained above. Pronouns may vary in gender, number, and definiteness, and are the only parts of speech that have retained case inflections. Three cases are exhibited by some groups of pronouns – nominative, accusative and dative. The distinguishable types of pronouns include the following: personal, relative, reflexive, interrogative, negative, indefinite, summative and possessive. A Bulgarian verb has many distinct forms, as it varies in person, number, voice, aspect, mood, tense and, in some cases, gender. Finite verbal forms are simple or compound and agree with subjects in person (first, second and third) and number (singular, plural). In addition to that, past compound forms using participles vary in gender (masculine, feminine, neuter) and voice (active and passive) as well as aspect (perfective/aorist and imperfective). Bulgarian verbs express lexical aspect: perfective verbs signify the completion of the action of the verb and form past perfective (aorist) forms; imperfective ones are neutral with regard to it and form past imperfective forms. Most Bulgarian verbs can be grouped in perfective-imperfective pairs (imperfective/perfective: идвам/дойда "come", пристигам/пристигна "arrive"). Perfective verbs can usually be formed from imperfective ones by suffixation or prefixation, but the resultant verb often deviates in meaning from the original. In the pair examples above, aspect is stem-specific and therefore there is no difference in meaning. In Bulgarian, there is also grammatical aspect. Three grammatical aspects are distinguishable: neutral, perfect and pluperfect. The neutral aspect comprises the three simple tenses and the future tense. The pluperfect is manifest in tenses that use double or triple auxiliary "be" participles like the past pluperfect subjunctive. Perfect constructions use a single auxiliary "be".
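Since the choice of definite article depends almost entirely on the noun's ending, number and syntactic role, it too can be sketched mechanically. The Python fragment below is a hedged illustration of the rules just described, not part of the source article; it assumes the caller supplies the gender and ignores the lexically conditioned soft-stem masculine variants –ят/–я.

    # Postfixed definite article chooser, encoding the ending-based rules
    # described above. Sketch under simplifying assumptions: the soft-stem
    # masculine variants -ят/-я are lexically conditioned and omitted here.

    def definite(noun: str, gender: str, plural: bool, subject: bool) -> str:
        if plural:
            # plural: -те, except plurals in -а/-я, which take -та
            suffix = "та" if noun.endswith(("а", "я")) else "те"
        elif gender == "masculine" and not noun.endswith(("а", "я", "о", "е")):
            # consonant-final masculine: full -ът as subject, short -а elsewhere
            suffix = "ът" if subject else "а"
        elif noun.endswith(("а", "я")) or gender == "feminine":
            suffix = "та"
        else:                                          # nouns in -е/-о
            suffix = "то"
        return noun + suffix

    print(definite("човек", "masculine", plural=False, subject=True))    # човекът
    print(definite("жена", "feminine", plural=False, subject=False))     # жената
    print(definite("езеро", "neuter", plural=False, subject=False))      # езерото
    print(definite("столове", "masculine", plural=True, subject=False))  # столовете

Note that the subject/non-subject split applies only to the masculine singular; all other forms are invariant with respect to syntactic role.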
The traditional interpretation is that, in addition to the four moods (наклонения /nəkloˈnɛnijɐ/) shared by most other European languages – indicative (изявително /izʲəˈvitɛɫno/), imperative (повелително /poveˈlitelno/), subjunctive (подчинително /pottʃiˈnitɛɫno/) and conditional (условно /oˈsɫɔvno/) – Bulgarian has one more mood to describe a general category of unwitnessed events: the inferential (преизказно /prɛˈiskɐzno/) mood. However, most contemporary Bulgarian linguists exclude the subjunctive and the inferential from the list of Bulgarian moods (thus placing the number of Bulgarian moods at a total of three: indicative, imperative and conditional), viewing them not as moods but as verbal morphosyntactic constructs or separate grammemes of the verb class. The possible existence of a few other moods has been discussed in the literature. Most Bulgarian school grammars teach the traditional view of four Bulgarian moods (as described above, but excluding the subjunctive and including the inferential). There are three grammatically distinctive positions in time – present, past and future – which combine with aspect and mood to produce a number of formations. Normally, in grammar books these formations are viewed as separate tenses – i.e. "past imperfect" would mean that the verb is in past tense, in the imperfective aspect, and in the indicative mood (since no other mood is shown). There are more than 40 different tenses across Bulgarian's two aspects and five moods. In the indicative mood there are three simple tenses as well as several compound tenses. The perfect constructions among them can vary in aspect depending on the aspect of the main-verb participle; they are in fact pairs of imperfective and perfective aspects. Verbs in forms using past participles also vary in voice and gender. There is only one simple tense in the imperative mood, the present, and there are simple forms only for the second-person singular, -и/-й (-i, -y), and plural, -ете/-йте (-ete, -yte), e.g. уча /ˈutʃɐ/ ('to study'): учи /oˈtʃi/, sg., учете /oˈtʃɛtɛ/, pl.; играя /ˈiɡrajɐ/ ('to play'): играй /iɡˈraj/, играйте /iɡˈrajtɛ/. There are compound imperative forms for all persons and numbers in the present compound imperative (да играе /da iɡˈrae/), the present perfect compound imperative (да е играл /dɐ ɛ iɡˈraɫ/) and the rarely used present pluperfect compound imperative (да е бил играл /dɐ ɛ bil iɡˈraɫ/). The conditional mood consists of five compound tenses, most of which are not grammatically distinguishable. The present, future and past conditional use a special past form of the stem би- (bi, "be") and the past participle (бих учил /bix ˈutʃiɫ/, 'I would study'). The past future conditional and the past future perfect conditional coincide in form with the respective indicative tenses. The subjunctive mood is rarely documented as a separate verb form in Bulgarian (being, morphologically, a sub-instance of the quasi-infinitive construction with the particle да and a normal finite verb form), but it is nevertheless used regularly. The most common form, often mistaken for the present tense, is the present subjunctive ([по-добре] да отида /(ˈpɔdobrɛ) dɐ oˈtidɐ/, 'I had better go'). The difference between the present indicative and the present subjunctive is that the subjunctive can be formed by both perfective and imperfective verbs. It has completely replaced the infinitive and the supine in complex expressions (see below).
It is also employed to express opinion about possible future events. The past perfect subjunctive ([по-добре] да бях отишъл /(ˈpɔdobrɛ) dɐ bʲax oˈtiʃɐl/, 'I had better have gone') refers to possible events in the past which did not take place, and the present pluperfect subjunctive (да съм бил отишъл /dɐ sɐm bil oˈtiʃɐl/) may be used about both past and future events arousing feelings of incontinence, suspicion, etc. The inferential mood has five pure tenses. Two of them are simple – past aorist inferential and past imperfect inferential – and are formed by the past participles of perfective and imperfective verbs, respectively. There are also three compound tenses – past future inferential, past future perfect inferential and past perfect inferential. All these tenses' forms are gender-specific in the singular. There are also conditional and compound-imperative crossovers. The existence of inferential forms has been attributed to Turkic influences by most Bulgarian linguists. Morphologically, they are derived from the perfect. Bulgarian has several participles, which are inflected by gender, number, and definiteness, and are coordinated with the subject when forming compound tenses (see tenses above). When used in an attributive role, the inflection attributes are coordinated with the noun that is being attributed. Bulgarian uses reflexive verbal forms (i.e. actions which are performed by the agent onto him- or herself), which behave in a similar way as they do in many other Indo-European languages, such as French and Spanish. The reflexive is expressed by the invariable particle se,[note 1] originally a clitic form of the accusative reflexive pronoun. When the action is performed on others, other particles are used, just as with any normal verb. Sometimes the reflexive verb form has a similar but not necessarily identical meaning to the non-reflexive verb; in other cases, the reflexive verb has a completely different meaning from its non-reflexive counterpart. When the action is performed on an indirect object, the particles change to si and its derivatives. In some cases, the particle si is ambiguous between the indirect object and the possessive meaning. The difference between transitive and intransitive verbs can lead to significant differences in meaning with minimal change. The particle si is often used to indicate a more personal relationship to the action. The most productive way to form adverbs is to derive them from the neuter singular form of the corresponding adjective – e.g. бързо (fast), силно (hard), странно (strange) – but adjectives ending in -ки use the masculine singular form (i.e. ending in -ки) instead – e.g. юнашки (heroically), мъжки (bravely, like a man), майсторски (skillfully). The same pattern is used to form adverbs from the (adjective-like) ordinal numerals, e.g. първо (firstly), второ (secondly), трето (thirdly), and in some cases from (adjective-like) cardinal numerals, e.g. двойно (twice as/double), тройно (three times as), петорно (five times as). The remaining adverbs are formed in ways that are no longer productive in the language. A small number are original (not derived from other words), for example: тук (here), там (there), вътре (inside), вън (outside), много (very/much) etc.
The rest are mostly fossilized case forms. Adverbs can sometimes be reduplicated to emphasize the qualitative or quantitative properties of actions, moods or relations as performed by the subject of the sentence: "бавно-бавно" ("rather slowly"), "едва-едва" ("with great difficulty"), "съвсем-съвсем" ("quite", "thoroughly"). Questions in Bulgarian which do not use a question word (such as who? what? etc.) are formed with the particle ли after the verb; a subject is not necessary, as the verbal conjugation suggests who is performing the action. While the particle ли generally goes after the verb, it can go after a noun or adjective if a contrast is needed, and a verb is not always necessary, e.g. when presenting a choice. Rhetorical questions can be formed by adding ли to a question word, thus forming a "double interrogative"; the same construction plus не ('no') is an emphasized positive. The verb съм /sɤm/[note 3] ('to be') is also used as an auxiliary for forming the perfect, the passive and the conditional; two alternate forms of съм exist. The impersonal verb ще (lit. 'it wants')[note 5] is used to form the (positive) future tense, while the negative future is formed with the invariable construction няма да /ˈɲamɐ dɐ/ (see няма below).[note 6] The past tense of this verb, щях /ʃtʲax/, is conjugated to form the past conditional ('would have'; again with да, since it is irrealis). The verbs имам /ˈimɐm/ ('to have') and нямам /ˈɲamɐm/ ('to not have') are also notable. In Bulgarian, there are several conjunctions all translating into English as "but", which are used in distinct situations. They are но (no), ама (amà), а (a), ами (amì), and ала (alà) (and обаче (obache), "however", identical in use to но). While there is some overlap between their uses, in many cases they are specific. For example, ami is used for a choice – ne tova, ami onova, "not this one, but that one" (compare Spanish sino) – while ama is often used to provide extra information or an opinion – kazah go, ama sgreshih, "I said it, but I was wrong". Meanwhile, a provides contrast between two situations, and in some sentences can even be translated as "although", "while" or even "and" – az rabotya, a toy blee, "I'm working, and he's daydreaming". Very often, different words can be used to alter the emphasis of a sentence – e.g. while pusha, no ne tryabva and pusha, a ne tryabva both mean "I smoke, but I shouldn't", the first sounds more like a statement of fact ("...but I mustn't"), while the second feels more like a judgement ("...but I oughtn't"). Similarly, az ne iskam, ama toy iska and az ne iskam, a toy iska both mean "I don't want to, but he does"; however, the first emphasizes the fact that he wants to, while the second emphasizes the wanting rather than the person. Ala is interesting in that, while it feels archaic, it is often used in poetry and frequently in children's stories, since it has quite a moral/ominous feel to it. Some common expressions use these words, and some can be used alone as interjections. Bulgarian has several abstract particles which are used to strengthen a statement. These have no precise translation in English.[note 8] The particles are strictly informal and can even be considered rude by some people and in some situations. They are mostly used at the end of questions or instructions. These are "tagged" onto the beginning or end of a sentence to express the mood of the speaker in relation to the situation, and are mostly interrogative or slightly imperative in nature.
There is no change in the grammatical mood when these are used (although they may be expressed through different grammatical moods in other languages). Other particles express intent or desire, perhaps even pleading; they can be seen as a sort of cohortative side to the language, and since they can be used by themselves, they could even be considered verbs in their own right. They are also highly informal. These particles can be combined with the vocative particles for greater effect, e.g. ya da vidya, be (let me see), or even used exclusively in combination with them, with no other elements, e.g. hayde, de! (come on!); nedey, de! (I told you not to!). Bulgarian has several pronouns of quality which have no direct parallels in English – kakav (what sort of); takuv (this sort of); onakuv (that sort of – colloq.); nyakakav (some sort of); nikakav (no sort of); vsyakakav (every sort of); and the relative pronoun kakavto (the sort of ... that ...). The adjective ednakuv ("the same") derives from the same radical.[note 9] These pronouns can be strung together one after another in quite long constructions, and an extreme, albeit colloquial, example can carry almost no intrinsic lexical meaning yet remain meaningful to the Bulgarian ear; in one such sentence, the subject is simply the pronoun "taya" (lit. "this one here"; colloq. "she"). Another interesting phenomenon observed in colloquial speech is the use of takova (neuter of takyv) not only as a substitute for an adjective, but also as a substitute for a verb. In that case the base form takova is used as the third person singular in the present indicative, and all other forms are formed by analogy to other verbs in the language. Sometimes the "verb" may even acquire a derivational prefix that changes its meaning. Another use of takova in colloquial speech is the word takovata, which can be used as a substitution for a noun; also, if the speaker does not remember or is not sure how to say something, they might say takovata and then pause to think about it. As a result of this versatility, the word takova can readily be used as a euphemism for taboo subjects; it is commonly used to substitute, for example, words relating to reproductive organs or sexual acts. Similar "meaningless" expressions are extremely common in spoken Bulgarian, especially when the speaker is finding it difficult to describe or express something. Syntax Bulgarian employs clitic doubling, mostly for emphatic purposes, and such constructions are common in colloquial Bulgarian. The phenomenon is practically obligatory in the spoken language in the case of inversion signalling information structure (in writing, clitic doubling may be skipped in such instances, with a somewhat bookish effect). Sometimes the doubling signals syntactic relations; in such cases, clitic doubling can be a colloquial alternative to the more formal or bookish passive voice. Clitic doubling is also fully obligatory, both in the spoken and in the written norm, in clauses including several special expressions that use the short accusative and dative pronouns, such as "играе ми се" (I feel like playing), студено ми е (I am cold), and боли ме ръката (my arm hurts). Apart from these cases, clitic doubling is considered inappropriate in a formal context.
Vocabulary Most of the vocabulary of modern Bulgarian consists of terms inherited from Proto-Slavic, along with local Bulgarian innovations and formations of those through the mediation of Old and Middle Bulgarian. The native terms in Bulgarian account for 70% to 80% of the lexicon. The remaining 20% to 30% are loanwords from a number of languages, as well as derivations of such words. Bulgarian has also adopted a few words of Thracian and Bulgar origin. The classical languages Latin and Greek are the source of many words, used mostly in international terminology. Many Latin terms entered Bulgarian during the time when present-day Bulgaria was part of the Roman Empire, and also in later centuries through Romanian, Aromanian, and Megleno-Romanian during the Bulgarian Empires. The loanwords of Greek origin in Bulgarian are a product of the influence of the liturgical language of the Orthodox Church. Many of the numerous loanwords from another Turkic language, Ottoman Turkish, and, via Ottoman Turkish, from Arabic were adopted into Bulgarian during the long period of Ottoman rule, but have been replaced with native Bulgarian terms. Furthermore, after the independence of Bulgaria from the Ottoman Empire in 1878, Bulgarian intellectuals imported much French vocabulary. In addition, both specialized (usually coming from the field of science) and commonplace English words (notably abstract, commodity/service-related or technical terms) have penetrated Bulgarian since the second half of the 20th century, especially since 1989. A noteworthy portion of this English-derived terminology has attained some unique features in the process of its introduction to native speakers, resulting in peculiar derivations that set the newly formed loanwords apart from the original words (mainly in pronunciation), although many loanwords are completely identical to the source words. A growing number of international neologisms are also being widely adopted, causing controversy between younger generations who, in general, are raised in the era of digital globalization, and the older, more conservative educated purists.
========================================
[SOURCE: https://github.com/features/ai] | [TOKENS: 494]
AI for every step of your workflow GitHub Copilot works with you and for you to bring big ideas to life and push technology forward. Accelerate from idea to first commit Turn ambitious projects into a functional codebase with AI that understands your intent. Go from idea to deployed application using natural language with built-in AI, database, and authentication. Use plan mode in VS Code to approve Copilot’s blueprint before it starts building. Complete complex tasks quickly by using agent mode to analyze your code, propose edits, run tests, and validate results. Command your codebase Put Copilot to work on tasks in the background, clearing your path to focus on the next creative challenge. Assign issues to Copilot and get fully-formed pull requests back. Build and share specialized agents that connect to your tools and automate workflows. Get a mission control view of all your agent tasks to track their progress and stay in control. Secure and ship quality code Deploy with confidence as Copilot helps you find and fix vulnerabilities in real time. Eliminate vulnerabilities on the spot with intelligent, automated suggestions from Copilot Autofix. Use Copilot to analyze your work, uncover hidden bugs, and fix mistakes before your team reviews. Gain full visibility and control over agent-powered software development throughout your business. Tailor AI to your needs Prioritize speed, depth, or cost by picking the industry-leading model that’s right for you with GitHub Models. Bring the rich context of GitHub into your AI tools with the GitHub MCP Server. Find a community-driven registry of custom MCP servers via the GitHub MCP Registry. Trusted by the world’s leading organizations From startups to Fortune 100 enterprises, companies choose Copilot to innovate faster while keeping their code secure. Explore AI at GitHub Everything you need to get up and running with your AI pair programmer. Use these prompt examples to build faster and smarter. See how top organizations are using AI to transform software development. Tips, tutorials, and news for developers at every level. Learn how to experiment with and evaluate AI models in your workflow.
========================================
[SOURCE: https://github.com/security] | [TOKENS: 275]
Powerful security, designed for developers Get enterprise-grade, built-in application security. Find out how platform security strengthens your workflow. GitHub’s API stays secure with ISO, SOC 2, and GDPR. Join the companies that secure their code with GitHub Security seamlessly integrated into your workflow Push protection automatically blocks secrets before they reach your repository, keeping code clean without disrupting workflows. Address security debt in your GitHub workflow with static analysis, AI remediation, and proactive vulnerability management. Securing the entire software supply chain Learn how the lab helps secure open source by finding vulnerabilities, building tools like CodeQL, and advancing security research. Access a security vulnerability database inclusive of CVEs and GitHub originated security advisories from the world of open source software. Adopted by the world's leading organizations Resources to get started Take an in-depth look at the current state of application security. Learn how to write more secure code from the start with DevSecOps. Explore common application security pitfalls and how to avoid them.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Biological_energy] | [TOKENS: 839]
Contents Biological thermodynamics Biological thermodynamics (thermodynamics of biological systems) is a science that explains the nature and general laws of thermodynamic processes occurring in living organisms, treated as nonequilibrium thermodynamic systems that convert the energy of the Sun and food into other types of energy. The nonequilibrium thermodynamic state of living organisms is ensured by the continuous alternation of cycles of controlled biochemical reactions, accompanied by the release and absorption of energy, which provides them with the properties of phenotypic adaptation, among others. History In 1935, the first scientific work devoted to the thermodynamics of biological systems was published: Theoretical Biology, by the Hungarian-Russian theoretical biologist Erwin S. Bauer (1890-1938). Bauer formulated the "Universal Law of Biology" in the following wording: "All and only living systems are never in equilibrium and perform constant work at the expense of their free energy against the equilibrium required by the laws of physics and chemistry under existing external conditions". This law can be considered the first law of thermodynamics of biological systems. In 1957, the German-British physician and biochemist Hans Krebs and the British-American biochemist Hans Kornberg first described the thermodynamics of biochemical reactions in their book Energy Transformations in Living Matter. Krebs and Kornberg showed how, in living cells, biochemical reactions synthesize adenosine triphosphate (ATP) from food, the main source of energy of living organisms (the Krebs-Kornberg cycle). In 2006, the Israeli-Russian scientist Boris Dobroborsky (born 1945) published the book Thermodynamics of Biological Systems, in which the general principles of functioning of living organisms were formulated for the first time from the perspective of nonequilibrium thermodynamics, and the nature and properties of their basic physiological functions were explained. The main provisions of the theory of thermodynamics of biological systems A living organism is a thermodynamic system of an active type (one in which energy transformations occur), striving for a stable nonequilibrium thermodynamic state. The nonequilibrium thermodynamic state in plants is achieved by the continuous alternation of phases of solar energy consumption, through photosynthesis and the subsequent biochemical reactions by which adenosine triphosphate (ATP) is synthesized in the daytime, and the subsequent release of energy during the splitting of ATP, mainly in the dark. Thus, one of the conditions for the existence of life on Earth is the alternation of light and dark times of day. In animals, the alternating cycles of biochemical reactions of ATP synthesis and cleavage occur automatically. The alternating cycles at the levels of organs, systems and the whole organism (for example, respiration and heart contractions) occur with different periods and manifest externally as biorhythms. At the same time, the stability of the nonequilibrium thermodynamic state, optimal under given conditions of vital activity, is provided by feedback systems that regulate biochemical reactions, in accordance with Lyapunov stability theory. This principle of vital activity was formulated by Boris
Dobroborsky as the second law of thermodynamics of biological systems, in the following wording: The stability of the nonequilibrium thermodynamic state of biological systems is ensured by the continuous alternation of phases of energy consumption and release through controlled reactions of synthesis and cleavage of ATP. The following consequences follow from this law: 1. In living organisms, no process can occur continuously; it must alternate with its opposite: inhalation with exhalation, work with rest, wakefulness with sleep, synthesis with cleavage, and so on. 2. The state of a living organism is never static, and all its physiological and energy parameters are always in a state of continuous fluctuation around their average values, both in frequency and in amplitude. This principle of functioning provides living organisms with the properties of phenotypic adaptation, among others.
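The alternation principle behind these two consequences can be pictured with a small toy model. The sketch below is illustrative only and not from the source: it treats an abstract energy store as a relaxation oscillator that switches between a "synthesis" phase and a "cleavage" phase at two thresholds, so the tracked quantity keeps fluctuating around its mean instead of settling into a static equilibrium; all parameters are arbitrary assumptions, not biological data.

    # Toy relaxation oscillator illustrating the alternation-of-phases idea.
    # An abstract energy store e is charged during a "synthesis" phase and
    # drained during a "cleavage" phase, switching at two thresholds.
    # Illustrative only; the parameters are arbitrary, not biological data.

    def simulate(steps=200, dt=0.05, lo=0.3, hi=0.9):
        e, phase = 0.5, "synthesis"
        trace = []
        for _ in range(steps):
            if phase == "synthesis":
                e += 0.8 * dt              # energy stored (ATP synthesis)
                if e >= hi:
                    phase = "cleavage"
            else:
                e -= 1.2 * dt              # energy released (ATP cleavage)
                if e <= lo:
                    phase = "synthesis"
            trace.append(e)
        return trace

    trace = simulate()
    mean = sum(trace) / len(trace)
    print(f"mean {mean:.2f}, min {min(trace):.2f}, max {max(trace):.2f}")

The level never settles: it oscillates between the two thresholds, echoing the claim that physiological and energy parameters fluctuate continuously around their average values rather than reaching a static state.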
========================================