[SOURCE: https://en.wikipedia.org/wiki/International_Conference_on_Learning_Representations] | [TOKENS: 157]
International Conference on Learning Representations The International Conference on Learning Representations (ICLR) is a machine learning conference typically held in late April or early May each year. Along with NeurIPS and ICML, it is one of the three primary conferences of highest impact and reputation in machine learning and artificial intelligence research. The conference includes invited talks as well as oral and poster presentations of refereed papers. Since its inception in 2013, ICLR has employed an open peer review process to referee paper submissions (based on models proposed by Yann LeCun). It was founded by LeCun and Yoshua Bengio in 2012.
========================================
[SOURCE: https://en.wikipedia.org/wiki/English_language] | [TOKENS: 16893]
English language English is a West Germanic language that emerged in early medieval England and has since become a global lingua franca. The namesake of the language is the Angles, one of the Germanic peoples who migrated to Britain after the end of Roman rule. English is the most spoken language in the world, primarily due to the global influences of the former British Empire (succeeded by the Commonwealth of Nations) and the United States. It is the most widely learned second language in the world, with more second-language speakers than native speakers. However, English is only the third-most spoken native language, after Mandarin Chinese and Spanish. English is either the official language, or one of the official languages, in 57 sovereign states and 30 dependent territories, making it the most geographically widespread language in the world. In the United Kingdom, the United States, Australia, and New Zealand, it is the dominant language for historical reasons without being explicitly defined by law. It is a co-official language of the United Nations, the European Union, and many other international and regional organisations. It has also become the de facto lingua franca of diplomacy, science, technology, international trade, logistics, tourism, aviation, entertainment, and the Internet. Ethnologue estimated that there were over 1.4 billion speakers worldwide as of 2021. Old English emerged from a group of West Germanic dialects spoken by the Anglo-Saxons. Early inscriptions were written with runes before a Latin-based alphabet was adopted for longer texts. Late Old English borrowed some grammar and core vocabulary from Old Norse, a North Germanic language. An evolution of the Latin alphabet, the English alphabet, fully supplanted the runic alphabet by the High Middle Ages, coinciding with the emergence of Middle English in England under Norman control. 
Middle English borrowed vocabulary extensively from French dialects, which are the source of approximately 28 per cent of Modern English words, and from Latin, which is the source of an additional 28 per cent. While Latin and the Romance languages are thus the source for a majority of its lexicon taken as a whole, English's grammar and phonology remain Germanic, as does most of its basic everyday vocabulary. Finally, Middle English transformed, in part through the Great Vowel Shift, into Modern English, which exists on a dialect continuum with Scots; it is next-most closely related to Low Saxon and Frisian. Classification English is a member of the Indo-European language family, belonging to the West Germanic branch of Germanic languages. Owing to their descent from a shared ancestor language known as Proto-Germanic, English and other Germanic languages – which include Dutch, German, and Swedish – have characteristic features in common, including a division of verbs into strong and weak classes, the use of modal verbs, and sound changes affecting Proto-Indo-European consonants known as Grimm's and Verner's laws. Old English was one of several Ingvaeonic languages, which emerged from a dialect continuum spoken by West Germanic peoples during the 5th century in Frisia, on the coast of the North Sea. Old English emerged among the Ingvaeonic speakers on the British Isles following their migration there, while the other Ingvaeonic languages (Frisian and Old Low German) developed in parallel on the continent. Old English evolved into Middle English, which in turn evolved into Modern English. Particular dialects of Old and Middle English also developed into other Anglic languages, including Scots and the extinct Fingallian and Yola dialects of Ireland. English was isolated from other Germanic languages on the continent and diverged considerably in vocabulary, syntax, and phonology as a result. 
It is not mutually intelligible with any continental Germanic language – though some, such as Dutch and Frisian, show strong affinities with it, especially in its earlier stages. English and Frisian were traditionally considered more closely related to one another than they were to other West Germanic languages, but most modern scholarship does not recognise a particular affinity between them. Though they exhibited similar sound changes not otherwise found around the North Sea at that time, the specific changes appeared in English and Frisian at different times – a pattern uncharacteristic for languages sharing a unique phylogenetic ancestor. History Old English (also called Anglo-Saxon) was the earliest form of the English language, spoken from c. 450 to c. 1150. Old English developed from a set of West Germanic dialects, sometimes identified as Anglo-Frisian or North Sea Germanic, that were originally spoken along the coasts of Frisia, Lower Saxony and southern Jutland by Germanic peoples known to the historical record as the Angles, Saxons, and Jutes. From the 5th century, the Anglo-Saxons settled Britain as the Roman economy and administration collapsed. By the 7th century, Old English had become dominant in Britain – replacing the Common Brittonic and British Latin previously spoken during the Roman occupation, which ultimately left little influence on English. England and English (originally Ænglaland and Ænglisc) are both named after the Angles. Old English was divided into two Anglian dialects (Mercian and Northumbrian) and two Saxon dialects (Kentish and West Saxon). Through the influence exerted by the kingdom of Wessex, and the educational reforms instated by King Alfred during the 9th century, the West Saxon dialect became the standard written variety. The epic poem Beowulf is written in West Saxon, and the earliest English poem, Cædmon's Hymn, is written in Northumbrian. 
Modern English developed mainly from Mercian, but the Scots language developed from Northumbrian. During the earliest period of Old English, a few short inscriptions were made using a runic alphabet. By the 7th century, a Latin alphabet had been adopted. Written with half-uncial letterforms, it included the runic letters wynn ⟨ƿ⟩ and thorn ⟨þ⟩, and the modified Latin letters eth ⟨ð⟩, and ash ⟨æ⟩. Old English is markedly different from Modern English, such that 21st-century English speakers are largely unable to understand Old English without special training. Its grammar was similar to that of modern German: nouns, adjectives, pronouns, and verbs had many more inflectional endings and forms, and word order was much freer than in Modern English. Modern English has case forms in pronouns (he, him, his) and has a few verb inflections (speak, speaks, speaking, spoke, spoken), but Old English had case endings in nouns as well, and verbs had more person and number endings. Between the 8th and 11th centuries, the English spoken in some regions underwent significant changes due to contact with Old Norse, a North Germanic language. Several waves of Norsemen colonising the northern British Isles in the 8th and 9th centuries put Old English speakers in constant contact with Old Norse. Norse influence was strongest in the north-eastern varieties of Old English spoken in the Danelaw surrounding York; today these features are still particularly present in Scots and Northern English. The centre of Norse influence was Lindsey, located in the Midlands. After Lindsey was reincorporated into the Anglo-Saxon polity in 920, Norse features spread from there throughout the region. An element of Norse influence that continues in all English varieties today is the third person pronoun group beginning with th- (they, them, their) which replaced the Anglo-Saxon pronouns with h- (hie, him, hera). 
Other Norse loanwords include give, get, sky, skirt, egg, and cake, typically displacing a native Anglo-Saxon equivalent. Old Norse in this era retained considerable mutual intelligibility with some dialects of Old English, particularly northern ones. Englischmen þeyz hy hadde fram þe bygynnyng þre manner speche, Souþeron, Northeron, and Myddel speche in þe myddel of þe lond, ... Noþeles by comyxstion and mellyng, furst wiþ Danes, and afterward wiþ Normans, in menye þe contray longage ys asperyed, and som vseþ strange wlaffyng, chyteryng, harryng, and garryng grisbytting. [Although, from the beginning, Englishmen had three manners of speaking, southern, northern and midlands speech in the middle of the country, ... Nevertheless, through intermingling and mixing, first with Danes and then with Normans, amongst many the country language is impaired, and some use strange stammering, chattering, snarling, and grating gnashing.] The Middle English period is often defined as beginning with the Norman Conquest in 1066. During the centuries that followed, English was heavily influenced by the form of Old French spoken by the new Norman ruling class that had migrated to England (known as Old Norman). Over the following decades of contact, members of the middle and upper classes, whether native English or Norman, became increasingly bilingual. By 1150 at the latest, bilingual speakers represented a majority of the English aristocracy, and monolingual French speakers were nearly non-existent. The French spoken by the Norman elite in England eventually developed into the Anglo-Norman language. The division between Old and Middle English can also be placed during the composition of the Ormulum (c. late 12th century), a work by the Augustinian canon Orrm which highlights blending of Old English and Anglo-Norman elements in the language for the first time. 
As the lower classes, who represented the vast majority of the population, remained monolingual English speakers, a primary influence of Norman was as a lexical superstratum, introducing a wide range of loanwords related to politics, legislation and prestigious social domains. For instance, the French word trône appears for the first time, from which the English word throne is derived. Middle English also greatly simplified the inflectional system, probably in order to reconcile Old Norse and Old English, which were inflectionally different but morphologically similar. The distinction between nominative and accusative cases was lost except in personal pronouns, the instrumental case was dropped, and the use of the genitive case was limited to indicating possession. The inflectional system regularised many irregular inflectional forms, and gradually simplified the system of agreement, making word order less flexible. Middle English literature includes Geoffrey Chaucer's Canterbury Tales (c. 1400), and Thomas Malory's Le Morte d'Arthur (1485). In the Middle English period, the use of regional dialects in writing proliferated, and dialect traits were even used for effect by authors such as Chaucer. In the first translation of the entire Bible into English by John Wycliffe (1382), Matthew 8:20 reads: "Foxis han dennes, and briddis of heuene han nestis." Here the plural suffix -n on the verb have is still retained, but none of the case endings on the nouns are present. The period of Early Modern English, lasting between 1500 and 1700, was characterised by the Great Vowel Shift (1350–1700), inflectional simplification, and linguistic standardisation. The Great Vowel Shift affected the stressed long vowels of Middle English. It was a chain shift, meaning that each shift triggered a subsequent shift in the vowel system. Mid and open vowels were raised, and close vowels were broken into diphthongs. 
For example, the word bite was originally pronounced as the word beet is today, and the second vowel in the word about was pronounced as the word boot is today. The Great Vowel Shift explains many irregularities in spelling since English retains many spellings from Middle English, and it also explains why English vowel letters have very different pronunciations from the same letters in other languages. English began to rise in prestige, relative to Norman French, during the reign of Henry V. Around 1430, the Court of Chancery in Westminster began using English in its official documents, and a new standard form of Middle English, known as Chancery Standard, developed from the dialects of London and the East Midlands. In 1476, William Caxton introduced the printing press to England and began publishing the first printed books in London, expanding the influence of this form of English. Literature in Early Modern English includes the works of William Shakespeare and the 1611 King James Version (KJV) of the Bible. Even after the vowel shift the language still sounded different from Modern English: for example, the consonant clusters /kn ɡn sw/ in knight, gnat, and sword were still pronounced. Many of the grammatical features that a modern reader of Shakespeare might find quaint or archaic represent the distinct characteristics of Early Modern English. Matthew 8:20 in the KJV reads: "The Foxes have holes and the birds of the ayre have nests." This exemplifies the loss of case and its effects on sentence structure (replacement with subject–verb–object word order, and the use of of instead of the non-possessive genitive), and the introduction of loanwords from French (ayre) and word replacements (bird, originally meaning 'nestling', which had replaced Old English fugol). By the late 18th century, the British Empire had spread English through its colonies and geopolitical dominance. 
Commerce, science and technology, diplomacy, art, and formal education all contributed to English becoming the first truly global language. English also facilitated worldwide international communication. English was adopted in parts of North America, parts of Africa, Oceania, and many other regions. When they obtained political independence, some of the newly independent states that had multiple indigenous languages opted to continue using English as the official language to avoid the political and other difficulties inherent in promoting any one indigenous language above the others. In the 20th century the growing economic and cultural influence of the United States and its status as a superpower following the Second World War have, along with worldwide broadcasting in English by the BBC and other broadcasters, caused the language to spread across the planet much faster. In the 21st century, English is more widely spoken and written than any language has ever been. As Modern English developed, explicit norms for standard usage were published, and spread through official media such as public education and state-sponsored publications. In 1755, Samuel Johnson published his Dictionary of the English Language, which introduced standard spellings of words and usage norms. In 1828, Noah Webster published An American Dictionary of the English Language to try to establish a norm for speaking and writing American English that was independent of the British standard. Within Britain, non-standard or lower class dialect features were increasingly stigmatised, leading to the quick spread of the prestige varieties among the middle classes. In modern English, the loss of grammatical case is almost complete (it is now found only in pronouns, such as he and him, she and her, who and whom), and subject–verb–object word order is mostly fixed. Some changes, such as the use of do-support, have become universalised. 
(Earlier English did not use the word do as a general auxiliary as Modern English does; at first it was only used in question constructions, and even then was not obligatory. Now, do-support with the verb have is becoming increasingly standardised.) The use of progressive forms in -ing appears to be spreading to new constructions, and forms such as "had been being built" are becoming more common. Regularisation of irregular forms also slowly continues (e.g. dreamed instead of dreamt), and analytical alternatives to inflectional forms are becoming more common (e.g. more polite instead of politer). British English is also undergoing change under the influence of American English, fuelled by the strong presence of American English in the media. Geographical distribution As of 2016, 400 million people spoke English as their first language, and 1.1 billion spoke it as a second language. English is the largest language by number of speakers, spoken by communities on every continent. Estimates of second language and foreign-language speakers vary greatly depending on how proficiency is defined, from 470 million to more than 1 billion. In 2003, David Crystal estimated that non-native speakers outnumbered native speakers by a ratio of three-to-one. Braj Kachru has categorised countries into the Three Circles of English model, according to how the language historically spread in each country, how it is acquired by the populace, and the range of uses it has there – with a country's classification able to change over time. "Inner-circle" countries have large communities of native English speakers; these include the United Kingdom, the United States, Australia, Canada, Ireland, and New Zealand, where the majority speaks English – and South Africa, where a significant minority speaks English. 
The countries with the most native English speakers are, in descending order, the United States (at least 231 million), the United Kingdom (60 million), Canada (19 million), Australia (at least 17 million), South Africa (4.8 million), Ireland (4.2 million), and New Zealand (3.7 million). In these countries, children of native speakers learn English from their parents, and local people who speak other languages and new immigrants learn English to communicate in their neighbourhoods and workplaces. Inner-circle countries are the base from which English spreads to other regions of the world. "Outer-circle" countries – such as the Philippines, Jamaica, India, Pakistan, Singapore, Malaysia, and Nigeria – have much smaller proportions of native English speakers, but use of English as a second language in education, government, or domestic business is significant, and its use for instruction in schools and official government operations is routine. These countries have millions of native speakers on dialect continua, which range from English-based creole languages to standard varieties of English used in inner-circle countries. They have many more speakers who acquire English as they grow up through day-to-day use and exposure to English-language broadcasting, especially if they attend schools where English is the language of instruction. Varieties of English learned by non-native speakers may be influenced, especially in their grammar, by the other languages spoken by those learners – with most including words rarely used by native speakers in inner-circle countries, as well as grammatical and phonological differences from inner-circle varieties. "Expanding-circle" countries are where English is taught as a foreign language – though the character of English as a first, second, or foreign language in a given country is often debatable, and may change over time. 
For example, in countries like the Netherlands, an overwhelming majority of the population can speak English, and it is often used in higher education and to communicate with foreigners. English is a pluricentric language, which means that no one national authority sets the standard for use of the language. Spoken English, including English used in broadcasting, generally follows national pronunciation standards that are established by custom rather than by regulation. International broadcasters are usually identifiable as coming from one country rather than another through their accents, but newsreader scripts are also composed largely in international standard written English. The norms of standard written English are maintained purely by the consensus of educated English speakers around the world, without any oversight by any government or international organisation. American listeners readily understand most British broadcasting, and British listeners readily understand most American broadcasting. Most English speakers around the world can understand radio programmes, television programmes, and films from many parts of the English-speaking world. Both standard and non-standard varieties of English can include both formal and informal styles, distinguished by word choice and syntax, and use both technical and non-technical registers. The settlement history of the English-speaking inner circle countries outside Britain helped level dialect distinctions and produce koiné forms of English in South Africa, Australia, and New Zealand. The majority of immigrants to the United States without British ancestry rapidly adopted English after arrival. Now the majority of the United States population are monolingual English speakers. Modern English is sometimes described as the first global lingua franca, or as the first world language. 
English is the world's most widely used language in newspaper publishing, book publishing, international telecommunications, scientific publishing, international trade, mass entertainment, and diplomacy. Parity with French as a language of diplomacy had been achieved by the time of the Treaty of Versailles negotiations in 1919. By the time the United Nations was founded at the end of World War II, English had become pre-eminent; it is one of six official languages of the United Nations, and is now the main worldwide language of diplomacy and international relations. Many other worldwide international organisations, including the International Olympic Committee, specify English as a working language or official language of the organisation. Many regional international organisations, such as the European Free Trade Association (EFTA), Association of Southeast Asian Nations (ASEAN), and Asia-Pacific Economic Cooperation (APEC) use English as their sole working language, despite most members not being countries with a majority of native English speakers. While the EU allows member states to designate any of the national languages as an official language of the Union, in practice English is the main working language of EU organisations. English serves as the basis for the required controlled natural languages Seaspeak and Airspeak, used as international languages of seafaring and aviation. English is the most frequently taught foreign language in the world. Most people learning English do so for practical reasons, as opposed to ideological reasons. In EU countries, English is the most widely spoken foreign language in 19 of the 25 member states where it is not an official language (that is, the countries other than Ireland and Malta). 
In a 2012 official Eurobarometer poll (conducted when the UK was still a member of the EU), 38 per cent of the EU respondents outside the countries where English is an official language said they could speak English well enough to have a conversation in that language. The next most commonly mentioned foreign language, French (which is the most widely known foreign language in the UK and Ireland), could be used in conversation by 12 per cent of respondents. The global influence of English has led to concerns about language death, and to claims of linguistic imperialism, and has provoked resistance to the spread of English; however, the number of speakers continues to increase because many people around the world think English provides them with better employment opportunities and increased quality of life. Working knowledge of English has become a requirement in a number of occupations and professions such as medicine and computing. Though it formerly had parity with French and German in scientific research, English now dominates the field. Its importance in scientific publishing is such that over 80 per cent of scientific journal articles indexed by Chemical Abstracts in 1998 were written in English, as were 90 per cent of all articles in natural science publications by 1996, and 82 per cent of articles in humanities publications by 1995. As decolonisation proceeded throughout the British Empire in the 1950s and 1960s, former colonies often did not reject English but rather continued to use it as independent countries setting their own language policies. For example, English is one of the official languages of India. Many Indians have shifted from associating the language with colonialism to associating it with economic progress. English is widely used in media and literature, with India being the third-largest publisher of English-language books in the world, after the US and UK. 
However, less than 5 per cent of the population speak English fluently, with the country's native English speakers numbering in the low hundreds of thousands. In 2004, David Crystal claimed India had the largest population of people able to speak or understand English in the world, though most scholars estimate the US remains home to a larger English-speaking population. Many English speakers in Africa have become part of an "Afro-Saxon" language community that unites Africans from different countries. Regarding its future development, it is considered most likely that English will continue to function as a koiné language, with a standard form that unifies speakers around the world. Phonology English phonology and phonetics differ from one dialect to another, usually without interfering with mutual communication. Phonological variation affects the inventory of phonemes (speech sounds that distinguish meaning), and phonetic variation consists in differences in pronunciation of the phonemes. This overview mainly describes Received Pronunciation (RP) and General American (GA), the standard varieties of the United Kingdom and the United States respectively. Most English dialects share the same 24 consonant phonemes (or 26, if marginal /x/ and glottal stop /ʔ/ are included). The consonant inventory shown below is valid for California English, and for RP. For pairs of obstruents (stops, affricates, and fricatives) such as /p b/, /tʃ dʒ/, and /s z/, the first is fortis (strong) and the second is lenis (weak). Fortis obstruents, such as /p tʃ s/ are pronounced with more muscular tension and breath force than lenis consonants, such as /b dʒ z/, and are always voiceless. Lenis consonants are partly voiced at the beginning and end of utterances, and fully voiced between vowels. 
Fortis stops such as /p/ have additional articulatory or acoustic features in most dialects: they are aspirated [pʰ] when they occur alone at the beginning of a stressed syllable, often unaspirated in other cases, and often unreleased [p̚] or pre-glottalised [ʔp] at the end of a syllable. In a single-syllable word, a vowel before a fortis stop is shortened: e.g. nip has a noticeably shorter vowel (phonetically, not phonemically) than nib [nɪˑb̥] (see below). In RP, the lateral approximant /l/ has two main allophones (pronunciation variants): the clear or plain [l], as in light, and the dark or velarised [ɫ], as in full. GA has dark l in most cases. All sonorants (liquids /l, r/ and nasals /m, n, ŋ/) devoice when following a voiceless obstruent, and they are syllabic when following a consonant at the end of a word. The pronunciation of vowels varies a great deal between dialects and is one of the most detectable aspects of a speaker's accent. The accompanying table below lists the vowel phonemes in RP and GA, with example words from lexical sets. The vowels are represented with symbols from the International Phonetic Alphabet; those given for RP are standard in British dictionaries and other publications. In RP, vowel length is phonemic; long vowels are marked with a triangular colon ⟨ː⟩ in the table above, such as the vowel of need [niːd] as opposed to bid [bɪd]. In GA, vowel length is non-distinctive. In both RP and GA, vowels are phonetically shortened before fortis consonants in the same syllable, like /t tʃ f/, but not before lenis consonants like /d dʒ v/ or in open syllables: thus, the vowels of rich [rɪtʃ], neat [nit], and safe [seɪ̯f] are noticeably shorter than the vowels of ridge [rɪˑdʒ], need [niˑd], and save [seˑɪ̯v], and the vowel of light [laɪ̯t] is shorter than that of lie [laˑɪ̯]. Because lenis consonants are frequently voiceless at the end of a syllable, vowel length is an important cue as to whether the following consonant is lenis or fortis. 
The vowel /ə/ only occurs in unstressed syllables and is more open in quality in stem-final positions. Some dialects do not contrast /ɪ/ and /ə/ in unstressed positions, such that rabbit and abbot rhyme and Lenin and Lennon are homophonous, a dialectal feature called the weak vowel merger. GA /ɜr/ and /ər/ are realised as an r-coloured vowel [ɚ], as in further [ˈfɚðɚ] (phonemically /ˈfɜrðər/), which in RP is realised as [ˈfəːðə] (phonemically /ˈfɜːðə/). An English syllable includes a syllable nucleus consisting of a vowel sound. Syllable onset and coda (start and end) are optional. A syllable can start with up to three consonant sounds, as in sprint /sprɪnt/, and end with up to five, as in (for some dialects) angsts /aŋksts/. This gives an English syllable a structure of (CCC)V(CCCCC) – where C represents a consonant and V a vowel. The word strengths /strɛŋθs/ is thus close to the most complex syllable possible in English. The consonants that may appear together in onsets or codas are restricted, as is the order in which they may appear. Onsets can only have four types of consonant clusters: a stop and approximant, as in play; a voiceless fricative and approximant, as in fly or sly; s and a voiceless stop, as in stay; and s, a voiceless stop, and an approximant, as in string. Clusters of nasal and stop are only allowed in codas. Clusters of obstruents always agree in voicing, and clusters of sibilants and of plosives with the same point of articulation are prohibited. Several consonants have limited distributions: /h/ can only occur in syllable-initial position, and /ŋ/ only in syllable-final position. Stress plays an important role in English. Certain syllables are stressed, while others are unstressed. Stress is a combination of duration, intensity, vowel quality, and sometimes changes in pitch. 
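The (CCC)V(CCCCC) template above can be sketched as a small pattern check — a toy illustration only, since real English phonotactics also restrict which consonants may fill each slot, as the cluster rules in the paragraph above describe. The function name and the C/V encoding are assumptions of this sketch, not part of any standard library:

```python
import re

# Rough shape of an English syllable per the (CCC)V(CCCCC) template:
# up to three onset consonants, exactly one vowel nucleus, and up to
# five coda consonants. One letter per phoneme: 'C' consonant, 'V' vowel.
SYLLABLE = re.compile(r"C{0,3}VC{0,5}")

def fits_template(cv_shape: str) -> bool:
    """Return True if a C/V-encoded syllable matches the template."""
    return SYLLABLE.fullmatch(cv_shape) is not None

# sprint /sprɪnt/  -> "CCCVCC"  -> True
# angsts /aŋksts/  -> "VCCCCC"  -> True (in dialects with this coda)
# a four-consonant onset, "CCCCVC" -> False
```

Note that this over-generates: it accepts any consonant counts within the limits, whereas the article goes on to explain that onsets are limited to four specific cluster types and codas exclude, for example, mismatched voicing in obstruent clusters.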
Stressed syllables are pronounced longer and louder than unstressed syllables, and vowels in unstressed syllables are frequently reduced while vowels in stressed syllables are not. Stress in English is phonemic, and some pairs of words are distinguished by stress alone. For instance, the word contract is stressed on the first syllable (/ˈkɒntrækt/ KON-trakt) when used as a noun, but on the last syllable (/kənˈtrækt/ kən-TRAKT) for most meanings (for example, "reduce in size") when used as a verb. Here stress is connected to vowel reduction: in the noun "contract" the first syllable is stressed and has the unreduced vowel /ɒ/, but in the verb "contract" the first syllable is unstressed and its vowel is reduced to /ə/. Stress is also used to distinguish between words and phrases, so that a compound word receives a single stress unit, but the corresponding phrase has two: e.g. "a burnout" (/ˈbɜːrnaʊt/) versus "to burn out" (/ˈbɜːrn ˈaʊt/), and "a hotdog" (/ˈhɒtdɒɡ/) versus "a hot dog" (/ˈhɒt ˈdɒɡ/). In terms of rhythm, English is generally described as a stress-timed language, meaning that the amount of time between stressed syllables tends to be equal. Stressed syllables are pronounced longer, but unstressed syllables (syllables between stresses) are shortened. Vowels in unstressed syllables are shortened as well, and vowel shortening causes changes in vowel quality: vowel reduction. Varieties of English vary the most in pronunciation of vowels. The best-known national varieties used as standards for education in non-English-speaking countries are British (BrE) and American (AmE). Countries such as Canada, Australia, Ireland, New Zealand and South Africa have their own standard varieties which are less often used as standards for education internationally. English has undergone many historical sound changes, some of them affecting all varieties, and others affecting only a few. 
Most standard varieties are affected by the Great Vowel Shift, which changed the pronunciation of long vowels, but a few dialects have slightly different results. In North America, a number of chain shifts such as the Northern Cities Vowel Shift and Canadian Shift have produced very different vowel landscapes in some regional accents. Some dialects have fewer or more consonant phonemes and phones than the standard varieties. Some conservative varieties like Scottish English have a voiceless [ʍ] sound in whine that contrasts with the voiced [w] in wine, but most other dialects pronounce both words with voiced [w], a dialectal feature called the wine–whine merger. The voiceless velar fricative sound /x/ is found in Scottish English, which distinguishes loch /lɔx/ from lock /lɔk/. Accents like Cockney with "h-dropping" lack the glottal fricative /h/, and dialects with th-stopping and th-fronting like African-American Vernacular English and Estuary English do not have the dental fricatives /θ, ð/, but replace them with dental or alveolar stops /t, d/ or labiodental fricatives /f, v/. Other changes affecting the phonology of local varieties are processes such as yod-dropping, yod-coalescence, and reduction of consonant clusters. GA and RP vary in their pronunciation of historical /r/ after a vowel at the end of a syllable (in the syllable coda). GA is a rhotic dialect, meaning that it pronounces /r/ at the end of a syllable, but RP is non-rhotic, meaning that it loses /r/ in that position. English dialects are classified as rhotic or non-rhotic depending on whether they elide /r/ like RP or keep it like GA. There is complex dialectal variation in words with the open front and open back vowels /æ ɑː ɒ ɔː/. These four vowels are only distinguished in RP, Australia, New Zealand and South Africa. In GA, these vowels merge to three /æ ɑ ɔ/, and in Canadian English, they merge to two /æ ɑ/.
Grammar

As is typical of an Indo-European language, English grammar follows accusative morphosyntactic alignment. Unlike many other Indo-European languages, English has largely abandoned the inflectional case system in favour of analytic constructions. The personal pronouns retain morphological case more strongly than any other word class. English distinguishes at least seven major word classes: verbs, nouns, adjectives, adverbs, determiners (including articles), prepositions, and conjunctions. Some analyses treat pronouns as a class separate from nouns, subdivide conjunctions into subordinators and coordinators, and add the class of interjections. English also has a rich set of auxiliary verbs, such as have and do, expressing the categories of tense, aspect, and mood. Questions are marked by do-support, wh-movement (fronting of question words beginning with wh-) and word order inversion with some verbs. Some traits typical of Germanic languages persist in English, such as the distinction between irregularly inflected strong stems inflected through ablaut (i.e. changing the vowel of the stem, as in the pairs speak / spoke and foot / feet) and weak stems inflected through affixation (such as love / loved, hand / hands). Vestiges of the case and gender system are found in the pronoun system (he / him, who / whom); similarly, traces of more complex verb conjugation are seen in the inflection of the copula verb to be. English nouns are only inflected for number and possession. New nouns can be formed through derivation or compounding. They are semantically divided into proper nouns (names) and common nouns. Common nouns are in turn divided into concrete and abstract nouns, and grammatically into count nouns and mass nouns. Most count nouns are inflected for plural number through the use of the plural suffix -s, but a few nouns have irregular plural forms.
Mass nouns can only be pluralised through the use of a count noun classifier, e.g. "one loaf of bread", "two loaves of bread". Possession can be expressed either by the possessive enclitic -s (also traditionally called a genitive suffix), or by the preposition of. Historically the -s possessive has been used for animate nouns, whereas the of possessive has been reserved for inanimate nouns. Today this distinction is less clear, and many speakers use -s also with inanimates. Orthographically the possessive -s is separated from a singular noun with an apostrophe. If the noun is a plural formed with -s, the apostrophe follows the -s. Nouns can form noun phrases (NPs) where they are the syntactic head of the words that depend on them, such as determiners, quantifiers, conjunctions or adjectives. Noun phrases can be short, such as the man, composed only of a determiner and a noun. They can also include modifiers such as adjectives (e.g. red, tall, all) and specifiers such as determiners (e.g. the, that). But they can also tie together several nouns into a single long NP, using conjunctions such as and, or prepositions such as with, e.g. "the tall man with the long red trousers and his skinny wife with the spectacles" (this NP uses conjunctions, prepositions, specifiers, and modifiers). Regardless of length, an NP functions as a syntactic unit. For example, the possessive enclitic can, in cases which do not lead to ambiguity, follow the entire noun phrase, as in "The President of India's wife", where the enclitic follows India and not President. The class of determiners is used to specify the noun they precede in terms of definiteness, where the marks a definite noun and a or an an indefinite one. A definite noun is assumed by the speaker to be already known by the interlocutor, whereas an indefinite noun is not specified as being previously known.
Quantifiers, which include one, many, some and all, are used to specify the noun in terms of quantity or number. The noun must agree with the number of the determiner, e.g. one man (sg.) but all men (pl.). Determiners are the first constituents in a noun phrase. English adjectives are words such as good, big, interesting, and Canadian that most typically modify nouns, denoting characteristics of their referents (e.g. "a red car"). As modifiers, they come before the nouns they modify and after determiners. English adjectives also function as predicative complements (e.g. "the child is happy"). In Modern English, adjectives are not inflected to agree in form with the noun they modify, as they are in most other Indo-European languages. For example, in the phrases "the slender boy" and "many slender girls", the adjective slender does not change form to agree with either the number or gender of the noun. Some adjectives are inflected for degree of comparison, with the positive degree unmarked, the suffix -er marking the comparative, and -est marking the superlative: "a small boy", "the boy is smaller than the girl", "that boy is the smallest". Some adjectives have irregular suppletive comparative and superlative forms, such as good, better, and best. Other adjectives have comparatives formed by periphrastic constructions, with the adverb more marking the comparative, and most marking the superlative: happier or more happy, the happiest or most happy. There is some variation among speakers regarding which adjectives use inflected or periphrastic comparison, and some studies have shown a tendency for the periphrastic forms to become more common at the expense of the inflected form. English determiners are words such as the, each, many, some, and which, occurring most typically in noun phrases before the head nouns and any modifiers and marking the noun phrase as definite or indefinite. They often agree with the noun in number.
They do not typically inflect for degree of comparison. English pronouns conserve many traits of case and gender inflection. The personal pronouns retain a difference between subjective and objective case in most persons (I/me, he/him, she/her, we/us, they/them) as well as an animateness distinction in the third person singular (distinguishing it from the three sets of animate third person singular pronouns) and an optional gender distinction in the animate third person singular (distinguishing between feminine she/her, epicene they/them, and masculine he/him). The subjective case corresponds to the Old English nominative case, and the objective case is used in the sense both of the previous accusative case (for a patient, or direct object of a transitive verb), and of the Old English dative case (for a recipient or indirect object of a ditransitive verb). The subjective is used when the pronoun is the subject of a finite clause; otherwise the objective is used. While grammarians such as Henry Sweet and Otto Jespersen noted that the English cases did not correspond to the traditional Latin-based system, some contemporary grammars, including The Cambridge Grammar of the English Language, retain traditional nominative and accusative labels for the cases. Possessive pronouns exist in dependent and independent forms; the dependent form functions as a determiner specifying a noun (as in my chair), while the independent form can stand alone as if it were a noun (e.g. "the chair is mine"). Grammatical person in English no longer distinguishes between formal and informal pronouns of address, with the second person singular familiar pronoun thou that previously existed in the language having fallen almost entirely out of use by the 18th century. Both the second and third persons share pronouns between the plural and singular. Pronouns are used to refer to entities deictically or anaphorically.
A deictic pronoun points to some person or object by identifying it relative to the speech situation – for example, the pronoun I identifies the speaker, and the pronoun you, the addressee. Anaphoric pronouns such as that refer back to an entity already mentioned or assumed by the speaker to be known by the audience, for example in the sentence "I already told you that". The reflexive pronouns are used when the oblique argument is identical to the subject of a phrase (e.g. "he sent it to himself" or "she braced herself for impact"). Prepositional phrases (PP) are phrases composed of a preposition and one or more nouns, e.g. "with the dog", "for my friend", "to school", "in England". English prepositions have a wide range of uses – including describing movement, place, and other relations between entities, as well as functions that are syntactic in nature, like introducing complement clauses and oblique arguments of verbs. For example, in the phrase "I gave it to him", the preposition to marks the indirect object of the verb to give. Traditionally words were only considered prepositions if they governed the case of the noun they preceded, for example causing the pronouns to use the objective rather than subjective form, "with her", "to me", "for us". But some contemporary grammars no longer consider government of case to be the defining feature of the class of prepositions, rather defining prepositions as words that can function as the heads of prepositional phrases. English verbs are inflected for tense and aspect and marked for agreement with a third person present singular subject. Only the copula verb to be is still inflected for agreement with the plural and first and second person subjects. Auxiliary verbs such as have and be are paired with verbs in the infinitive, past, or progressive forms. They form complex tenses, aspects, and moods. 
Auxiliary verbs differ from other verbs in that they can be followed by the negation, and in that they can occur as the first constituent in a question sentence. Most verbs have six inflectional forms. The primary forms are a plain present, a third person singular present, and a preterite (past) form. The secondary forms are a plain form used for the infinitive, a gerund-participle and a past participle. The verb to be – which among other uses in English functions as the primary auxiliary verb indicating the imperfective aspect (e.g. "I am going"), as well as the copula – is the only verb to retain some of its original conjugation, and takes different inflectional forms depending on the subject. The first person present form is am, the third person singular form is is, and the form are is used in the second person singular and all three plurals. Its past participle is been and its gerund-participle is being. English has two primary tenses, past (preterite) and non-past. The preterite is inflected by using the preterite form of the verb, which for the regular verbs includes the suffix -ed, and for the strong verbs either the suffix -t or a change in the stem vowel. The non-past form is unmarked except in the third person singular, which takes the suffix -s. English does not have future verb forms. The future tense is expressed periphrastically with one of the auxiliary verbs will or shall. Many varieties also use a near future constructed with the phrasal verb "be going to" (going-to future). Further aspectual distinctions are shown by auxiliary verbs, primarily have and be, which show the contrast between a perfect and non-perfect past tense ("I have run" vs. "I was running"), and compound tenses such as preterite perfect ("I had been running") and present perfect ("I have been running"). For the expression of mood, English uses a number of modal auxiliaries, such as can, may, will, shall and the past tense forms could, might, would, should.
There are also subjunctive and imperative moods, both based on the plain form of the verb (i.e. without the third person singular -s), for use in subordinate clauses (e.g. subjunctive: "It is important that he run every day"; imperative Run!). An infinitive form, which uses the plain form of the verb and the preposition to, is used for verbal clauses that are syntactically subordinate to a finite verbal clause. Finite verbal clauses are those that are formed around a verb in the present or preterite form. In clauses with auxiliary verbs, the auxiliary is the finite verb and the main verb is treated as the head of a subordinate clause. For example, in "he has to go", only the auxiliary verb have is inflected for tense and the main verb to go is in the infinitive; likewise, in a complement clause such as "I saw him leave", the main verb see is in a preterite form and leave is in the infinitive. English also makes frequent use of constructions traditionally called phrasal verbs, verb phrases that are made up of a verb root and a preposition or particle that follows the verb. The phrase then functions as a single predicate. In terms of intonation the preposition is fused to the verb, but in writing it is written as a separate word. Examples of phrasal verbs are "to get up", "to ask out", "to get together", and "to put up with". The phrasal verb frequently has a highly idiomatic meaning that is more specialised and restricted than what can be simply extrapolated from the combination of verb and preposition complement (e.g. lay off meaning terminate someone's employment). Some grammarians do not consider this type of construction to form a syntactic constituent and hence refrain from using the term "phrasal verb". Instead, they consider the construction simply to be a verb with a prepositional phrase as its syntactic complement, e.g. "he woke up in the morning" and "he ran up in the mountains" are syntactically equivalent.
The function of adverbs is to modify the action or event described by the verb by providing additional information about the manner in which it occurs. Many English adverbs are derived from adjectives by appending the suffix -ly. For example, in the phrase "the woman walked quickly", the adverb quickly is derived from the adjective quick. Some commonly used adjectives have irregular adverbial forms, such as good, which has the adverbial form well.

Modern English syntax is moderately analytic. It has developed features such as modal verbs and word order as resources for conveying meaning. Auxiliary verbs mark constructions such as questions, negative polarity, the passive voice and progressive aspect. English has moved from the Germanic verb-second (V2) word order to being almost exclusively subject–verb–object (SVO). The combination of SVO order and use of auxiliary verbs often creates clusters of two or more verbs at the centre of the sentence, such as "he had been hoping to try opening it". In most sentences, English only marks grammatical relations through word order. The subject constituent precedes the verb and the object constituent follows it. The grammatical roles of each constituent are marked only by the position relative to the verb. An exception is found in sentences where one of the constituents is a pronoun, in which case it is doubly marked, both by word order and by case inflection: the subject pronoun precedes the verb and takes the subjective case form, and the object pronoun follows the verb and takes the objective case form. This double marking appears when both subject and object are third person singular masculine pronouns, as in "he saw him". Indirect objects (IO) of ditransitive verbs can be placed either as the first object in a double object construction (S V IO O), such as "I gave Jane the book" or in a prepositional phrase, such as "I gave the book to Jane".
English sentences may be composed of one or more clauses, which may in turn be composed of one or more phrases (e.g. noun phrases, verb phrases, prepositional phrases). A clause is built around a verb and includes its constituents, such as any noun or prepositional phrases. Within a sentence, there is always at least one main clause (or matrix clause) whereas other clauses are subordinate to a main clause. Subordinate clauses may function as arguments of the verb in the main clause. For example, in the phrase "I think (that) you are lying", the main clause is headed by the verb think, the subject is I, but the object of the phrase is the subordinate clause "(that) you are lying". The subordinating conjunction that shows that the clause that follows is a subordinate clause, but it is often omitted. Relative clauses are clauses that function as a modifier or specifier to some constituent in the main clause. For example, in the sentence "I saw the letter that you received today", the relative clause "that you received today" specifies the meaning of the word letter, the object of the main clause. Relative clauses can be introduced by the pronouns who, whose, whom, and which as well as by that (which can also be omitted). In contrast to many other Germanic languages there are no major differences between word order in main and subordinate clauses. English auxiliary verbs are relied upon for many functions, including the expression of tense, aspect, and mood. Auxiliary verbs form main clauses, and the main verbs function as heads of a subordinate clause of the auxiliary verb. For example, in the sentence "the dog did not find its bone", the clause "find its bone" is the complement of the negated verb did not. Subject–auxiliary inversion is used in many constructions, including focus, negation, and interrogative constructions.
The verb do can be used as an auxiliary even in simple declarative sentences, where it usually serves to add emphasis, as in "I did shut the fridge." However, in the negated and inverted clauses referred to above, it is used because the rules of English syntax permit these constructions only when an auxiliary is present. Modern English does not allow the addition of the negating adverb not to an ordinary finite lexical verb, as in *"I know not" – it can only be added to an auxiliary (or copular) verb, hence if there is no other auxiliary present when negation is required, the auxiliary do is used, to produce a form like "I do not (don't) know." The same applies in clauses requiring inversion, including most questions – inversion must involve the subject and an auxiliary verb, so it is not possible to say *"Know you him?"; grammatical rules require "Do you know him?" Negation is done with the adverb not, which precedes the main verb and follows an auxiliary verb. A contracted form of not -n't can be used as an enclitic attaching to auxiliary verbs and to the copula verb to be. Just as with questions, many negative constructions require the negation to occur with do-support, thus in Modern English "I don't know him" is the correct answer to the question "Do you know him?", but not *"I know him not", although this construction may be found in older English. Passive constructions also use auxiliary verbs. A passive construction rephrases an active construction in such a way that the object of the active phrase becomes the subject of the passive phrase, and the subject of the active phrase is either omitted or demoted to a role as an oblique argument introduced in a prepositional phrase. They are formed by using the past participle either with the auxiliary verb to be or to get, although not all varieties of English allow the use of passives with get. 
For example, putting the sentence "she sees him" into the passive becomes "he is seen (by her)", or "he gets seen (by her)". Both yes/no questions and wh-questions in English are mostly formed using subject–auxiliary inversion ("Am I going tomorrow?", "Where can we eat?"), which may require do-support ("Do you like her?", "Where did he go?"). In most cases, interrogative words (or wh-words) – which include who, what, when, where, why, and how – appear in a fronted position. For example, in the question "What did you see?", the word what appears as the first constituent despite being the grammatical object of the sentence. When the wh-word is the subject or forms part of the subject, no inversion occurs (e.g. "Who saw the cat?"). Prepositional phrases can also be fronted when they are the question's theme (e.g. "To whose house did you go last night?"). The personal interrogative pronoun who is the only interrogative pronoun to still show inflection for case, with the variant whom serving as the objective case form, although this form may be going out of use in many contexts. While English is a subject-prominent language, at the discourse level it tends to use a topic–comment structure, where the known information (topic) precedes the new information (comment). Because of the strict SVO syntax, the topic of a sentence generally has to be the grammatical subject of the sentence. In cases where the topic is not the grammatical subject of the sentence, it is often promoted to subject position through syntactic means. One way of doing this is through a passive construction, "the girl was stung by the bee". Another way is through a cleft sentence where the main clause is demoted to be a complement clause of a copula sentence with a dummy subject such as it or there, e.g. "it was the girl that the bee stung", "there was a girl who was stung by a bee". Dummy subjects are also used in constructions where there is no grammatical subject such as with impersonal verbs (e.g.
"it is raining") or in existential clauses ("there are many cars on the street"). Through the use of these complex sentence constructions with informationally vacuous subjects, English is able to maintain both a topic–comment sentence structure and a SVO syntax. Focus constructions emphasise a particular piece of new or salient information within a sentence, generally through allocating the main sentence level stress on the focal constituent. For example, "the girl was stung by a bee" (emphasising it was a bee and not, for example, a wasp that stung her), or "the girl was stung by a bee" (contrasting with another possibility, for example that it was the boy). Topic and focus can also be established through syntactic dislocation, either preposing or postposing the item to be focused on relative to the main clause. For example, "That girl over there, she was stung by a bee", emphasises the girl by preposition, but a similar effect could be achieved by postposition, "she was stung by a bee, that girl over there", where reference to the girl is established as an afterthought. Cohesion between sentences is achieved through the use of deictic pronouns as anaphora (e.g. "that is exactly what I mean" where that refers to some fact known to both interlocutors, or then used to locate the time of a narrated event relative to the time of a previously narrated event). Discourse markers such as oh, so, or well, also signal the progression of ideas between sentences and help to create cohesion. Discourse markers are often the first constituents in sentences. Discourse markers are also used for stance taking in which speakers position themselves in a specific attitude towards what is being said, for example, "no way is that true!" (the idiomatic marker "no way!" expressing disbelief), or "boy! I'm hungry" (the marker boy expressing emphasis). 
While discourse markers are particularly characteristic of informal and spoken registers of English, they are also used in written and formal registers.

Vocabulary

The English lexicon consists of around 170,000 words (or 220,000, if counting obsolete words), according to an estimate based on the 1989 edition of the Oxford English Dictionary. Over one-half are nouns, one-quarter are adjectives, and one-seventh are verbs. Another estimate – which includes scientific jargon, prefixed and suffixed words, loanwords of extremely limited use, technical acronyms, etc. – counts around 1 million total English words. English readily borrows vocabulary from many languages and other sources. Early studies of English vocabulary by lexicographers (scholars who study vocabulary and compile dictionaries) were impeded by a lack of comprehensive data on actual vocabulary in use from high-quality linguistic corpora (collections of actual written texts and spoken passages). Many statements published before the end of the 20th century about the growth of English vocabulary over time, the dates of first use of various words in English, and the sources of English vocabulary will have to be corrected as new computerised analyses of linguistic corpus data become available. English forms new words from existing words or roots in its vocabulary through a variety of processes. One of the most productive processes in English is conversion, the use of a word in a different grammatical role, for example using a noun as a verb or a verb as a noun. Another productive word-formation process is nominal compounding, producing compound words such as babysitter or ice cream or homesick. Formation of new words, called neologisms, based on Greek or Latin roots (for example television or optometry) is a highly productive process in modern European languages like English, so much so that it is often difficult to determine in which language a neologism originated.
For this reason, American lexicographer Philip Gove attributed many such words to the "international scientific vocabulary" (ISV) when compiling Webster's Third New International Dictionary (1961). Another active word-formation process in English is that of acronyms, which result from pronouncing abbreviations of longer phrases as single words, e.g. NATO, laser, scuba. Throughout its history, English has been a particularly frequent borrower of loanwords from other languages. West Germanic words in use since the Anglo-Saxon period still comprise most of the language's core vocabulary, as well as most of its most frequently used words. Many sentences can be constructed without loanwords, but not without core Anglo-Saxon vocabulary. English has formal and informal speech registers; informal registers, including child-directed speech, tend to be made up predominantly of Anglo-Saxon vocabulary, while Latinate vocabulary appears more frequently in legal, scientific, and academic writing. Prolonged and intense contact with French has resulted in English having a very high proportion of Latinate words – with French loanwords borrowed during different stages of the language's history comprising 28 per cent of the English lexicon. In all periods of its history, English has also borrowed words from Latin directly, representing another 28 per cent of the lexicon. In turn, many of these words had originally entered Latin from Greek. Greek and Latin stems remain highly productive sources for new literary, technical, and scientific vocabulary in English. Loanwords from Old Norse primarily entered English between the 8th and 11th centuries, during the Norse colonisation of eastern and northern England, and typically displaced an Anglo-Saxon equivalent. Many represent core vocabulary – including give, get, sky, skirt, egg, and cake. English has had a strong influence on the vocabulary of other languages. 
The influence of English comes from such factors as opinion leaders in other countries knowing the English language, the role of English as a world lingua franca, and the large number of books and films that are translated from English into other languages. That pervasive use of English has led many to conclude that English is an especially suitable language for expressing new ideas or describing new technologies. Among varieties of English, it is especially American English that influences other languages. Some languages, such as Chinese, write words borrowed from English mostly as calques, while others, such as Japanese, readily take in English loanwords written in sound-indicating script. Dubbed films and television programmes are an especially fruitful source of English influence on languages in Europe.

Orthography

Since the 9th century, English has been written using the English alphabet, which uses the Latin script. Anglo-Saxon runes were previously used to write Old English, but only in short inscriptions; the overwhelming majority of attested writings in Old English are in the Old English Latin alphabet. English orthography is multi-layered and complex, with elements of French, Latin, and Greek spelling on top of the native Germanic system. Further complications have arisen through sound changes with which the orthography has not kept pace. Compared to European languages for which official organisations have promoted spelling reforms, English has spelling that is a less consistent indicator of pronunciation, and standard spellings of words that are more difficult to guess from knowing how a word is pronounced. There are also systematic spelling differences between British and American English. These situations have prompted proposals for spelling reform in English.
Although letters and speech sounds do not have a one-to-one correspondence in standard English spelling, spelling rules that take into account syllable structure, phonetic changes in derived words, and word accent are reliable for most English words. Moreover, standard English spelling shows etymological relationships between related words that would be obscured by a closer correspondence between pronunciation and spelling – for example, the words photograph, photography, and photographic, or the words electricity and electrical. While few scholars agree with Chomsky and Halle (1968) that conventional English orthography is "near-optimal", there is a rationale for current English spelling patterns. The standard orthography of English is the most widely used writing system in the world. Standard English spelling is based on a graphomorphemic segmentation of words into written clues of what meaningful units make up each word. Readers of English can generally rely on the correspondence between spelling and pronunciation to be fairly regular for letters or digraphs used to spell consonant sounds. The letters b, d, f, h, j, k, l, m, n, p, r, s, t, v, w, y, z represent, respectively, the phonemes /b, d, f, h, dʒ, k, l, m, n, p, r, s, t, v, w, j, z/. The letters c and g normally represent /k/ and /ɡ/, but there is also a soft c pronounced /s/, and a soft g pronounced /dʒ/. The differences in the pronunciations of the letters c and g are often signalled by the following letters in standard English spelling. Digraphs used to represent phonemes and phoneme sequences include ch for /tʃ/, sh for /ʃ/, th for /θ/ or /ð/, ng for /ŋ/, qu for /kw/, and ph for /f/ in Greek-derived words. The single letter x is generally pronounced as /z/ in word-initial position and as /ks/ otherwise. 
There are exceptions to these generalisations, often the result of loanwords being spelled according to the spelling patterns of their languages of origin or residues of proposals by scholars in the early period of Modern English to follow the spelling patterns of Latin for English words of Germanic origin. For the vowel sounds of the English language, however, correspondences between spelling and pronunciation are more irregular. There are many more vowel phonemes in English than there are single vowel letters (a, e, i, o, u, y, and very rarely w). As a result, some "long vowels" are often indicated by combinations of letters (like the oa in boat, the ow in how, and the ay in stay), or the historically based silent e (as in note and cake). The consequence of this complex orthographic history is that learning to read and write can be challenging in English. It can take longer for school pupils to become independently fluent readers of English than of many other languages, including Italian, Spanish, and German. Nonetheless, there is an advantage for learners of English reading in learning the specific sound-symbol regularities that occur in the standard English spellings of commonly used words. Such instruction greatly reduces the risk of children experiencing reading difficulties in English. Making primary school teachers more aware of the primacy of morpheme representation in English may help learners learn more efficiently to read and write English. English writing also includes a system of punctuation marks that is similar to those used in most alphabetic languages around the world. The purpose of punctuation is to mark meaningful grammatical relationships in sentences to aid readers in understanding a text and to indicate features important for reading a text aloud. 
Dialects, accents, and varieties Dialectologists identify many English dialects, which usually refer to regional varieties that differ from each other in terms of patterns of grammar, vocabulary, and pronunciation. The pronunciation of particular areas distinguishes dialects as separate regional accents. The major native dialects of English are often divided by linguists into the two extremely general categories of British English (BrE) and North American English (NAE). The fact that English has been spoken in England for 1,500 years explains why England has a great wealth of regional dialects. Within the United Kingdom, Received Pronunciation (RP), an educated accent associated originally with South East England, has been traditionally used as a broadcast standard and is considered the most prestigious of British accents. The spread of RP (also known as BBC English) through the media has caused many traditional dialects of rural England to recede, as youths adopt the traits of the prestige variety instead of traits from local dialects. At the time of the 1950–61 Survey of English Dialects, grammar and vocabulary differed across the country, but a process of lexical attrition has led most of this variation to disappear. Nonetheless, this attrition has mostly affected dialectal variation in grammar and vocabulary. Only 3% of the English population actually speak RP, the remainder speaking in regional accents and dialects with varying degrees of RP influence. There is also variability within RP, particularly along class lines between Upper and Middle-class RP speakers and between native RP speakers and speakers who adopt RP later in life. Within Britain, there is also considerable variation along lines of social class; some traits, though exceedingly common, are nonetheless considered "non-standard" and associated with lower-class speakers and identities. 
An example of this is h-dropping, which was historically a feature of lower-class London English, particularly Cockney, and can now be heard in the local accents of most parts of England. However, it remains largely absent in broadcasting and among the upper crust of British society. English in England can be divided into four major dialect regions: South East English, South West English (also known as West Country English), Midlands English and Northern English. Within each of these regions, several local dialects exist: within the Northern region, there is a division between the Yorkshire dialects, the Geordie dialect (spoken around Newcastle, in Northumbria) and the Lancashire dialects, which include the urban subdialects of Manchester (Mancunian) and Liverpool (Scouse). Having been the centre of Danish occupation during the Viking invasions of England, Northern English dialects, particularly the Yorkshire dialect, retain Norse features not found in other English varieties. In the West Midlands, dialects such as Black Country (Yam Yam), and to a lesser extent Birmingham (Brummie), preserve archaic features from Early Modern and Middle English, retaining Germanic elements such as specific grammatical structures and vocabulary. Since the 15th century, South East England varieties have centred on London, which has been the centre from which dialectal innovations have spread to other dialects. In London, the Cockney dialect was traditionally used by the lower classes, and it was long a socially stigmatised variety. The spread of Cockney features across the South East led the media to talk of Estuary English as a new dialect, but the notion was criticised by many linguists on the grounds that London had been influencing neighbouring regions throughout history. 
Traits that have spread from London in recent decades include the use of intrusive R (drawing is pronounced "drawring" /ˈdrɔːrɪŋ/), t-glottalisation (Potter is pronounced with a glottal stop as Po'er /ˈpɒʔə/) and th-fronting, or the pronunciation of th- as /f/ (thanks pronounced "fanks") or /v/ (bother pronounced "bover"). Scots is today considered a separate language from English, but it has its origins in early Northern Middle English and developed and changed during its history with influence from other sources, particularly Scottish Gaelic and Old Norse. Scots itself has a number of regional dialects. In addition to Scots, Scottish English comprises the varieties of Standard English spoken in Scotland; most varieties are Northern English accents, with some influence from Scots. In Ireland, various forms of English have been spoken following the Norman invasion of the island during the 12th century. In County Wexford and in the area surrounding Dublin, two extinct dialects known as Forth and Bargy and Fingallian developed as offshoots from Early Middle English and were spoken until the 19th century. Modern Irish English, however, has its roots in English colonisation in the 17th century. Today Irish English is divided into Ulster English, the Northern Ireland dialect with strong influence from Scots, and various dialects of the Republic of Ireland. Like Scottish and most North American accents, almost all Irish accents preserve the rhoticity which has been lost in the dialects influenced by RP. Due to the relatively strong degree of mixing, mutual accommodation, and koinéisation that occurred during the colonial period, North American English has traditionally been perceived as relatively homogeneous, at least in comparison with British dialects. However, modern scholars have strongly opposed this notion, arguing that North American English shows a great deal of phonetic, lexical, and geographic variability. 
This becomes all the more apparent considering social, ethnolinguistic, and regional varieties such as African-American English, Chicano English, Cajun English, or Newfoundland English. American accent variation is increasing at the regional level and decreasing at the very local level, though most Americans still speak within a phonological continuum of similar accents, known collectively as General American English (GA), with differences hardly noticed even among Americans themselves, including Midland and Western American English. Canadian English varieties, excepting those from Atlantic Canada and possibly Quebec, are generally considered to belong to the GA continuum, although they often show raising of the vowels /aɪ/ and /aʊ/ before voiceless consonants and have distinct norms for writing and pronunciation as well. Atlantic Canadian English, notably distinct from Standard Canadian English, comprises Maritime English and Newfoundland English. It was influenced mostly by British and Irish English, as well as Irish, Scottish Gaelic, and Acadian French. In most American and Canadian English dialects, rhoticity (or r-fullness) is dominant, with non-rhoticity (or r-dropping) being associated with lower prestige and social class, especially since the end of World War II. This contrasts with the situation in England, where non-rhoticity has become the standard. Varieties beyond GA which have developed distinct sound systems include the Southern American English, New York City English, Eastern New England English, and African-American Vernacular English (AAVE) groups – all of which are historically non-rhotic, save a few varieties of Southern American. In Southern American English, the most populous grouping outside GA, rhoticity now strongly prevails, replacing the region's historical non-rhotic prestige. 
Southern accents are colloquially described as a "drawl" or "twang", being recognised most readily by the Southern Vowel Shift initiated by glide-deleting in the /aɪ/ vowel (e.g. pronouncing spy almost like spa), the "Southern breaking" of several front pure vowels into a gliding vowel or even two syllables (e.g. pronouncing the word press almost like "pray-us"), the pin–pen merger, and other distinctive phonological, grammatical, and lexical features, many of which are actually recent developments of the 19th century or later. Spoken primarily by working- and middle-class African Americans, African-American Vernacular English (AAVE) is largely non-rhotic, and likely originated among enslaved Africans and African Americans influenced primarily by the non-standard older Southern dialects. A minority of linguists, contrarily, propose that AAVE mostly traces back to African languages spoken by the slaves who had to develop a pidgin or English-based creole to communicate with slaves of other ethnic and linguistic origins. AAVE's important commonalities with Southern accents suggest it developed into a highly coherent and homogeneous variety in the 19th or early 20th century. AAVE is commonly stigmatised in North America as a form of "broken" or "uneducated" English, as are white Southern accents, but linguists today recognise both as fully developed varieties of English with their own norms shared by large speech communities. Since 1788, English has been spoken in Oceania, and Australian English has developed as the first language of the vast majority of the inhabitants of the Australian continent, its standard accent being General Australian. The English of neighbouring New Zealand has to a lesser degree become an influential standard variety of the language. 
Australian and New Zealand English are each other's closest relatives with few differentiating characteristics, followed by South African English and the English of South East England, all of which have similarly non-rhotic accents, aside from some accents in the South Island of New Zealand. Australian and New Zealand English stand out for their innovative vowels: many short vowels are fronted or raised, whereas many long vowels have diphthongised. Australian English also has a contrast between long and short vowels, not found in most other varieties. Australian English grammar aligns closely with British and American English; like American English, collective plural subjects take on a singular verb, e.g. "the government is" (rather than are). New Zealand English uses front vowels that are often even higher than in Australian English. English is an official language of the Philippines. Its use is ubiquitous in the country, and appears in areas including on street signs, marquees, and government documents, and in courtrooms, public media, the entertainment industry, and the business sector. It became an important and widely spoken language in the country during the period of American rule between 1898 and 1946. Taglish is a prominent form of code-switching between Tagalog and English. English is spoken widely in southern Africa and is an official or co-official language in several of the region's countries. In South Africa, English has been spoken since 1820, co-existing with Afrikaans and various African languages such as the Khoe and Bantu languages. Today, about nine per cent of the South African population speaks South African English (SAE) as a first language. SAE is a non-rhotic variety that tends to follow RP as a norm. It is one of the few non-rhotic English varieties that lack intrusive R. The second-language varieties of South Africa differ based on the native languages of their speakers. Most phonological differences from RP are in the vowels. 
Consonant differences include the tendency to pronounce /p, t, t͡ʃ, k/ without aspiration (e.g. pin pronounced [pɪn] rather than as [pʰɪn] as in most other varieties), while r is often pronounced as a flap [ɾ] instead of as the more common fricative. Nigerian English is a variety of English spoken in Nigeria; over 150 million Nigerians speak some form of the language. Though traditionally based on British English, increasing United States influence during the latter 20th century has resulted in American English vocabulary entering Nigerian English. Additionally, some new words and collocations have emerged from the variety out of a need to express concepts specific to the culture of the nation (e.g. senior wife). Varieties of English are spoken throughout the former British colonial possessions in the Caribbean, including Jamaica, the Leeward and Windward Islands, Trinidad and Tobago, Barbados, the Cayman Islands, and Belize. Each of these areas is home both to a local variety of English and a local English-based creole, combining English and African languages. The most prominent varieties are Jamaican English and Jamaican Creole. In Central America, English-based creoles are spoken on the Caribbean coasts of Nicaragua and Panama. Residents are often fluent in both the local English variety and the local creole languages, and frequently code-switch between them. The relationship between different varieties can be conceptualised as a continuum, in which more creole-like or RP-like forms function as more formal and informal registers of the language respectively. Most Caribbean varieties are based on British English and consequently, most are non-rhotic, except for formal styles of Jamaican English which are often rhotic. Jamaican English differs from RP in its vowel inventory, which has a distinction between long and short vowels rather than tense and lax vowels as in Standard English. 
The diphthongs /ei/ and /ou/ are monophthongs [eː] and [oː] or even the reverse diphthongs [ie] and [uo] (e.g. bay and boat pronounced [bʲeː] and [bʷoːt]). Often word-final consonant clusters are simplified so that "child" is pronounced [t͡ʃail] and "wind" [win]. Indian English historically tends towards RP as an ideal, with the proximity of speakers to RP generally reflective of class distinctions. Indian English accents are marked by the pronunciation of phonemes such as /t/ and /d/ (often pronounced with retroflex articulation as [ʈ] and [ɖ]) and the replacement of /θ/ and /ð/ with dentals [t̪] and [d̪]. Sometimes Indian English speakers may also use spelling-based pronunciations where the silent ⟨h⟩ found in words such as ghost is pronounced as an Indian voiced aspirated stop [ɡʱ]. Non-native English speakers may pronounce words differently due to having not fully mastered English pronunciation. This can happen either because they apply the speech rules of their mother tongue to English ("interference") or through implementing strategies similar to those used in first language acquisition. They may create novel pronunciations for English sounds not found in their first language.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Common_Germanic] | [TOKENS: 11785]
Contents Proto-Germanic language Proto-Germanic (abbreviated PGmc; also called Common Germanic) is the reconstructed common ancestor of the Germanic languages. A defining feature of Proto-Germanic is the completion of the process described by Grimm's law, a set of sound changes that occurred between its status as a dialect of Proto-Indo-European and its gradual divergence into a separate language. The end of the Common Germanic period is reached with the beginning of the Migration Period in the fourth century AD. The Proto-Germanic language is not directly attested and has been reconstructed using the comparative method with other more archaic and earlier attested Indo-European languages,[note 1] extremely early Germanic loanwords in Baltic and Finnish languages (for example, Finnish kuningas 'king'), early runic inscriptions (specifically the Vimose inscriptions in Denmark, dated to the 2nd century CE), and Roman Empire era transcriptions of individual words (notably in Tacitus's Germania, c. AD 90[note 2]). The non-runic Negau helmet inscription, dated to the 2nd century BCE, has also been argued by some to represent the earliest attestation of Grimm's law (also known as the First Germanic Sound Shift). Archaeology and early historiography Proto-Germanic developed out of pre-Proto-Germanic during the Pre-Roman Iron Age of Northern Europe. 
According to the Germanic substrate hypothesis, it may have been influenced by non-Indo-European cultures, such as the Funnelbeaker culture, but the sound change in the Germanic languages known as Grimm's law points to a non-substratic development away from other branches of Indo-European.[clarification needed][note 3] Proto-Germanic itself was likely spoken after c. 500 BC, and Proto-Norse, from the second century AD and later, is still quite close to reconstructed Proto-Germanic, but other common innovations separating Germanic from Proto-Indo-European suggest a common history of pre-Proto-Germanic speakers throughout the Nordic Bronze Age. The Proto-Germanic language developed in southern Scandinavia (Denmark, south Sweden and southern Norway) and the northernmost part of Germany in Schleswig-Holstein and northern Lower Saxony, the Urheimat (original home) of the Germanic tribes. It is possible that Indo-European speakers first arrived in southern Scandinavia with the Corded Ware culture in the mid-3rd millennium BC, developing into the Nordic Bronze Age cultures by the early second millennium BC.[citation needed] According to Mallory, Germanicists "generally agree" that the Urheimat ('original homeland') of the Proto-Germanic language, the ancestral idiom of all attested Germanic dialects, was primarily situated in an area corresponding to the extent of the Jastorf culture.[note 4] Early Germanic expansion in the Pre-Roman Iron Age (fifth to first centuries BC) placed Proto-Germanic speakers in contact with the Continental Celtic La Tène horizon. A number of Celtic loanwords in Proto-Germanic have been identified. By the first century AD, Germanic expansion reached the Danube and the Upper Rhine in the south and the Germanic peoples first entered the historical record. 
At about the same time, extending east of the Vistula (Oksywie culture, Przeworsk culture), Germanic speakers came into contact with early Slavic cultures, as reflected in early Germanic loans in Proto-Slavic. By the third century, Late Proto-Germanic speakers had expanded over a significant distance, from the Rhine to the Dnieper, spanning about 1,200 km (700 mi). The period marks the breakup of Late Proto-Germanic and the beginning of the (historiographically recorded) Germanic migrations. The earliest attested stage of the Germanic languages is known as Proto-Norse, variably dated to the 2nd century AD, around 300 AD or the first century AD in runic inscriptions (such as the Tune Runestone). The first coherent text recorded in a Germanic language is the Gothic Bible, written in the later fourth century in the East Germanic variety of the Thervingi Gothic Christians, who had escaped persecution by moving from Scythia to Moesia in 348. Early West Germanic text is available from the fifth century, beginning with the Frankish Bergakker runic inscription. Evolution The evolution of Proto-Germanic from its ancestral forms, beginning with its ancestor Proto-Indo-European, began with the development of a separate common way of speech among some geographically nearby speakers of a prior language and ended with the dispersion of the proto-language speakers into distinct populations with mostly independent speech habits. Between the two points, many sound changes occurred. Phylogeny as applied to historical linguistics involves the evolutionary descent of languages. The phylogeny problem is the question of what specific tree, in the tree model of language evolution, best explains the paths of descent of all the members of a language family from a common language, or proto-language (at the root of the tree) to the attested languages (at the leaves of the tree). 
The Germanic languages form a tree with Proto-Germanic at its root that is a branch of the Indo-European tree, which in turn has Proto-Indo-European at its root. Borrowing of lexical items from contact languages makes the relative position of the Germanic branch within Indo-European less clear than the positions of the other branches of Indo-European. In the course of the development of historical linguistics, various solutions have been proposed, none certain and all debatable. In the evolutionary history of a language family, philologists consider a genetic "tree model" appropriate only if communities do not remain in effective contact as their languages diverge. Early Indo-European had limited contact between distinct lineages, and, uniquely, the Germanic subfamily exhibited a less treelike behaviour, as some of its characteristics were acquired from neighbours early in its evolution rather than from its direct ancestors. The internal diversification of West Germanic developed in an especially non-treelike manner. Proto-Germanic is generally agreed to have begun about 500 BC. Its hypothetical ancestor between the end of Proto-Indo-European and 500 BC is termed Pre-Proto-Germanic. Whether it is to be included under a wider meaning of Proto-Germanic is a matter of usage. Winfred P. Lehmann regarded Jacob Grimm's "First Germanic Sound Shift", or Grimm's law, and Verner's law,[note 5] (which pertained mainly to consonants and were considered for many decades to have generated Proto-Germanic) as pre-Proto-Germanic and held that the "upper boundary" (that is, the earlier boundary) was the fixing of the accent, or stress, on the root syllable of a word, typically on the first syllable. Proto-Indo-European had featured a moveable pitch-accent consisting of "an alternation of high and low tones" as well as stress of position determined by a set of rules based on the lengths of a word's syllables. The fixation of the stress led to sound changes in unstressed syllables. 
For Lehmann, the "lower boundary" was the dropping of final -a or -e in unstressed syllables; for example, post-PIE *wóyd-e > Gothic wait, 'knows'. Elmer H. Antonsen agreed with Lehmann about the upper boundary but later found runic evidence that the -a was not dropped: ékwakraz ... wraita, 'I, Wakraz, … wrote (this)'. He says: "We must therefore search for a new lower boundary for Proto-Germanic." Antonsen's own scheme divides Proto-Germanic into an early stage and a late stage. The early stage includes the stress fixation and resulting "spontaneous vowel-shifts" while the late stage is defined by ten complex rules governing changes of both vowels and consonants. The following changes are known or presumed to have occurred in the history of Proto-Germanic in the wider sense from the end of Proto-Indo-European up to the point that Proto-Germanic began to break into mutually unintelligible dialects. The changes are listed roughly in chronological order, with changes that operate on the outcome of earlier ones appearing later in the list. The stages distinguished and the changes associated with each stage rely heavily on Ringe, who in turn summarizes standard concepts and terminology. This stage began with the separation of a distinct speech, perhaps while it was still forming part of the Proto-Indo-European dialect continuum. It contained many innovations that were shared with other Indo-European branches to various degrees, probably through areal contacts, and mutual intelligibility with other dialects would remain for some time. It was nevertheless on its own path, whether dialect or language. This stage began its evolution as a dialect of Proto-Indo-European that had lost its laryngeals and had five long and six short vowels as well as one or two overlong vowels. The consonant system was still that of PIE minus palatovelars and laryngeals, but the loss of syllabic resonants already made the language markedly different from PIE proper. 
Mutual intelligibility might have still existed with other descendants of PIE, but it would have been strained, and the period marked the definitive break of Germanic from the other Indo-European languages and the beginning of Germanic proper, containing most of the sound changes that are now held to define this branch distinctively. This stage contained various consonant and vowel shifts, the loss of the contrastive accent inherited from PIE for a uniform accent on the first syllable of the word root, and the beginnings of the reduction of the resulting unstressed syllables. By this stage, Germanic had emerged as a distinctive branch and had undergone many of the sound changes that would make its later descendants recognisable as Germanic languages. It had shifted its consonant inventory from a system that was rich in plosives to one containing primarily fricatives, had lost the PIE mobile pitch accent for a predictable stress accent, and had merged two of its vowels. The stress accent had already begun to cause the erosion of unstressed syllables, which would continue in its descendants. The final stage of the language included the remaining development until the breakup into dialects and, most notably, featured the development of nasal vowels and the start of umlaut, another characteristic Germanic feature. Loans into Proto-Germanic from other (known) languages or from Proto-Germanic into other languages can be dated relative to each other by which Germanic sound laws have acted on them. Since the dates of borrowings and sound laws are not precisely known, it is not possible to use loans to establish absolute or calendar chronology. Most loans from Celtic appear to have been made before or during the Germanic Sound Shift. For instance, *rīks 'ruler' was borrowed from Celtic *rīxs 'king' (stem *rīg-), with g → k. It is clearly not native because PIE **ē → *ī is typical not of Germanic but of Celtic languages. 
Another is *walhaz 'foreigner; Celt' from the Celtic tribal name Volcae with k → h and o → a. Other likely Celtic loans include *ambahtaz 'servant', *brunjǭ 'mailshirt', *gīslaz 'hostage', *īsarną 'iron', *lēkijaz 'healer', *laudą 'lead', *Rīnaz 'Rhine', and *tūnaz, tūną 'fortified enclosure'.[note 6] These loans would likely have been borrowed during the Celtic Hallstatt and early La Tène cultures when the Celts dominated central Europe, although the period spanned several centuries. From East Iranian came *hanapiz 'hemp' (compare Khotanese kaṃhā, Ossetian gæn(æ) 'flax'), *humalaz, humalǭ 'hops' (compare Ossetian xumællæg), *keppǭ ~ skēpą 'sheep' (compare Persian čapiš 'yearling kid'), *kurtilaz 'tunic' (cf. Osset kʷəræt 'shirt'), *kutą 'cottage' (compare Persian kad 'house'), *paidō 'cloak', *paþaz 'path' (compare Avestan pantā, gen. pathō), and *wurstwą 'work' (compare Avestan vərəštuua).[note 7] The words could have been transmitted directly by the Scythians from the Ukraine plain, groups of whom entered Central Europe via the Danube and created the Vekerzug Culture in the Carpathian Basin (sixth to fifth centuries BC), or by later contact with Sarmatians, who followed the same route. Unsure is *marhaz 'horse', which was either borrowed directly from Scytho-Sarmatian or through Celtic mediation. Numerous loanwords believed to have been borrowed from Proto-Germanic are known in the non-Germanic languages spoken in areas adjacent to the Germanic languages. The heaviest influence has been on the Finnic languages, which have received hundreds of Proto-Germanic or pre-Proto-Germanic loanwords. Well-known examples include PGmc *druhtinaz 'warlord' (compare Finnish ruhtinas), *hrengaz (later *hringaz) 'ring' (compare Finnish rengas, Estonian rõngas), *kuningaz 'king' (Finnish kuningas), *lambaz 'lamb' (Finnish lammas), *lunaz 'ransom' (Finnish lunnas). Loanwords into the Samic languages, Baltic languages and Slavic languages are also known. 
The term substrate with reference to Proto-Germanic refers to lexical items and phonological elements that do not appear to be descended from Proto-Indo-European. The substrate theory postulates that the elements came from an earlier population that stayed amongst the Indo-Europeans and was influential enough to bring over some elements of its own language. The theory of a non-Indo-European substrate was first proposed by Sigmund Feist, who estimated that about a third of all Proto-Germanic lexical items came from the substrate.[note 8] Theo Vennemann has hypothesized a Basque substrate and a Semitic superstrate in Germanic; however, his speculations, too, are generally rejected by specialists in the relevant fields. Phonology [Table: the consonantal phonemes of Proto-Germanic, ordered and classified by their reconstructed pronunciation; where two phonemes share a cell, the first is voiceless and the second voiced, and phones in parentheses are allophones rather than independent phonemes.] Grimm's law as applied to pre-Proto-Germanic is a chain shift of the original Indo-European plosives. Verner's law explains a category of exceptions to Grimm's law, where a voiced fricative appears where Grimm's law predicts a voiceless fricative. The discrepancy is conditioned by the placement of the original Indo-European word accent. p, t, and k did not undergo Grimm's law after a fricative (such as s) or after other plosives (which were shifted to fricatives by the Germanic spirant law); for example, where Latin (with the original t) has stella 'star' and octō 'eight', Middle Dutch has ster and acht (with unshifted t). 
This original t merged with the shifted t from the voiced consonant; that is, most of the instances of /t/ came from either the original /t/ or the shifted /t/. (A similar shift on the consonant inventory of Proto-Germanic later generated High German. McMahon says: Grimm's and Verner's Laws ... together form the First Germanic Consonant Shift. A second, and chronologically later Second Germanic Consonant Shift ... affected only Proto-Germanic voiceless stops ... and split Germanic into two sets of dialects, Low German in the north ... and High German further south) Verner's law is usually reconstructed as following Grimm's law in time, and states that unvoiced fricatives: /s/, /ɸ/, /θ/, /x/ are voiced when preceded by an unaccented syllable. The accent at the time of the change was the one inherited from Proto-Indo-European, which was free and could occur on any syllable. For example, PIE *bʰréh₂tēr > PGmc. *brōþēr 'brother' but PIE *meh₂tḗr > PGmc. *mōdēr 'mother'. The voicing of some /s/ according to Verner's Law produced /z/, a new phoneme. Sometime after Grimm's and Verner's law, Proto-Germanic lost its inherited contrastive accent, and all words became stressed on their root syllable. This was generally the first syllable unless a prefix was attached. The loss of the Proto-Indo-European contrastive accent got rid of the conditioning environment for the consonant alternations created by Verner's law. Without this conditioning environment, the cause of the alternation was no longer obvious to native speakers. The alternations that had started as mere phonetic variants of sounds became increasingly grammatical in nature, leading to the grammatical alternations of sounds known as grammatischer Wechsel. For a single word, the grammatical stem could display different consonants depending on its grammatical case or its tense. 
As a result of the complexity of this system, significant levelling of these sounds occurred throughout the Germanic period as well as in the later daughter languages. Already in Proto-Germanic, most alternations in nouns were leveled to have only one sound or the other consistently throughout all forms of a word, although some alternations were preserved, only to be levelled later in the daughters (but differently in each one). Alternations in noun and verb endings were also levelled, usually in favour of the voiced alternants in nouns, but a split remained in verbs where unsuffixed (strong) verbs received the voiced alternants while suffixed (weak) verbs had the voiceless alternants. Alternation between the present and past of strong verbs remained common and was not levelled in Proto-Germanic, and survives up to the present day in some Germanic languages. Some of the consonants that developed from the sound shifts are thought to have been pronounced in different ways (allophones) depending on the sounds around them. With regard to original /k/ or /kʷ/ Trask says: The resulting /x/ or /xʷ/ were reduced to /h/ and /hʷ/ in word-initial position. Many of the consonants listed in the table could appear lengthened or prolonged under some circumstances, which is inferred from their appearing in some daughter languages as doubled letters. This phenomenon is termed gemination. Kraehenmann says: Then, Proto-Germanic already had long consonants ... but they contrasted with short ones only word-medially. Moreover, they were not very frequent and occurred only intervocally almost exclusively after short vowels. The voiced phonemes /b/, /d/, /ɡ/ and /ɡʷ/ are reconstructed with the pronunciation of stops in some environments and fricatives in others. The pattern of allophony is not completely clear, but generally is similar to the patterns of voiced obstruent allophones in languages such as Spanish. 
The voiced fricatives of Verner's law, which only occurred in non-word-initial positions, merged with the fricative allophones of /b/, /d/, /ɡ/ and /ɡʷ/. Older accounts tended to suggest that the sounds were originally fricatives and later "hardened" into stops in some circumstances. However, Ringe notes that this belief was largely due to theory-internal considerations of older phonological theories, and in modern theories it is equally possible that the allophony was present from the beginning. Each of the three voiced phonemes /b/, /d/, and /ɡ/ had a slightly different pattern of allophony from the others, but in general stops occurred in "strong" positions (word-initial and in clusters) while fricatives occurred in "weak" positions (post-vocalic). More specifically: Labiovelars were affected by the following additional changes: These various changes often led to complex alternations, e.g. *sehwaną [ˈsexʷɑnɑ̃] 'to see', *sēgun [ˈsɛːɣun] 'they saw' (indicative), *sēwīn [ˈsɛːwiːn] 'they saw' (subjunctive), which were reanalysed and regularised differently in the various daughter languages. Kroonen posits a process of consonant mutation for Proto-Germanic, under the name consonant gradation. (This is distinct from the consonant mutation processes occurring in the neighboring Samic and Finnic languages, also known as consonant gradation since the 19th century.) The Proto-Germanic consonant gradation is not directly attested in any of the Germanic dialects, but may nevertheless be reconstructed on the basis of certain dialectal discrepancies in root of the n-stems and the ōn-verbs. Diachronically, the rise of consonant gradation in Germanic can be explained by Kluge's law, by which geminates arose from stops followed by a nasal in a stressed syllable. Since this sound law only operated in part of the paradigms of the n-stems and ōn-verbs, it gave rise to an alternation of geminated and non-geminated consonants in the same paradigms. 
These were largely regularized by various analogical processes in the Germanic daughter languages. Since its formulation, the validity of Kluge's Law has been contested. The development of geminate consonants has also been explained by the idea of "expressive gemination". Although this idea remains popular, it does not explain why many words containing geminated stops do not have "expressive" or "intensive" semantics. The idea has been described as "methodically unsound", because it attempts to explain the phonological phenomenon through psycholinguistic factors and other irregular behaviour instead of exploring regular sound laws. The origin of the Germanic geminate consonants remains a disputed part of historical linguistics with no clear consensus at present. The reconstruction of grading paradigms in Proto-Germanic explains root alternations such as Old English steorra 'star' < *sterran- vs. Old Frisian stera 'id.' < *steran- and Norwegian (dial.) guva 'to swing' < *gubōn- vs. Middle High German gupfen 'id.' < *guppōn- as generalizations of the original allomorphy. In the cases concerned, this would imply reconstructing an n-stem nom. *sterō, gen. *sterraz < PIE *h₂stér-ōn, *h₂ster-n-ós and an ōn-verb 3sg. *guppōþi, 3pl. *gubunanþi < *gʱubʱ-néh₂-ti, *gʱubʱ-nh₂-énti. Proto-Germanic had four short vowels, five or six long vowels, and at least one "overlong" or "trimoraic" vowel. The exact phonetic quality of the vowels is uncertain. Notes: PIE ə, a, o merged into PGmc a; PIE ā, ō merged into PGmc ō. At the time of the merger, the vowels probably were [ɑ] and [ɑː], or perhaps [ɒ] and [ɒː]. Their timbres then differentiated by raising (and perhaps rounding) the long vowel to [ɔː][citation needed]. It is known that the raising of ā to ō cannot have occurred earlier than the earliest contact between Proto-Germanic speakers and the Romans. This can be verified by the fact that Latin Rōmānī later emerges in Gothic as Rumoneis (that is, Rūmōnīs).
It is explained by Ringe that at the time of borrowing, the vowel matching closest in sound to Latin ā was a Proto-Germanic ā-like vowel (which later became ō). And since Proto-Germanic therefore lacked a mid(-high) back vowel, the closest equivalent of Latin ō was Proto-Germanic ū: Rōmānī > *Rūmānīz > *Rūmōnīz > Gothic Rumoneis. A new ā was formed following the shift from ā to ō when intervocalic /j/ was lost in -aja- sequences. It was a rare phoneme, and occurred only in a handful of words, the most notable being the verbs of the third weak class. The agent noun suffix *-ārijaz (Modern English -er in words such as baker or teacher) was likely borrowed from Latin around or shortly after this time. The following diphthongs are known to have existed in Proto-Germanic: Note the change /e/ > /i/ before /i/ or /j/ in the same or following syllable. This removed /ei/ (which became /iː/) but created /iu/ from earlier /eu/. Diphthongs in Proto-Germanic can also be analysed as sequences of a vowel plus an approximant, as was the case in Proto-Indo-European. This explains why /j/ was not lost in *niwjaz ('new'); the second element of the diphthong iu was still underlyingly a consonant and therefore the conditioning environment for the loss was not met. This is also confirmed by the fact that later in the West Germanic gemination, -wj- is geminated to -wwj- in parallel with the other consonants (except /r/). Proto-Germanic had two overlong or trimoraic long vowels ô [ɔːː] and ê [ɛːː], the latter mainly in adverbs (cf. *hwadrê 'whereto, whither'). None of the documented languages still include such vowels. Their reconstruction is due to the comparative method, particularly as a way of explaining an otherwise unpredictable two-way split of reconstructed long ō in final syllables, which unexpectedly remained long in some morphemes but shows normal shortening in others. 
Trimoraic vowels generally occurred at morpheme boundaries where a bimoraic long vowel and a short vowel in hiatus contracted, especially after the loss of an intervening laryngeal (-VHV-). One example, without a laryngeal, includes the class II weak verbs (ō-stems) where a -j- was lost between vowels, so that -ōja → ōa → ô (cf. *salbōjaną → *salbôną → Gothic salbōn 'to anoint'). However, the majority occurred in word-final syllables (inflectional endings) probably because in this position the vowel could not be resyllabified. Additionally, Germanic, like Balto-Slavic, lengthened bimoraic long vowels in absolute final position, perhaps to better conform to a word's prosodic template; e.g., PGmc *arô 'eagle' ← PIE **h₃ér-ō just as Lith akmuõ 'stone', OSl kamy ← *aḱmō̃ ← PIE **h₂éḱ-mō. Contrast: But vowels that were lengthened by laryngeals did not become overlong. Compare: Trimoraic vowels are distinguished from bimoraic vowels by their outcomes in attested Germanic languages: word-final trimoraic vowels remained long vowels while bimoraic vowels developed into short vowels. Older theories about the phenomenon claimed that long and overlong vowels were both long but differed in tone, i.e., ô and ê had a "circumflex" (rise-fall-rise) tone while ō and ē had an "acute" (rising) tone, much like the tones of modern Scandinavian languages, Baltic, and Ancient Greek, and asserted that this distinction was inherited from PIE. However, this view was abandoned since languages in general do not combine distinctive intonations on unstressed syllables with contrastive stress and vowel length. Modern theories have reinterpreted overlong vowels as having superheavy syllable weight (three moras) and therefore greater length than ordinary long vowels. By the end of the Proto-Germanic period, word-final long vowels were shortened to short vowels. 
Following that, overlong vowels were shortened to regular long vowels in all positions, merging with originally long vowels except word-finally (because of the earlier shortening), so that they remained distinct in that position. This was a late dialectal development, because the result was not the same in all Germanic languages: word-final ē shortened to a in East and West Germanic but to i in Old Norse, and word-final ō shortened to a in Gothic but to o (probably [o]) in early North and West Germanic, with a later raising to u (the sixth century Salic law still has maltho in late Frankish). The shortened overlong vowels in final position developed as regular long vowels from that point on, including the lowering of ē to ā in North and West Germanic. The monophthongization of unstressed au in Northwest Germanic produced a phoneme which merged with this new word-final long ō, while the monophthongization of unstressed ai produced a new ē which did not merge with original ē, but rather with ē₂, as it was not lowered to ā. This split, combined with the asymmetric development in West Germanic, with ē lowering but ō raising, points to an early difference in the articulation height of the two vowels that was not present in North Germanic. It could be seen as evidence that the lowering of ē to ā began in West Germanic at a time when final vowels were still long, and spread to North Germanic through the late Germanic dialect continuum, but only reaching the latter after the vowels had already been shortened. ē₂ is uncertain as a phoneme and only reconstructed from a small number of words; it is posited by the comparative method because whereas all provable instances of inherited (PIE) *ē (PGmc. *ē₁) are distributed in Gothic as ē and the other Germanic languages as *ā, all the Germanic languages agree on some occasions of ē (e.g., Goth/OE/ON hēr 'here' ← late PGmc. *hē₂r). 
Gothic makes no orthographic and therefore presumably no phonetic distinction between ē₁ and ē₂, but the existence of two Proto-Germanic long e-like phonemes is supported by the existence of two e-like Elder Futhark runes, Ehwaz and Eihwaz. Krahe treats ē₂ (secondary ē) as identical with ī. It probably continues PIE ēi, and it may have been in the process of transition from a diphthong to a long simple vowel in the Proto-Germanic period. Lehmann lists the following origins for ē₂: Proto-Germanic developed nasal vowels from two sources. The earlier and much more frequent source was word-final -n (from PIE -n or -m) in unstressed syllables, which at first gave rise to short -ą, -į, -ų, long -į̄, -ę̄, -ą̄, and overlong -ę̂, -ą̂. -ę̄ and -ę̂ then merged into -ą̄ and -ą̂, which later developed into -ǭ and -ǫ̂. Another source, developing only in late Proto-Germanic times, was in the sequences -inh-, -anh-, -unh-, in which the nasal consonant lost its occlusion and was converted into lengthening and nasalisation of the preceding vowel, becoming -ą̄h-, -į̄h-, -ų̄h- (still written as -anh-, -inh-, -unh- in this article). In many cases, the nasality was not contrastive and was merely present as an additional surface articulation. No Germanic language that preserves the word-final vowels has their nasality preserved. Word-final short nasal vowels do not show different reflexes compared to non-nasal vowels. However, the comparative method does require a three-way phonemic distinction between word-final *-ō, *-ǭ and *-ōn, which each has a distinct pattern of reflexes in the later Germanic languages: The distinct reflexes of nasal -ǭ versus non-nasal -ō are caused by the Northwest Germanic raising of final -ō /ɔː/ to /oː/, which did not affect -ǭ. When the vowels were shortened and denasalised, these two vowels no longer had the same place of articulation, and did not merge: -ō became /o/ (later /u/) while -ǭ became /ɔ/ (later /ɑ/). This allowed their reflexes to stay distinct. 
The nasality of word-internal vowels (from -nh-) was more stable, and survived into the early dialects intact. Phonemic nasal vowels definitely occurred in Proto-Norse and Old Norse. They were preserved in Old Icelandic down to at least a.d. 1125, the earliest possible time for the creation of the First Grammatical Treatise, which documents nasal vowels. The PG nasal vowels from -nh- sequences were preserved in Old Icelandic as shown by examples given in the First Grammatical Treatise. For example: The phonemicity is evident from minimal pairs like ǿ̇ra 'younger' vs. ǿra 'vex' < *wor-, cognate with English weary. The inherited Proto-Germanic nasal vowels were joined in Old Norse by nasal vowels from other sources, e.g. loss of *n before s. Modern Elfdalian still includes nasal vowels that directly derive from Old Norse, e.g. gą̊s 'goose' < Old Norse gás (presumably nasalized, although not so written); compare German Gans, showing the original consonant. Similar surface (possibly phonemic) nasal/non-nasal contrasts occurred in the West Germanic languages down through Proto-Anglo-Frisian of AD 400 or so. Proto-Germanic medial nasal vowels were inherited, but were joined by new nasal vowels resulting from the Ingvaeonic nasal spirant law, which extended the loss of nasal consonants (only before -h- in Proto-Germanic) to all environments before a fricative (thus including -mf-, -nþ- and -ns- as well). The contrast between nasal and non-nasal long vowels is reflected in the differing output of nasalized long *ą̄, which was raised to ō in Old English and Old Frisian whereas non-nasal *ā appeared as fronted ǣ. Hence: Proto-Germanic allowed any single consonant to occur in one of three positions: initial, medial and final. However only certain clusters were possible in certain positions. 
It allowed the following clusters in initial and medial position: It allowed the following clusters in medial position only: It allowed continuant + obstruent clusters in medial and final position only: The s + voiceless plosive clusters (sp, st, sk) could appear in any position in a word. Due to the emergence of a word-initial stress accent, vowels in unstressed syllables were gradually reduced over time, beginning at the very end of the Proto-Germanic period and continuing into the history of the various dialects. Already in Proto-Germanic, word-final /e/ and /ɑ/ had been lost, and /e/ had merged with /i/ in unstressed syllables. Vowels in third syllables were also generally lost before dialect diversification began, such as final -i of some present tense verb endings, and in -maz and -miz of the dative plural ending and first person plural present of verbs. Word-final short nasal vowels were however preserved longer, as is reflected in Proto-Norse which still preserved word-final -ą (horna on the Gallehus horns), while the dative plural appears as -mz (gestumz on the Stentoften Runestone). Somewhat greater reduction is found in Gothic, which lost all final-syllable short vowels except u. Old High German and Old English initially preserved unstressed i and u, but later lost them in long-stemmed words and then Old High German lost them in many short-stemmed ones as well, by analogy. Old English shows indirect evidence that word-final -ą was preserved into the separate history of the language. This can be seen in the infinitive ending -an (< *aną) and the strong past participle ending -en (< *-anaz). Since the early Old English fronting of /ɑ/ to /æ/ did not occur in nasalized vowels or before back vowels, this created a vowel alternation because the nasality of the back vowel ą in the infinitive ending prevented the fronting of the preceding vowel: *-aną > *-an, but *-anaz > *-ænæ > *-en. 
Therefore, the Anglo-Frisian brightening must necessarily have occurred very early in the history of the Anglo-Frisian languages, before the loss of final -ą. The outcome of final vowels and combinations in the various daughters is shown in the table below: Some Proto-Germanic endings have merged in all of the literary languages but are still distinct in runic Proto-Norse, e.g. *-īz vs. -ijaz (þrijōz dohtrīz 'three daughters' in the Tune stone vs. the name Holtijaz in the Gallehus horns). Morphology Reconstructions are tentative and multiple versions with varying degrees of difference exist. All reconstructed forms are marked with an asterisk (*). It is often asserted that the Germanic languages have a highly reduced system of inflections as compared with Greek, Latin, or Sanskrit. Although this is true to some extent, it is probably due more to the late time of attestation of Germanic than to any inherent "simplicity" of the Germanic languages. As an example, there are fewer than 500 years between the Gothic Gospels of 360 and the Old High German Tatian of 830, yet Old High German, despite being the most archaic of the West Germanic languages, is missing a large number of archaic features present in Gothic, including dual and passive markings on verbs, reduplication in Class VII strong verb past tenses, the vocative case, and second-position (Wackernagel's Law) clitics. Many more archaic features may have been lost between the Proto-Germanic of 200 BC or so and the attested Gothic language. Furthermore, Proto-Romance and Middle Indic of the fourth century AD—contemporaneous with Gothic—were significantly simpler than Latin and Sanskrit, respectively, and overall probably no more archaic than Gothic. In addition, some parts of the inflectional systems of Greek, Latin, and Sanskrit were innovations that were not present in Proto-Indo-European.
Proto-Germanic had six cases, three genders, three numbers, three moods (indicative, subjunctive (PIE optative), imperative), and two voices (active and passive (PIE middle)). This is quite similar to the state of Latin, Greek, and Middle Indic of c. AD 200. Nouns and adjectives were declined in (at least) six cases: vocative, nominative, accusative, dative, instrumental, genitive. The locative case had merged into the dative case, and the ablative may have merged with either the genitive, dative or instrumental cases. However, sparse remnants of the earlier locative and ablative cases are visible in a few pronominal and adverbial forms. Pronouns were declined similarly, although without a separate vocative form. The instrumental and vocative can be reconstructed only in the singular; the instrumental survives only in the West Germanic languages, and the vocative only in Gothic. Verbs and pronouns had three numbers: singular, dual, and plural. Although the pronominal dual survived into all the oldest languages, the verbal dual survived only into Gothic, and the (presumed) nominal and adjectival dual forms were lost before the oldest records. As in the Italic languages, it may have been lost even before Proto-Germanic became a distinct branch. Several sound changes occurred in the history of Proto-Germanic that were triggered only in some environments but not in others. Some of these were grammaticalised while others were still triggered by phonetic rules and were partially allophonic or surface filters. Probably the most far-reaching alternation was between [*f, *þ, *s, *h, *hw] and [*b, *d, *z, *g, *gw], the voiceless and voiced fricatives, known as grammatischer Wechsel and triggered by the earlier operation of Verner's law. It was found in various environments: Another form of alternation was triggered by the Germanic spirant law, which continued to operate into the separate history of the individual daughter languages.
It is found in environments with suffixal -t, including: An alternation not triggered by sound change was Sievers' law, which caused alternation of suffixal -j- and -ij- depending on the length of the preceding part of the morpheme. If preceded within the same morpheme by only a short vowel followed by a single consonant, -j- appeared. In all other cases, such as when preceded by a long vowel or diphthong, by two or more consonants, or by more than one syllable, -ij- appeared. The distinction between morphemes and words is important here, as the alternant -j- appeared also in words that contained a distinct suffix that in turn contained -j- in its second syllable. A notable example was the verb suffix *-atjaną, which retained -j- despite being preceded by two syllables in a fully formed word. Related to the above was the alternation between -j- and -i-, and likewise between -ij- and -ī-. This was caused by the earlier loss of -j- before -i-, and appeared whenever an ending was attached to a verb or noun with an -(i)j- suffix (which were numerous). Similar, but much more rare, was an alternation between -aV- and -aiC- from the loss of -j- between two vowels, which appeared in the present subjunctive of verbs: *-aų < *-ajų in the first person, *-ai- in the others. A combination of these two effects created an alternation between -ā- and -ai- found in class 3 weak verbs, with -ā- < -aja- < -əja- and -ai- < -əi- < -əji-. I-mutation was the most important source of vowel alternation, and continued well into the history of the individual daughter languages (although it was either absent or not apparent in Gothic). In Proto-Germanic, only -e- was affected, which was raised by -i- or -j- in the following syllable. Examples are numerous: The system of nominal declensions was largely inherited from PIE. Primary nominal declensions were the stems in /a/, /ō/, /n/, /i/, and /u/. 
The first three were particularly important and served as the basis of adjectival declension; there was a tendency for nouns of all other classes to be drawn into them. The first two had variants in /ja/ and /wa/, and /jō/ and /wō/, respectively; originally, these were declined exactly like other nouns of the respective class, but later sound changes tended to distinguish these variants as their own subclasses. The /n/ nouns had various subclasses, including /ōn/ (masculine and feminine), /an/ (neuter), and /īn/ (feminine, mostly abstract nouns). There was also a smaller class of root nouns (ending in various consonants), nouns of relationship (ending in /er/), and neuter nouns in /z/ (this class was greatly expanded in German). Present participles, and a few nouns, ended in /nd/. The neuter nouns of all classes differed from the masculines and feminines in their nominative and accusative endings, which were alike. Adjectives agree with the noun they qualify in case, number, and gender. Adjectives evolved into strong and weak declensions, originally with indefinite and definite meaning, respectively. As a result of its definite meaning, the weak form came to be used in the daughter languages in conjunction with demonstratives and definite articles. The terms strong and weak are based on the later development of these declensions in languages such as German and Old English, where the strong declensions have more distinct endings. In the proto-language, as in Gothic, such terms have no relevance. The strong declension was based on a combination of the nominal /a/ and /ō/ stems with the PIE pronominal endings; the weak declension was based on the nominal /n/ declension. Proto-Germanic originally had two demonstratives (proximal *hi-/*hei-/*he- 'this', distal *sa/*sō/*þat 'that') which could serve as both adjectives and pronouns. The proximal was already obsolescent in Gothic (e.g. Goth acc. hina, dat. himma, neut. hita) and appears entirely absent in North Germanic. 
In the West Germanic languages, it evolved into a third-person pronoun, displacing the inherited *iz in the northern languages while being ousted itself in the southern languages, such as Old High German. This is the basis of the distinction between English him/her (with h- from the original proximal demonstrative) and German ihm/ihr (lacking h-).[citation needed] Ultimately, only the distal survived in the function of demonstrative. In most languages, it developed a second role as definite article, and underlies both the English determiners the and that. In the North-West Germanic languages (but not in Gothic), a new proximal demonstrative ('this' as opposed to 'that') evolved by appending -si to the distal demonstrative (e.g. Runic Norse nom.sg. sa-si, gen. þes-si, dat. þeim-si), with complex subsequent developments in the various daughter languages. The new demonstrative underlies the English determiners this, these and those. (Originally, these, those were dialectal variants of the masculine plural of this.) Proto-Germanic had only two tenses (past and present), compared to 5–7 in Greek, Latin, Proto-Slavic and Sanskrit. Some of this difference is due to deflexion, featured by a loss of tenses present in Proto-Indo-European. For example, Donald Ringe assumes for Proto-Germanic an early loss of the PIE imperfect aspect (something that also occurred in most other branches), followed by merging of the aspectual categories present-aorist and the mood categories indicative-subjunctive. (This assumption allows him to account for cases where Proto-Germanic has present indicative verb forms that look like PIE aorist subjunctives.) However, many of the tenses of the other languages (e.g. future, future perfect, pluperfect, Latin imperfect) are not cognate with each other and represent separate innovations in each language. 
For example, the Greek future uses a -s- ending, apparently derived from a desiderative construction that in PIE was part of the system of derivational morphology (not the inflectional system); the Sanskrit future uses a -sy- ending, from a different desiderative verb construction and often with a different ablaut grade from Greek; while the Latin future uses endings derived either from the PIE subjunctive or from the PIE verb */bʱuː/ 'to be'. Similarly, the Latin imperfect and pluperfect stem from Italic innovations and are not cognate with the corresponding Greek or Sanskrit forms; and while the Greek and Sanskrit pluperfect tenses appear cognate, there are no parallels in any other Indo-European languages, leading to the conclusion that this tense is either a shared Greek–Sanskrit innovation or separate, coincidental developments in the two languages. In this respect, Proto-Germanic can be said to be characterized by the failure to innovate new synthetic tenses as much as the loss of existing tenses. Later Germanic languages did innovate new tenses, derived through periphrastic constructions, with Modern English likely possessing the most elaborated tense system ("Yes, the house will still be being built a month from now"). On the other hand, even the past tense was later lost (or widely lost) in most High German dialects as well as in Afrikaans. Verbs in Proto-Germanic were divided into two main groups, called "strong" and "weak", according to the way the past tense is formed. Strong verbs use ablaut (i.e. a different vowel in the stem) and/or reduplication (derived primarily from the Proto-Indo-European perfect), while weak verbs use a dental suffix (now generally held to be a reflex of the reduplicated imperfect of PIE *dʰeH₁- originally 'put', in Germanic 'do'). Strong verbs were divided into seven main classes while weak verbs were divided into five main classes (although no attested language has more than four classes of weak verbs). 
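The strong/weak contrast in past-tense formation can be set side by side in a toy sketch. The single e → a ablaut pattern (as in class 4 *nemaną, past singular *nam 'take/took') and the bare dental suffix are deliberate simplifications for illustration, not full reconstructions of the paradigms.

```python
def strong_past(stem):
    """Strong past via ablaut: present-grade e -> past-singular a
    (the class 4/5 pattern, cf. *nemaną ~ *nam 'take/took')."""
    return stem.replace("e", "a", 1)

def weak_past(stem):
    """Weak past via the dental suffix (the ancestor of English -ed)."""
    return stem + "d"

print(strong_past("nem"))  # → nam (vowel change, no suffix)
print(weak_past("salbō"))  # → salbōd (cf. Gothic salbōda 'anointed')
```

The point of the contrast: the strong formation changes the stem vowel and adds nothing, while the weak formation leaves the stem alone and attaches material after it.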
Strong verbs generally have no suffix in the present tense, although some have a -j- suffix that is a direct continuation of the PIE -y- suffix, and a few have an -n- suffix or infix that continues the -n- infix of PIE. Almost all weak verbs have a present-tense suffix, which varies from class to class. An additional small, but very important, group of verbs formed their present tense from the PIE perfect (and their past tense like weak verbs); for this reason, they are known as preterite-present verbs. All three of the previously mentioned groups of verbs—strong, weak and preterite-present—are derived from PIE thematic verbs; an additional very small group derives from PIE athematic verbs, and one verb *wiljaną 'to want' forms its present indicative from the PIE optative mood. Proto-Germanic verbs have three moods: indicative, subjunctive and imperative. The subjunctive mood derives from the PIE optative mood. Indicative and subjunctive moods are fully conjugated throughout the present and past, while the imperative mood existed only in the present tense and lacked first-person forms. Proto-Germanic verbs have two voices, active and passive, the latter deriving from the PIE mediopassive voice. The Proto-Germanic passive existed only in the present tense (an inherited feature, as the PIE perfect had no mediopassive). On the evidence of Gothic—the only Germanic language with a reflex of the Proto-Germanic passive—the passive voice had a significantly reduced inflectional system, with a single form used for all persons of the dual and plural. Note that although Old Norse (like modern Faroese and Icelandic) has an inflected mediopassive, it is not inherited from Proto-Germanic, but is an innovation formed by attaching the reflexive pronoun to the active voice. Although most Proto-Germanic strong verbs are formed directly from a verbal root, weak verbs are generally derived from an existing noun, verb or adjective (so-called denominal, deverbal and deadjectival verbs). 
For example, a significant subclass of Class I weak verbs is formed by (deverbal) causative verbs. These are formed in a way that reflects a direct inheritance from the PIE causative class of verbs. PIE causatives were formed by adding an accented suffix *-éi̯e/éi̯o to the o-grade of a non-derived verb. In Proto-Germanic, causatives are formed by adding a suffix -j/ij- (the reflex of PIE *-éi̯e/éi̯o) to the past-tense ablaut (mostly with the reflex of PIE o-grade) of a strong verb (the reflex of PIE non-derived verbs), with Verner's Law voicing applied (the reflex of the PIE accent on the *-éi̯e/éi̯o suffix). Examples: As in other Indo-European languages, a verb in Proto-Germanic could have a preverb attached to it, modifying its meaning (cf. e.g. *fra-werþaną 'to perish', derived from *werþaną 'to become'). In Proto-Germanic, the preverb was still a clitic that could be separated from the verb (as also in Gothic, as shown by the behavior of second-position clitics, e.g. diz-uh-þan-sat 'and then he seized', with clitics uh 'and' and þan 'then' interpolated into dis-sat 'he seized') rather than a bound morpheme that is permanently attached to the verb. At least in Gothic, preverbs could also be stacked one on top of the other (similar to Sanskrit, different from Latin), e.g. ga-ga-waírþjan 'to reconcile'. An example verb: *nemaną 'to take' (class 4 strong verb). Schleicher's PIE fable rendered into Proto-Germanic August Schleicher wrote a fable in the PIE language he had just reconstructed, which, though it has been updated a few times by others, still bears his name. Below is a rendering of this fable into Proto-Germanic.[citation needed] The first is a direct phonetic evolution of the PIE text. It does not take into account various idiomatic and grammatical shifts that occurred over the period. For example, the original text uses the imperfect tense, which disappeared in Proto-Germanic.
The second version takes these differences into account, and is therefore closer to the language the Germanic people would have actually spoken. Reconstructed Proto-Germanic, phonetic evolution derived from reconstructed PIE only Reconstructed Proto-Germanic, with more probable grammar and vocabulary derived from later Germanic languages English See also Notes References Sources External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)] | [TOKENS: 1012]
Contents Fine-tuning (deep learning) Fine-tuning (in deep learning) is the process of adapting a model trained for one task (the upstream task) to perform a different, usually more specific, task (the downstream task). It is considered a form of transfer learning, as it reuses knowledge learned from the original training objective. Fine-tuning involves applying additional training (e.g., on new data) to the parameters of a neural network that have been pre-trained. Many variants exist. The additional training can be applied to the entire neural network, or to only a subset of its layers, in which case the layers that are not being fine-tuned are "frozen" (i.e., not changed during backpropagation). A model may also be augmented with "adapters"—lightweight modules inserted into the model's architecture that nudge the embedding space for domain adaptation. These contain far fewer parameters than the original model and can be fine-tuned in a parameter-efficient way by tuning only their weights and leaving the rest of the model's weights frozen. For some architectures, such as convolutional neural networks, it is common to keep the earlier layers (those closest to the input layer) frozen, as they capture lower-level features, while later layers often discern high-level features that can be more related to the task that the model is trained on. Models that are pre-trained on large, general corpora are usually fine-tuned by reusing their parameters as a starting point and adding a task-specific layer trained from scratch. Fine-tuning the full model is also common and often yields better results, but is more computationally expensive. Fine-tuning is typically accomplished via supervised learning, but there are also techniques to fine-tune a model using weak supervision. Fine-tuning can be combined with a reinforcement learning from human feedback-based objective to produce language models such as ChatGPT (a fine-tuned version of GPT models) and Sparrow. 
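The freezing variants described above can be illustrated with a toy update step. This is a hedged sketch in plain Python (no real framework; all names are hypothetical), showing only the core idea that frozen layers are skipped when parameters are updated:

```python
# Toy sketch of layer freezing during fine-tuning (hypothetical, framework-free).
# A "model" is a list of layers; each layer has weights and a `frozen` flag.
# During the update step, gradients are applied only to unfrozen layers.

def sgd_update(layers, grads, lr=0.1):
    """Apply one SGD step, skipping frozen layers."""
    for layer, grad in zip(layers, grads):
        if layer["frozen"]:
            continue  # earlier layers keep their pre-trained weights
        layer["w"] = [w - lr * g for w, g in zip(layer["w"], grad)]

# Pre-trained "model": an early feature-extracting layer (frozen) and a
# task-specific head (trainable), mirroring the common CNN recipe above.
model = [
    {"name": "early", "w": [1.0, 2.0], "frozen": True},
    {"name": "head",  "w": [0.5, 0.5], "frozen": False},
]
grads = [[0.3, 0.3], [0.2, -0.2]]  # pretend gradients from backpropagation

sgd_update(model, grads)
print(model[0]["w"])  # frozen layer unchanged: [1.0, 2.0]
print(model[1]["w"])  # head updated (approximately [0.48, 0.52])
```

In a real framework the same effect is usually achieved by marking parameters as not requiring gradients; the principle, selective updates over a pre-trained starting point, is identical.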
Robustness Fine-tuning can degrade a model's robustness to distribution shifts. One mitigation is to linearly interpolate a fine-tuned model's weights with the weights of the original model, which can greatly increase out-of-distribution performance while largely retaining the in-distribution performance of the fine-tuned model. Variants Low-rank adaptation (LoRA) is an adapter-based technique for efficiently fine-tuning models. The basic idea is to design a low-rank matrix that is then added to the original weight matrix. An adapter, in this context, is a collection of low-rank matrices which, when added to a base model, produces a fine-tuned model. It allows for performance that approaches full-model fine-tuning with lower space requirements. A language model with billions of parameters may be LoRA fine-tuned with only a few million parameters. LoRA-based fine-tuning has become popular in the Stable Diffusion community. Support for LoRA was integrated into the diffusers library from Hugging Face. Support for LoRA and similar techniques is also available for a wide range of other models through Hugging Face's parameter-efficient fine-tuning (PEFT) package. Representation fine-tuning (ReFT) is a technique developed by researchers at Stanford University aimed at fine-tuning large language models (LLMs) by modifying less than 1% of their representations. Unlike parameter-efficient fine-tuning (PEFT) methods, which mainly focus on updating weights, ReFT targets representations, suggesting that modifying representations might be a more effective strategy than updating weights. ReFT methods operate on a frozen base model and learn task-specific interventions that manipulate a small fraction of hidden representations, steering model behavior toward solving downstream tasks at inference time. 
One specific method within the ReFT family is low-rank linear subspace ReFT (LoReFT), which intervenes on hidden representations in the linear subspace spanned by a low-rank projection matrix. LoReFT can be seen as the representation-based equivalent of low-rank adaptation (LoRA). Applications Fine-tuning is common in natural language processing (NLP), especially in the domain of language modeling. Large language models like OpenAI's series of GPT foundation models can be fine-tuned on data for specific downstream NLP tasks (tasks that use a pre-trained model) to improve performance over the unmodified pre-trained model. Platforms such as Semrush's AI Visibility Toolkit and Enterprise AIO exemplify how fine-tuned models are being used for entity-level monitoring: tracking how named entities are referenced and represented within responses generated by large-language-model-based answer engines. Commercial models Commercially offered large language models can sometimes be fine-tuned if the provider offers a fine-tuning API. As of June 19, 2023, language-model fine-tuning APIs are offered by OpenAI and Microsoft Azure's Azure OpenAI Service for a subset of their models, as well as by Google Cloud Platform for some of their PaLM models, and by others. See also References
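The low-rank update at the heart of LoRA, described above, amounts to adding a scaled product of two small matrices to a frozen weight matrix. Below is a minimal, framework-free sketch (all names hypothetical; real implementations such as Hugging Face's PEFT apply this per layer inside the forward pass rather than materializing the sum up front):

```python
# Minimal LoRA sketch (hypothetical, framework-free): the frozen base weight
# matrix W is augmented by a low-rank update B @ A, scaled by alpha / r.
# Only A and B, which hold far fewer parameters than W, would be trained.

def matmul(X, Y):
    """Plain-Python matrix multiply for small lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha=1.0):
    """Return W + (alpha / r) * (B @ A), where r is the shared low rank."""
    r = len(A)  # A is r x d_in, B is d_out x r
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 base weight with a rank-1 adapter: B is 2x1, A is 1x2, so the adapter
# stores 4 numbers instead of another full 2x2 matrix (the savings grow
# dramatically at realistic dimensions).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.1, 0.2]]
B = [[1.0], [2.0]]
print(lora_effective_weight(W, A, B))  # approximately [[1.1, 0.2], [0.2, 1.4]]
```

The parameter saving is the point: for a d×d weight matrix, a rank-r adapter stores 2·d·r numbers rather than d², which is how billion-parameter models can be adapted with only millions of trainable parameters.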
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_datasets_in_computer_vision_and_image_processing] | [TOKENS: 273]
Contents List of datasets in computer vision and image processing This is a list of datasets for machine learning research. It is part of the list of datasets for machine-learning research. These datasets consist primarily of images or videos for tasks such as object detection, facial recognition, and multi-label classification. Object detection and recognition See (Calli et al., 2015) for a review of 33 datasets of 3D objects as of 2015. See (Downs et al., 2022) for a review of more datasets as of 2022. Facial recognition In computer vision, face images have been used extensively to develop facial recognition systems, face detection, and many other projects that use images of faces. See for a curated list of datasets, focused on the pre-2005 period. Action recognition Handwriting and character recognition Aerial images Underwater images Other images References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Outline_of_machine_learning] | [TOKENS: 230]
Contents Outline of machine learning The following outline is provided as an overview of, and topical guide to, machine learning: Machine learning (ML) is a subfield of artificial intelligence within computer science that evolved from the study of pattern recognition and computational learning theory. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". ML involves the study and construction of algorithms that can learn from and make predictions on data. These algorithms operate by building a model from a training set of example observations to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions. How can machine learning be categorized? Applications of machine learning Machine learning hardware Machine learning tools Machine learning methods Dimensionality reduction Ensemble learning Meta-learning Reinforcement learning Supervised learning Bayesian statistics Decision tree algorithm Linear classifier Unsupervised learning Artificial neural network Association rule learning Hierarchical clustering Cluster analysis Anomaly detection Semi-supervised learning Deep learning Machine learning research History of machine learning Machine learning projects Machine learning organizations Machine learning publications Persons influential in machine learning See also Further reading References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Syntax] | [TOKENS: 2240]
Contents Syntax In linguistics, syntax (/ˈsɪntæks/ SIN-taks) is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency), agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). Diverse approaches, such as generative grammar and functional grammar, offer unique perspectives on syntax, reflecting its complexity and centrality to understanding human language. Etymology The word syntax comes from the ancient Greek word σύνταξις, meaning an orderly or systematic arrangement, which consists of σύν- (syn-, "together" or "alike") and τάξις (táxis, "arrangement"). In Hellenistic Greek, this also specifically developed a use referring to the grammatical order of words, with a slightly altered spelling: συντάσσειν. The English term, which first appeared in 1548, is partly borrowed from Latin (syntaxis) and Greek, though the Latin term developed from Greek. Topics The field of syntax contains a number of topics that a syntactic theory is often designed to handle. The relation between the topics is treated differently in different theories, and some of them may not be considered to be distinct but instead to be derived from one another (i.e. word order can be seen as the result of movement rules derived from grammatical relations). One basic description of a language's syntax is the sequence in which the subject (S), verb (V), and object (O) usually appear in sentences. Over 85% of languages usually place the subject first, either in the sequence SVO or the sequence SOV. The other possible sequences are VSO, VOS, OVS, and OSV, the last three of which are rare. In most generative theories of syntax, the surface differences arise from a more complex clausal phrase structure, and each order may be compatible with multiple derivations. 
However, word order can also reflect the semantics or function of the ordered elements. Another description of a language considers the set of possible grammatical relations in a language or in general and how they behave in relation to one another in the morphosyntactic alignment of the language. The description of grammatical relations can also reflect transitivity, passivization, and head-dependent marking or other agreement. Languages have different criteria for grammatical relations. For example, subjecthood criteria may have implications for how the subject is referred to from a relative clause or is coreferential with an element in an infinite clause. Constituency is the feature of being a constituent and how words can work together to form a constituent (or phrase). Constituents are often moved as units, and the constituent can be the domain of agreement. Some languages allow discontinuous phrases, in which words belonging to the same constituent are not immediately adjacent but are broken up by other constituents. Constituents may be recursive, as they may consist of other constituents, potentially of the same type. Early history The Aṣṭādhyāyī of Pāṇini, from c. 4th century BC in Ancient India, is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory, since works on grammar had been written long before modern syntax came about. In the West, the school of thought that came to be known as "traditional grammar" began with the work of Dionysius Thrax. For centuries, a framework known as grammaire générale, first expounded in 1660 by Antoine Arnauld and Claude Lancelot in a book of the same title, dominated work in syntax: it took as its basic premise the assumption that language is a direct reflection of thought processes, and so there is a single most natural way to express a thought. 
However, in the 19th century, with the development of historical-comparative linguistics, linguists began to realize the sheer diversity of human language and to question fundamental assumptions about the relationship between language and logic. It became apparent that there was no such thing as the most natural way to express a thought and so logic could no longer be relied upon as a basis for studying the structure of language.[citation needed] The Port-Royal grammar modeled the study of syntax upon that of logic. (Indeed, large parts of Port-Royal Logic were copied or adapted from the Grammaire générale.) Syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "subject – copula – predicate". Initially, that view was adopted even by the early comparative linguists such as Franz Bopp. The central role of syntax within theoretical linguistics became clear only in the 20th century, which could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. (For a detailed and critical survey of the history of syntax in the last two centuries, see the monumental work by Giorgio Graffi (2001).) Theories There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works of Derek Bickerton, sees syntax as a branch of biology, since it conceives of syntax as the study of linguistic knowledge as embodied in the human mind. Other linguists (e.g., Gerald Gazdar) take a more Platonistic view since they regard syntax to be the study of an abstract formal system. Yet others (e.g., Joseph Greenberg) consider syntax a taxonomical device to reach broad generalizations across languages. Syntacticians have attempted to explain the causes of word-order variation within individual languages and cross-linguistically. 
Much of such work has been done within the framework of generative grammar, which holds that syntax depends on a genetic endowment common to the human species. In that framework and in others, linguistic typology and universals have been primary explicanda. Alternative explanations, such as those by functional linguists, have been sought in language processing. It has been suggested that the brain finds it easier to parse syntactic patterns that are either right- or left-branching but not mixed. The most widely held approach is the performance–grammar correspondence hypothesis of John A. Hawkins, who suggests that language is a non-innate adaptation to innate cognitive mechanisms. Cross-linguistic tendencies are considered as being based on language users' preference for grammars that are organized efficiently and on their avoidance of word orderings that cause processing difficulty. Some languages, however, exhibit regular inefficient patterning, such as the VO languages Chinese, with the adpositional phrase before the verb, and Finnish, which has postpositions, but there are few other profoundly exceptional languages. More recently, it has been suggested that the left- versus right-branching patterns are cross-linguistically related only to the place of role-marking connectives (adpositions and subordinators), which links the phenomena with the semantic mapping of sentences. Theoretical syntactic models Dependency grammar is an approach to sentence structure in which syntactic units are arranged according to the dependency relation, as opposed to the constituency relation of phrase structure grammars. Dependencies are directed links between words. The (finite) verb is seen as the root of all clause structure and all the other words in the clause are either directly or indirectly dependent on this root (i.e. the verb). 
Some prominent dependency-based theories of syntax are the following: Lucien Tesnière (1893–1954) is widely seen as the father of modern dependency-based theories of syntax and grammar. He argued strongly against the binary division of the clause into subject and predicate that is associated with the grammars of his day (S → NP VP) and remains at the core of most phrase structure grammars. In place of that division, he positioned the verb as the root of all clause structure. Categorial grammar is an approach in which constituents combine as function and argument, according to combinatory possibilities specified in their syntactic categories. For example, other approaches might posit a rule that combines a noun phrase (NP) and a verb phrase (VP), but CG would posit a syntactic category NP and another NP\S, read as "a category that searches to the left (indicated by \) for an NP (the element on the left) and outputs a sentence (the element on the right)." Thus, the syntactic category for an intransitive verb is a complex formula representing the fact that the verb acts as a function word requiring an NP as an input and produces a sentence level structure as an output. The complex category is notated as (NP\S) instead of V. The category of transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence. That is notated as (NP/(NP\S)), which means, "A category that searches to the right (indicated by /) for an NP (the object) and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence." Tree-adjoining grammar is a categorial grammar that adds in partial tree structures to the categories. Theoretical approaches to syntax that are based upon probability theory are known as stochastic grammars. One common implementation of such an approach makes use of a neural network or connectionism. 
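The function application at the core of categorial grammar can be sketched as a tiny program. This is an illustrative toy following the notation used above (the encoding and names are my own assumptions, not a standard CG implementation):

```python
# Toy sketch of categorial grammar combination, following the notation in the
# text: NP\S seeks an NP to its left and yields S; NP/(NP\S) seeks an NP to
# its right and yields NP\S. A complex category is (argument, slash, result).

NP = "NP"
S = "S"
IV = (NP, "\\", S)   # intransitive verb category, e.g. "sleeps"
TV = (NP, "/", IV)   # transitive verb category, e.g. "chases"

def combine(left, right):
    """Combine two adjacent categories by function application, or return None."""
    # Forward application: a /-category consumes its argument to the right.
    if isinstance(left, tuple) and left[1] == "/" and left[0] == right:
        return left[2]
    # Backward application: a \-category consumes its argument to the left.
    if isinstance(right, tuple) and right[1] == "\\" and right[0] == left:
        return right[2]
    return None

# "dogs sleep": NP + NP\S -> S
print(combine(NP, IV))   # S
# "dogs chase cats": NP/(NP\S) + NP -> NP\S, then NP + NP\S -> S
vp = combine(TV, NP)
print(combine(NP, vp))   # S
```

The second example mirrors the text's transitive-verb derivation: the verb first consumes its object to form a VP-like function, which then consumes the subject to yield a sentence.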
Functionalist models of grammar study the form–function interaction by performing a structural and a functional analysis. Generative syntax is the study of syntax within the overarching framework of generative grammar. Generative theories of syntax typically propose analyses of grammatical patterns using formal tools such as phrase structure grammars augmented with additional operations such as syntactic movement. Their goal in analyzing a particular language is to specify rules which generate all and only the expressions which are well-formed in that language. In doing so, they seek to identify innate domain-specific principles of linguistic cognition, in line with the wider goals of the generative enterprise. Generative syntax is among the approaches that adopt the principle of the autonomy of syntax by assuming that meaning and communicative intent are determined by the syntax, rather than the other way around. Generative syntax was proposed in the late 1950s by Noam Chomsky, building on earlier work by Zellig Harris, Louis Hjelmslev, and others. Since then, numerous theories have been proposed under its umbrella. Other theories that find their origin in the generative paradigm are: The Cognitive Linguistics framework stems from generative grammar but adheres to evolutionary, rather than Chomskyan, linguistics. Cognitive models often recognise the generative assumption that the object belongs to the verb phrase. Cognitive frameworks include the following: See also References Further reading External links
========================================
[SOURCE: https://www.fast.ai/posts/2023-11-07-dislightenment.html] | [TOKENS: 9847]
AI Safety and the Age of Dislightenment Jeremy Howard July 10, 2023 On this page Abstract Proposals for stringent AI model licensing and surveillance will likely be ineffective or counterproductive, concentrating power in unsustainable ways, and potentially rolling back the societal gains of the Enlightenment. The balance between defending society and empowering society to defend itself is delicate. We should advocate for openness, humility and broad consultation to develop better responses aligned with our principles and values — responses that can evolve as we learn more about this technology with the potential to transform society for good or ill. Executive summary Artificial Intelligence is moving fast, and we don’t know what might turn out to be possible. OpenAI CEO Sam Altman thinks AI might “capture the light cone of all future value in the universe”. But things might go wrong, with some experts warning of “the risk of extinction from AI”. This has led many to propose an approach to regulating AI, exemplified by the whitepaper “Frontier AI Regulation: Managing Emerging Risks to Public Safety” (which we’ll refer to as “FAR”) and by the Parliament version of the EU AI Act. Other experts, however, counter that “There is so much attention flooded onto x-risk (existential risk)… that it ‘takes the air out of more pressing issues’ and insidiously puts social pressure on researchers focused on other current risks.” Important as current risks are, does the threat of human extinction mean we should go ahead with this kind of regulation anyway? Perhaps not. As we’ll see, if AI turns out to be powerful enough to be a catastrophic threat, the proposal may not actually help. In fact it could make things much worse, by creating a power imbalance so severe that it leads to the destruction of society. These concerns apply to all regulations that try to ensure the models themselves (“development”) are safe, rather than just how they’re used. 
The effects of these regulations may turn out to be impossible to undo, and therefore we should be extremely careful before we legislate them. The kinds of model development that FAR and the AI Act aim to regulate are “foundation models” — general-purpose AI which can handle (to varying degrees of success) nearly any problem you throw at them. There is no way to ensure that any general-purpose device (like, say, a computer, or a pen) can’t ever be used to cause harm. Therefore, the only way to ensure that AI models can’t be misused is to ensure that no one can use them directly. Instead, they must be limited to a tightly controlled narrow service interface (like ChatGPT, an interface to GPT-4). But those with full access to AI models (such as those inside the companies that host the service) have enormous advantages over those limited to “safe” interfaces. If AI becomes extremely powerful, then full access to models will be critical to those who need to remain competitive, as well as to those who wish to cause harm. They can simply train their own models from scratch, or exfiltrate existing ones through blackmail, bribery, or theft. This could lead to a society where only groups with the massive resources to train foundation models, or the moral disregard to steal them, have access to humanity’s most powerful technology. These groups could become more powerful than any state. Historically, large power differentials have led to violence and subservience of whole societies. If we regulate now in a way that increases centralisation of power in the name of “safety”, we risk rolling back the gains made from the Age of Enlightenment, and instead entering a new age: the Age of Dislightenment. Instead, we could maintain the Enlightenment ideas of openness and trust, such as by supporting open-source model development. Open source has enabled huge technological progress through broad participation and sharing. Perhaps open AI models could do the same. 
Broad participation could allow more people with a wider variety of expertise to help identify and counter threats, thus increasing overall safety — as we’ve previously seen in fields like cyber-security. There are interventions we can make now, including the regulation of “high-risk applications” proposed in the EU AI Act. By regulating applications we focus on real harms and can make those most responsible directly liable. Another useful approach in the AI Act is to regulate disclosure, to ensure that those using models have the information they need to use them appropriately. AI impacts are complex, and as such there is unlikely to be any one panacea. We will not truly understand the impacts of advanced AI until we create it. Therefore we should not be in a rush to regulate this technology, and should be careful to avoid a cure which is worse than the disease. The big problem The rapid development of increasingly capable AI has many people asking to be protected, and many offering that protection. The latest is a white paper titled: “Frontier AI Regulation: Managing Emerging Risks to Public Safety’’ (FAR). Many authors of the paper are connected to OpenAI and Google, and to various organizations funded by investors of OpenAI and Google. FAR claims that “government involvement will be required to ensure that such ‘frontier AI models’ are harnessed in the public interest”. But can we really ensure such a thing? At what cost? There’s one huge, gaping problem which FAR fails to address.1 Anyone with access to the full version of a powerful AI model has far more power than someone that can only access that model through a restricted service. But very few people will have access to the full model. If AI does become enormously powerful, then this huge power differential is unsustainable. 
While superficially seeming to check off various safety boxes, the regulatory regime being advanced in FAR ultimately leads to a vast amount of power being placed into the entrenched companies (by virtue of them having access to the raw models), giving them an information asymmetry against all other actors - including governments seeking to regulate or constrain them. It may lead to the destruction of society. Here’s why: because these models are general-purpose computing devices, it is impossible to guarantee they can’t be used for harmful applications. That would be like trying to make a computer that can’t be misused (such as for emailing a blackmail threat). The full original model is vastly more powerful than any “ensured safe” service based on it can ever be. The full original model is general-purpose: it can be used for anything. But if you give someone a general-purpose computing device, you can’t be sure they won’t use it to cause harm. So instead, you give them access to a service which provides a small window into the full model. For instance, OpenAI provides public access to a tightly controlled and tuned text-based conversational interface to GPT-4, but does not provide full access to the GPT-4 model itself. If you control a powerful model that mediates all consumption and production of information,2 and it’s a proprietary secret, you can shape what people believe, how people act — and censor whatever you please. The ideas being advanced in FAR ultimately lead to the frontier of AI becoming inaccessible to everyone who doesn’t work at a small number of companies, whose dominance will be enshrined by virtue of these ideas. This is an immensely dangerous and brittle path for society to go down. The race So let’s recap what happens under these regulatory proposals. 
We have the world’s most powerful technology, rapidly developing all the time, but only a few big companies have access to the most powerful version of that technology that allows it to be used in an unrestricted manner. What happens next? Obviously, everyone who cares about power and money now desperately needs to find a way to get full access to these models. After all, anyone that doesn’t have full access to the most powerful technology in history can’t possibly compete. The good news for them is that the models are, literally, just a bunch of numbers. They can be copied trivially easily, and once you’ve got them, you can pass them around to all your friends for nothing. (FAR has a whole section on this, which it calls “The Proliferation Problem”.) There are plenty of experts on exfiltrating data around, who know how to take advantage of blackmail, bribery, social engineering, and various other methods which experience tells us are highly effective. For those with the discretion not to use such unsavory methods, but with access to resources, they too can join the ranks of the AI-capable by spending $100m or so.3 Even the smallest company on the Fortune Global 2000 has $7 billion annual revenue, making such an expenditure well within their budget. And of course most country governments could also afford such a bill. Of course, none of these organizations could make these models directly available to the public without contravening the requirements of the proposed regulations, but by definition at least some people in each organization will have access to the power of the full model. Those who crave power and wealth, but fail to get access to model weights, now have a new goal: get themselves into positions of power at organizations that have big models, or get themselves into positions of power at the government departments that make these decisions. 
Organizations that started out as well-meaning attempts to develop AI for societal benefit will soon find themselves part of the corporate profit-chasing machinery that all companies join as they grow, run by people that are experts at chasing profits. The truth is that this entire endeavor, this attempt to control the use of AI, is pointless and ineffective. Not only is “proliferation” of models impossible to control (because digital information is so easy to exfiltrate and copy), it turns out that restrictions on the amount of compute for training models are also impossible to enforce. That’s because it’s now possible for people all over the world to virtually join up and train a model together. For instance, Together Computer has created a fully decentralized, open, scalable cloud for AI, and recent research has shown it is possible to go a long way with this kind of approach. Graphics processing units (GPUs), the actual hardware used for training models, are the exact same hardware used for playing computer games. There is more compute capacity in the world currently deployed for playing games than for AI. Gamers around the world can simply install a small piece of software on their computers to opt into helping train these open-source models. Organizing such a large-scale campaign would be difficult, but not without precedent, as seen in the success of projects such as Folding@Home and SETI@Home. And developers are already thinking about how to ensure that regular people can continue to train these models — for instance, in a recent interview with Lex Fridman, Comma.ai founder George Hotz explained how his new company, Tiny Corp, is working on the “Tiny Rack”, which he explains is powered based on the premise: “What’s the most power you can get into your house without arousing suspicion? And one of the answers is an electric car charger.” So he’s building an AI model training system that uses the same amount of power as a car charger. 
The AI safety community is well aware of this problem, and has proposed various solutions.4 For instance, one recent influential paper by AI policy expert Yo Shavit, which examines surveillance mechanisms that can be added to computer chips, points out that: “As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development.” Any approach to this must ensure that every manufacturer of such chips be required to include that surveillance capability into their chips, since obviously if a single company failed to do so, then everyone that wanted to train their own powerful models would use that company’s chips. Shavit notes that “exhaustively enforcing such rules at the hardware-level would require surveilling and policing individual citizens’ use of their personal computers, which would be highly unacceptable on ethical grounds”. The reality is however that such rules would be required for centralization and control to be effective, since personal computers can be used to train large models by simply connecting them over the internet. When the self-described pioneer of the AI Safety movement, Eliezer Yudkowsky, proposed airstrikes on unauthorized data centers and the threat of nuclear war to ensure compliance from states failing to control unauthorized use of computation capability, many were shocked. But bombing data centers and global surveillance of all computers is the only way to ensure the kind of safety compliance that FAR proposes.5 Regulate usage, not development Alex Engler points out an alternative approach to enforced safety standards or licensing of models, which is to “regulate risky and harmful applications, not open-source AI models’’. 
This is how most regulations work: through liability. If someone does something bad, they get in trouble. If someone creates a general-purpose tool that someone else uses to do something bad, the tool-maker doesn’t get in trouble. “Dual use” technologies like the internet, computers, and pen and paper are not restricted so that only big companies can provide them: anyone is allowed to build a computer, or make their own paper. They don’t have to ensure that what they build can only be used for societal benefit. This is a critical distinction: the distinction between regulating usage (that is, actually putting a model into use by making it part of a system — especially a high-risk system like medicine) vs. regulating development (that is, the process of training the model). The reason this distinction is critical is that these models are, in fact, nothing but mathematical functions. They take as input a bunch of numbers, and calculate and return a different bunch of numbers. They don’t do anything themselves — they can only calculate numbers. However, those calculations can be very useful! In fact, computers themselves are merely calculating machines (hence their name: “computers”). They are useful at the point they are used — that is, connected to some system that can actually do something. FAR addresses this distinction, claiming “Improvements in AI capabilities can be unpredictable, and are often difficult to fully understand without intensive testing. Regulation that does not require models to go through sufficient testing before deployment may therefore fail to reliably prevent deployed models from posing severe risks.” This is a non-sequitur: because models cannot cause harm without being used, developing a model cannot in itself be a harmful activity.6 Furthermore, because we are discussing general-purpose models, we cannot ensure the safety of the model itself — it’s only possible to try to secure the use of a model.
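To make this point concrete, here is a toy sketch of what a model is at this level of abstraction: a pure function from numbers to numbers. The shape of the function and the weight values are invented purely for illustration; a frontier model is simply vastly more of the same.

```python
import math

# A "model", at its most abstract: a mathematical function that takes a
# bunch of numbers as input and returns a different bunch of numbers.
# These weights are made up for illustration only.
def toy_model(x, weights):
    # one linear layer: multiply inputs by weights and sum each row
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    # normalize into scores that sum to 1 (a softmax)
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

weights = [[0.2, -0.1], [0.4, 0.3]]
scores = toy_model([1.0, 2.0], weights)
# Computing `scores` does nothing by itself; any benefit or harm comes
# from whatever system acts on the output.
```

The calculation alone is inert. It is only when some deployed system connects outputs like `scores` to an action in the world that usage, and hence the potential for harm, begins — which is exactly where regulation can attach.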
Another useful approach to regulation is to consider securing access to sensitive infrastructure, such as chemical labs. FAR briefly considers this idea, saying “for frontier AI development, sector-specific regulations can be valuable, but will likely leave a subset of the high severity and scale risks unaddressed.” But it does not study it further, resting on the assumption of an assumed “likely” subset of remaining risks to promote an approach which, as we’ve seen, could undo centuries of cultural, societal, and political development. If we are able to build advanced AI, we should expect that it could at least help us identify the sensitive infrastructure that needs hardening. If it’s possible to use such infrastructure to cause harm then it seems very likely that it can be identified — if AI can’t identify it, then it can’t use it. Now of course, actually dealing with an identified threat might not be straightforward; if it turns out, for instance, that a benchtop DNA printer could be used to produce a dangerous pathogen, then hardening all those devices is going to be a big job. But it’s a much smaller and less invasive job than restricting all the world’s computing devices. This leads us to another useful regulatory path: deployment disclosure. If you’re considering connecting an automated system which uses AI to any kind of sensitive infrastructure, then we should require disclosure of this fact. Furthermore, certain types of connection and infrastructure should require careful safety checks and auditing in advance. The path to centralization Better AI can be used to improve AI. This has already been seen many times, even in the earlier era of less capable and well-resourced algorithms. Google has used AI to improve how data centers use energy, to create better neural network architectures, and to create better methods for optimizing the parameters in those networks. 
Model outputs have been used to create the prompts used to train new models, to create the model answers for those prompts, and to explain the reasoning behind answers. As models get more powerful, researchers will find more ways to use them to improve the data, models, and training process. There is no reason to believe that we are anywhere near the limits of the technology, and there is no data which we can use to make definitive predictions about how far this can go, or what happens next. Those with access to the full models can build new models faster and better than those without. One reason is that they can fully utilize powerful features like fine-tuning, activations, and the ability to directly study and modify weights.7 One recent paper, for instance, found that fine-tuning allows models to solve challenging problems with orders of magnitude fewer parameters than foundation models. This kind of feedback loop results in centralization: the big companies get bigger, and other players can’t compete. The consequences are less competition, and as a result higher prices, less innovation, and lower safety (since there’s a single point of failure, and a larger profit motive which encourages risky behavior). There are other powerful forces towards centralization. Consider Google, for instance. Google has more data than anyone else on the planet. More data leads directly to better foundation models. Furthermore, as people use their AI services, they are getting more and more data about these interactions. They use AI to improve their products, making them more “sticky” for their users and encouraging more people to use them, resulting in them getting still more data, which further improves their models and the products based on them. They are also increasingly vertically integrated, so they have few powerful suppliers: they create their own AI chips (TPUs), run their own data centers, and develop their own software.
Regulation of frontier model development encourages greater centralization. Licensing in particular, an approach proposed in FAR, is a potent centralizing force. Licensing the development of frontier models requires that new entrants apply for permission before being allowed to develop a model as good as, or better than, the current state of the art. This makes it even harder to compete with entrenched players. And it opens up an extremely strong path to regulatory capture, since it gives an undemocratic licensing board the final say in who has access to build the most powerful technology on the planet. Such a body would be, as a result, potentially the most powerful group in the world.

Open source, and a new era of AI enlightenment

The alternative to craving the safety and certainty of control and centralization is to once again take the risk we took hundreds of years ago: the risk of believing in the power and good of humanity and society. Just as thinkers of the Enlightenment asked difficult questions like “What if everyone got an education? What if everyone got the vote?”, we should ask the question: “What if everyone got access to the full power of AI?” To be clear: asking such questions may not be popular. The counter-enlightenment was a powerful movement for a hundred years, pushing back against “the belief in progress, the rationality of all humans, liberal democracy, and the increasing secularization of society”. It relied on a key assumption, as expounded by French philosopher Joseph de Maistre, that “Man in general, if reduced to himself, is too wicked to be free.” We can see from the results of the Enlightenment that this premise is simply wrong. But it’s an idea that just won’t go away. Sociologists have for decades studied and documented “elite panic” — the tendency of elites to assume that regular people will respond badly to disasters and must therefore be controlled. But that’s wrong too.
In fact, it’s more than wrong, as Rebecca Solnit explains: “I see these moments of crisis as moments of popular power and positive social change. The major example in my book is Mexico City, where the ’85 earthquake prompted public disaffection with the one-party system and, therefore, the rebirth of civil society.” What does it look like to embrace the belief in progress and the rationality of all humans when we respond to the threat of AI misuse? One idea which many experts are now studying is that open source models may be the key. Models are just software — they are mathematical functions embodied as code. When we copy software, we don’t usually call it “proliferation” (as FAR does); that word is generally associated with nuclear weapons. When we copy software, we call it “installing”, or “deploying”, or “sharing”. Because software can be freely copied, it has inspired a huge open source movement which considers this sharing a moral good: when all can benefit, why restrict value to a few? This idea has been powerful. Today, nearly every website you use is running an open source web server (such as Apache), which in turn is installed on an open source operating system (generally Linux). Most programs are compiled with open source compilers, and written with open source editors. Open source documents like Wikipedia have been transformative. Initially these were seen as crazy ideas, and they had plenty of skeptics, but in the end they proved to be right. Quite simply, much of the world of computers and the internet that you use today would not exist without open source. What if the most powerful AI models were open source? There will still be Bad Guys looking to use them to hurt others or unjustly enrich themselves. But most people are not Bad Guys. Most people will use these models to create, and to protect.
How better to be safe than to have the massive diversity and expertise of human society at large doing its best to identify and respond to threats, with the full power of AI behind it? How much safer would you feel if the world’s top cyber-security, bio-weapons, and social engineering academics were studying AI safety with the full benefit of AI, and you could access and use all of their work yourself, compared to if only a handful of people at a for-profit company had full access to AI models? In order to gain the benefits of full model access, and to reduce the level of commercial control over what has previously been an open research community with a culture of sharing, the open-source community has recently stepped in and trained a number of quite capable language models. As of July 2023, the best of these are at a similar level to the second-tier, cheaper commercial models, but not as good as GPT-4 or Claude. They are rapidly increasing in capability, and are attracting increasing investment from wealthy donors, governments, universities, and companies that seek to avoid concentration of power and ensure access to high quality AI models. However, the safety guarantees proposed in FAR are incompatible with open source frontier models. FAR proposes that “it may be prudent to avoid potentially dangerous capabilities of frontier AI models being open sourced until safe deployment is demonstrably feasible”. But even if an open-source model is trained in exactly the same way, from exactly the same data, as a regulatorily-approved closed commercial model, it can still never provide the same safety guarantees. That’s because, as a general-purpose computing device, anybody could use it for anything they want — including fine-tuning it using new datasets and for new tasks. Open source is not a silver bullet; it still requires care, cooperation, and deep and careful study.
By making these systems available to all, we ensure that all of society can both benefit from their capabilities and work to understand and counter their potential harms. Stanford and Princeton’s top AI and policy groups teamed up to respond to the US government’s request for comment on AI accountability, stating that: “For foundation models to advance the public interest, their development and deployment should ensure transparency, support innovation, distribute power, and minimize harm… We argue open-source foundation models can achieve all four of these objectives, in part due to inherent merits of open-source (pro-transparency, pro-innovation, anti-concentration)”. Furthermore, they warn that: “If closed-source models cannot be examined by researchers and technologists, security vulnerabilities might not be identified before they cause harm… On the other hand, experts across domains can examine and analyze open-source models, which makes security vulnerabilities easier to find and address. In addition, restricting who can create FMs would reduce the diversity of capable FMs and may result in single points of failure in complex systems.” The idea that access to the best AI models is critical to studying AI safety is, in fact, fundamental to the origin story of two of the most advanced AI companies today: OpenAI and Anthropic. Many have expressed surprise that the executives of these companies have loudly warned of the potential existential risks of AI, yet are building those very models themselves. But there’s no conflict here: they’ve explained that they do this because they don’t believe it’s possible to properly understand and mitigate AI risks without access to the best available models. Access to open source models is at grave risk today. The European AI Act may effectively ban open source foundation models, based on principles similar to those in FAR.
Technology innovation policy analyst Alex Engler, in his article “The EU’s attempt to regulate open-source AI is counterproductive”, writes: “The Council’s attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of GPAI. Open-source AI models deliver tremendous societal value by challenging the domination of GPAI by large technology companies and enabling public knowledge about the function of AI.”

First, do no harm

FAR concludes that “Uncertainty about the optimal regulatory approach to address the challenges posed by frontier AI models should not impede immediate action”. But perhaps it should. Indeed, AI policy experts Patrick Grady and Daniel Castro recommend exactly this — don’t be in a hurry to take regulatory action: ‘The fears around new technologies follow a predictable trajectory called “the Tech Panic Cycle.” Fears increase, peak, then decline over time as the public becomes familiar with the technology and its benefits. Indeed, other previous “generative” technologies in the creative sector such as the printing press, the phonograph, and the Cinématographe followed this same course. But unlike today, policymakers were unlikely to do much to regulate and restrict these technologies. As the panic over generative AI enters its most volatile stage, policymakers should take a deep breath, recognize the predictable cycle we are in, and put any regulation efforts directly aimed at generative AI temporarily on hold.’ Instead, perhaps regulators should consider the medical guidance of Hippocrates: “do no harm”. Medical interventions can have side effects, and the cure can sometimes be worse than the disease. Some medicines may even damage the immune response, leaving a body too weakened to fight off infection. So too with regulatory interventions.
Not only can the centralization and regulatory capture impacts of “ensuring safety” cause direct harm to society, they can even result in decreased safety. If just one big organization holds the keys to vast technological power, we find ourselves in a fragile situation where the rest of society does not have access to the same power to protect ourselves. A fight for power could even be the trigger for the kind of AI misuse that leads to societal destruction. The impact of AI regulations will be nuanced, complex, and hard to predict. The balance between defending society and empowering society to defend itself is precariously delicate. Rushing to regulate seems unlikely to walk that tight-rope successfully. We have time. The combined capabilities of all of human society are enormous, and for AI to surpass that capability is a big task. Ted Sanders, an OpenAI technical expert who has won numerous technology forecasting competitions, along with Ari Allyn-Feuer, Director of AI at GSK, completed an in-depth 114-page analysis of the timeframes associated with AI development, concluding that “we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%”. Importantly, the more time passes, the more we learn: not just about the technology, but about how society responds to it. We should not rush to implement regulatory changes which put society on a dystopian path that may be impossible to get off. Concerns about the safety of advanced language models are not new. In early 2019 I wrote “Some thoughts on zero-day threats in AI, and OpenAI’s GPT-2”, a reaction to OpenAI’s controversial and (at the time) unusual decision not to release the weights of their new language model. In considering this decision, I pointed out that: The most in-depth analysis of this topic is the paper The Malicious Use of Artificial Intelligence.
The lead author of this paper now works at OpenAI, and was heavily involved in the decision around the model release. Let’s take a look at the recommendations of that paper: “The Malicious Use of Artificial Intelligence” was written by 26 authors from 14 institutions, spanning academia, civil society, and industry. The lead author is today the Head of Policy at OpenAI. It’s interesting to see how far OpenAI, as a co-creator of FAR, has moved from these original ideas. The four recommendations from the Malicious Use paper are full of humility — they recognise that effective responses to risks involve “proactively reaching out to relevant actors”, learning from “research areas with more mature methods for addressing dual-use concerns, such as computer security”, and working to “expand the range of stakeholders and domain experts involved in discussions”. The focus was not on centralization and control, but on outreach and cooperation. That the robot apocalypse may be coming is a striking and engaging idea. FAR warns that we must “guard against models potentially being situationally aware and deceptive”, linking to an article claiming that our current path “is likely to eventually lead to a full-blown AI takeover (i.e. a possibly violent uprising or coup by AI systems)”. It’s the kind of idea that can push us to do something, anything, that makes us feel safer. To push back against this reaction requires maturity and a cool head. The ancient Greeks taught us about the dangers of hubris: excessive pride, arrogance, or overconfidence. When we are over-confident that we know what the future has in store for us, we may well over-react and create the very future we try to avoid. What if, in our attempts to avoid an AI apocalypse, we centralize control of the world’s most powerful technology, dooming future society to a return to a feudal state in which the most valuable commodity, compute, is owned by an elite few?
We would be like King Oedipus, prophesied to kill his father and marry his mother, who ends up doing exactly that as a result of actions designed to avoid that fate. Or Phaethon, so confident in his ability to control the chariot of the sun that he avoids the middle path laid out by Helios, his father, and in the process nearly destroys Earth. “The Malicious Use of Artificial Intelligence” points towards a different approach, based on humility: one of consultation with experts across many fields, and cooperation with those impacted by technology, in an iterative process that learns from experience. If we did take their advice and learn from computer security experts, for instance, we would learn that a key idea from that field is that “security through obscurity” — that is, hiding secrets as a basis for safety and security — is ineffective and dangerous. Cyber-security experts Arvind Narayanan, director of Princeton’s Center for Information Technology Policy, and Sayash Kapoor, in a recent analysis, detailed five “major AI risks” that would be caused by licensing and similar regulations under which “only a handful of companies would be able to develop state-of-the-art AI”.

How did we get here?

Everyone I know who has spent time using tools like GPT-4 and Bard has been blown away by their capabilities — including me! Despite their many mistakes (aka “hallucinations”), they can provide all kinds of help on nearly any topic. I use them daily for everything from coding help to playtime ideas for my daughter. As FAR explains: “Foundation models, such as large language models (LLMs), are trained on large, broad corpora of natural language and other text (e.g., computer code), usually starting with the simple objective of predicting the next “token.” This relatively simple approach produces models with surprisingly broad capabilities.
These models thus possess more general-purpose functionality than many other classes of AI models”. It goes on to say: “In focusing on foundation models which could have dangerous, emergent capabilities, our definition of frontier AI excludes narrow models, even when these models could have sufficiently dangerous capabilities. For example, models optimizing for the toxicity of compounds or the virulence of pathogens could lead to intended (or at least foreseen) harms and thus may be more appropriately covered with more targeted regulation. Our definition focuses on models that could — rather than just those that do — possess dangerous capabilities”. Therefore, the authors propose “safety standards for responsible frontier AI development and deployment” and “empowering a supervisory authority to identify and sanction non-compliance; or by licensing the deployment and potentially the development of frontier AI”. They propose doing this in order to “ensure that” models “are harnessed in the public interest”. Let’s say these proposals are accepted and this regulation is created. What happens next? Well, there are two possibilities: (1) AI progress stalls, and these models never become dramatically more capable than the technology we have today; or (2) AI keeps advancing, and becomes by far the most powerful technology humanity has ever had. In the case of (1), there’s little more to discuss. The regulations proposed in FAR would, at worst, be unnecessary, and perhaps lead to some regulatory capture of a fairly valuable product space. That would be a shame, but we could live with it. But this isn’t the case that FAR’s proposals are designed to handle — for the risks of misuse of regular technology like that, we already have plenty of simple, well-understood approaches, generally based on liability for misuse (that is, if you do something bad using some technology, you get in trouble; the folks that made the technology don’t generally get in trouble too, unless they were negligent or otherwise clearly and directly contributed to the bad thing). Therefore we should focus on (2) — the case where AI turns out to be a very big deal indeed.
To be clear, no one is certain this is going to happen, but plenty of people who have studied AI for a long time think it’s a real possibility.

Humanity’s most powerful technology

We are now in the era of “general-purpose artificial intelligence” (GPAI), thanks to “universal” or “foundation” models such as OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. These models are general-purpose computing devices: they can answer (with varying degrees of success) nearly any question you can throw at them. As foundation models get more powerful, we should expect researchers to find more ways to use them to improve the data, models, and training process. Current models, dataset creation techniques, and training methods are all quite simple — the basic ideas fit in a few lines of code. There are a lot of fairly obvious paths to greatly improve them, and no reason to believe that we are anywhere near the limits of the technology. So we should expect to see increasingly fast cycles of technological development over the coming months and years. There is no data which we can use to make definitive predictions about how far this can go, or what happens next. Many researchers and AI company executives believe that there may be no practical limit. But these models are expensive to train. Thanks to technological advances, training a model of a given size is getting cheaper, but the models themselves keep getting bigger. GPT-4 may have cost around $100m to train. All the most powerful current models — GPT-4, Bard, and Claude — have been trained by large companies in the US (OpenAI, Google, and Anthropic respectively) and China.

Building together

There are already a great many regulatory initiatives in place, including the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s AI Risk Management Framework, and Biden’s Executive Order 14091 to protect Americans against algorithmic discrimination.
The AI community has also developed effective mechanisms for sharing important information, such as Datasheets for Datasets, Model Cards for Model Reporting, and Ecosystem Graphs. Regulation could require that datasets and models include information about how they were built or trained, to help users deploy them more effectively and safely. This is analogous to nutrition labels: whilst we don’t ban people from eating too much junk food, we endeavor to give them the information they need to make good choices. The proposed EU AI Act already includes requirements for exactly this kind of information. Whilst there is a lot of good work we can build on, there’s still much more to be done. The world of AI is moving fast, and we’re learning every day. Therefore, it’s important that we ensure the choices we make preserve optionality in the future. It’s far too early for us to pick a single path and decide to hurtle down it with unstoppable momentum. Instead, we need to be able, as a society, to respond rapidly and in an informed way to new opportunities and threats as they arise. That means involving a broad cross-section of experts from all relevant domains, along with members of impacted communities. The more we can build capacity in our policy making bodies, the better. Without a deep understanding of AI amongst decision makers, they have little choice but to defer to industry. But as Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center, said, “We need to keep CEOs away from AI regulation”: “Imagine the chief executive of JPMorgan explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection and set liquidity to loan ratios. He would be laughed out of the room. Angry constituents would point out how well self-regulation panned out in the global financial crisis. 
From big tobacco to big oil, we have learnt the hard way that businesses cannot set disinterested regulations. They are neither independent nor capable of creating countervailing powers to their own.” We should also be careful not to allow engaging and exciting sci-fi scenarios to distract us from immediate real harms. Aidan Gomez, a co-creator of the transformer neural network architecture, which powers all the top language models including GPT-4, warns: “There are real risks with this technology. There are reasons to fear this technology, and who uses it, and how. So, to spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace… I would really hope that the public knows some of the more fantastical stories about risk [are unfounded]. They’re distractions from the conversations that should be going on.”

The dislightenment

What if, faced with a new power, with uncertainty, with a threat to our safety, we withdraw to the certainty of centralization, of control, of limiting power to a select few? This is the Dislightenment: the roll-back of the principles that brought us the Age of Enlightenment. We would create a world of “haves” and “have-nots”. The “haves” (big companies, organized crime, governments, and everyone who convinces a friend or family member to get a copy of the weights for them, and everyone who accesses darknet sites where hackers distribute those weights, and everyone who copies them…) can build better and better models — models which can (according to FAR) be used for mass propaganda, bio and cyber threat development, or simply for the purpose of ensuring you beat all of your competition and monopolize the most strategic and profitable industries. The “have-nots” would provide little value to society, since they can only access AI through narrow portals which provide limited (but “safe”) applications.
The push for commercial control of AI capability is dangerous. Naomi Klein, who coined the term “shock doctrine” to describe “the brutal tactic of using the public’s disorientation following a collective shock… to push through radical pro-corporate measures”, is now warning that AI is “likely to become a fearsome tool of further dispossession and despoliation”. Once we begin down this path, it’s very hard to turn back. It may, indeed, be impossible. Technology policy experts Anja Kaspersen, Kobi Leins, and Wendell Wallach, in their article “Are We Automating the Banality and Radicality of Evil?”, point out that deploying bad solutions (such as poorly designed regulation) can take decades to undo, if the bad solution turns out to be profitable to some: “The rapid deployment of AI-based tools has strong parallels with that of leaded gasoline. Lead in gasoline solved a genuine problem—engine knocking. Thomas Midgley, the inventor of leaded gasoline, was aware of lead poisoning because he suffered from the disease. There were other, less harmful ways to solve the problem, which were developed only when legislators eventually stepped in to create the right incentives to counteract the enormous profits earned from selling leaded gasoline.” With centralization, we will create “haves” and “have-nots”, and the “haves” will have access to a technology that makes them vastly more powerful than everyone else. When massive power and wealth differentials are created, they are captured by those who most want power and wealth, and history tells us violence is the only way such differentials can be undone. As John F. Kennedy said, “Those who make peaceful revolution impossible will make violent revolution inevitable.” Perhaps, with the power of AI and the creation of the surveillance needed to maintain control, even violence would be an ineffective solution. If we do start in this direction, let’s do it with eyes open, understanding where it takes us.
The fragility of the Age of Enlightenment

Through most of human history, the future was scary. It was unsafe. It was unknown. And we responded in the most simple and obvious way: by collectively placing our trust in others more powerful than us to keep us safe. Most societies restricted dangerous tools like education and power to an elite few. But then something changed. A new idea took hold in the West. What if there is another way to be safe: to trust in the overall good of society at large, rather than put faith in a powerful elite? What if everyone had access to education? To the vote? To technology? This—though it would take a couple more centuries of progress for its promises to be fully realized—was the Age of Enlightenment. Now that so many of us live in liberal democracies, it’s easy to forget how fragile and rare this is. But we can see nations around the world now sliding into the arms of authoritarian leaders. As Hermann Göring said, “The people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked…” Let’s be clear: we are not being attacked. Now is not the time to give up the hard-won progress we’ve made towards equality and opportunity. No one can guarantee your safety, but together we can work to build a society, with AI, that works for all of us.

Appendix: Background

This document started out as a red team review of Frontier AI Regulation: Managing Emerging Risks to Public Safety. Although red-teaming isn’t common for policy proposals (it’s mainly used in computer security), it probably should be, since such proposals can have risks that are difficult to foresee without careful analysis.
Following the release of the Parliament Version of the EU AI Act (which included sweeping new regulation of foundation model development), along with other similar private regulatory proposals from other jurisdictions that I was asked to review, I decided to expand our analysis to cover regulation of model development more generally. I’ve discussed these issues during the development of this review with over 70 experts from the regulatory, policy, AI safety, AI capabilities, cyber-security, economics, and technology transition communities, and have looked at over 300 academic papers. Eric Ries and I recorded a number of expert interviews together, which we will be releasing in the coming weeks. Our view is that the most important foundation for society to successfully transition to an AI future is for all of society to be involved, engaged, and informed. Therefore, we are working to build a cross-disciplinary community resource, to help those working on responses to the potential opportunities and threats of advanced AI. This resource will be called “AI Answers”. The review you’re reading now is the first public artifact to come out of the development of this project. If you’re a policy maker or decision maker in this field, or do research in any area that you feel has results possibly useful to this field, we want to hear from you! Acknowledgments Eric Ries has been my close collaborator throughout the development of this article and I’m profoundly appreciative of his wisdom, patience, and tenacity. Many thanks for the detailed feedback from our kind reviewers: Percy Liang, Marietje Schaake, Jack Clark, Andrew Maynard, Vijay Sundaram, and Brian Christian. Particularly special thanks to Yo Shavit, one of the authors of FAR, who was very generous with his time in helping me strengthen this critique of his own paper! I’m also grateful for the many deep conversations with Andy Matuschak, whose thoughtful analysis was critical in developing the ideas in this article.
I’d also like to acknowledge Arvind Narayanan, Sayash Kapoor, Seth Lazar, and Rich Harang for the fascinating conversations that Eric and I had with them. Thank you to Jade Leung from OpenAI and Markus Anderljung from Governance.ai for agreeing to the review process and for providing pre-release versions of FAR for us to study. Footnotes Although to be fair to the authors of the paper — it’s not a problem I’ve seen mentioned or addressed anywhere.↩︎ As will happen if AI continues to develop in capability, without limit.↩︎ The cost of frontier models may continue to rise. Generative AI startup inflection.ai recently raised $1.3 billion, and plans to spend most of it on GPUs. But hundreds of companies could still afford to train a model even at that cost. (And even if they couldn’t, the implication is that theft then becomes the only way to compete. It doesn’t mean that models won’t proliferate.)↩︎ Although they are not discussed in FAR.↩︎ At least, in the case that AI turns out to be powerful enough that such regulation is justified in the first place.↩︎ This doesn’t mean that model development should be done without consideration of ethics or impact. Concepts like open source, responsible innovation, informed dialogue and democratic decision making are all an important part of model development. But it does mean we do not need to ensure safety at the point of development.↩︎ The only commercially available models that provide fine-tuning and activations, as of July 2023, are older, less capable models, and weights are not available for any major commercial model. OpenAI plans to provide some fine-tuning and activations features for GPT 4 down the track, but they will have had over a year’s head start on everyone else at that point. Regardless, without access to the weights, developers’ ability to fully customize and tune models remains limited.↩︎
========================================
[SOURCE: https://en.wikipedia.org/wiki/Semantics] | [TOKENS: 9248]
Contents Semantics Semantics is the study of linguistic meaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction between sense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts with syntax, which studies the rules that dictate how to create grammatically correct sentences, and pragmatics, which investigates how people use language in communication. Semantics, together with syntactics and pragmatics, is a part of semiotics. Lexical semantics is the branch of semantics that studies word meaning. It examines whether words have one or several meanings and in what lexical relations they stand to one another. Phrasal semantics studies the meaning of sentences by exploring the phenomenon of compositionality or how new meanings can be created by arranging words. Formal semantics relies on logic and mathematics to provide precise frameworks of the relation between language and meaning. Cognitive semantics examines meaning from a psychological perspective and assumes a close relation between language ability and the conceptual structures used to understand the world. Other branches of semantics include conceptual semantics, computational semantics, and cultural semantics. Theories of meaning are general explanations of the nature of meaning and how expressions are endowed with it. According to referential theories, the meaning of an expression is the part of reality to which it points. Ideational theories identify meaning with mental states like the ideas that an expression evokes in the minds of language users. According to causal theories, meaning is determined by causes and effects, which behaviorist semantics analyzes in terms of stimulus and response. 
Further theories of meaning include truth-conditional semantics, verificationist theories, the use theory, and inferentialist semantics. The study of semantic phenomena began during antiquity but was not recognized as an independent field of inquiry until the 19th century. Semantics is relevant to the fields of formal logic, computer science, and psychology. Definition and related fields Semantics is the study of meaning in languages. It is a systematic inquiry that examines what linguistic meaning is and how it arises. It investigates how expressions are built up from different layers of constituents, like morphemes, words, clauses, sentences, and texts, and how the meanings of the constituents affect one another. Semantics can focus on a specific language, like English, but in its widest sense, it investigates meaning structures relevant to all languages.[a][b] As a descriptive discipline, it aims to determine how meaning works without prescribing what meaning people should associate with particular expressions. Some of its key questions are "How do the meanings of words combine to create the meanings of sentences?", "How do meanings relate to the minds of language users, and to the things words refer to?", and "What is the connection between what a word means, and the contexts in which it is used?". The main disciplines engaged in semantics are linguistics, semiotics, and philosophy. Besides its meaning as a field of inquiry, semantics can also refer to theories within this field, like truth-conditional semantics, and to the meaning of particular expressions, like the semantics of the word fairy. As a field of inquiry, semantics has both an internal and an external side. The internal side is interested in the connection between words and the mental phenomena they evoke, like ideas and conceptual representations. The external side examines how words refer to objects in the world and under what conditions a sentence is true. 
Many related disciplines investigate language and meaning. Semantics contrasts with other subfields of linguistics focused on distinct aspects of language. Phonology studies the different types of sounds used in languages and how sounds are connected to form words while syntax examines the rules that dictate how to arrange words to create sentences. These divisions are reflected in the fact that it is possible to master some aspects of a language while lacking others, like when a person knows how to pronounce a word without knowing its meaning. As a subfield of semiotics, semantics has a more narrow focus on meaning in language while semiotics studies both linguistic and non-linguistic signs. Semiotics investigates additional topics like the meaning of non-verbal communication, conventional symbols, and natural signs independent of human interaction. Examples include nodding to signal agreement, stripes on a uniform signifying rank, and the presence of vultures indicating a nearby animal carcass. Semantics further contrasts with pragmatics, which is interested in how people use language in communication. An expression like "That's what I'm talking about" can mean many things depending on who says it and in what situation. Semantics is interested in the possible meanings of expressions: what they can and cannot mean in general. In this regard, it is sometimes defined as the study of context-independent meaning. Pragmatics examines which of these possible meanings is relevant in a particular case. In contrast to semantics, it is interested in actual performance rather than in the general linguistic competence underlying this performance. This includes the topic of additional meaning that can be inferred even though it is not literally expressed, like what it means if a speaker remains silent on a certain topic. A closely related distinction by the semiotician Charles W. 
Morris holds that semantics studies the relation between words and the world, pragmatics examines the relation between words and users, and syntax focuses on the relation between different words. Semantics is related to etymology, which studies how words and their meanings changed in the course of history. Another connected field is hermeneutics, which is the art or science of interpretation and is concerned with the right methodology of interpreting text in general and scripture in particular. Metasemantics examines the metaphysical foundations of meaning and aims to explain where it comes from or how it arises. The word semantics originated from the Ancient Greek adjective semantikos, meaning 'relating to signs', which is a derivative of sēmeion, the noun for 'sign'. It was initially used for medical symptoms and only later acquired its wider meaning regarding any type of sign, including linguistic signs. The word semantics entered the English language from the French term semantique, which the linguist Michel Bréal first introduced at the end of the 19th century. Basic concepts Semantics studies meaning in language, which is limited to the meaning of linguistic expressions. It concerns how signs are interpreted and what information they contain. An example is the meaning of words provided in dictionary definitions by giving synonymous expressions or paraphrases, like defining the meaning of the term ram as adult male sheep. There are many forms of non-linguistic meaning that are not examined by semantics. Actions and policies can have meaning in relation to the goal they serve. Fields like religion and spirituality are interested in the meaning of life, which is about finding a purpose in life or the significance of existence in general. Linguistic meaning can be analyzed on different levels. Word meaning is studied by lexical semantics and investigates the denotation of individual words. 
It is often related to concepts of entities, like how the word dog is associated with the concept of the four-legged domestic animal. Sentence meaning falls into the field of phrasal semantics and concerns the denotation of full sentences. It usually expresses a concept applying to a type of situation, as in the sentence "the dog has ruined my blue skirt". The meaning of a sentence is often referred to as a proposition. Different sentences can express the same proposition, like the English sentence "the tree is green" and the German sentence "der Baum ist grün". Utterance meaning is studied by pragmatics and is about the meaning of an expression on a particular occasion. Sentence meaning and utterance meaning come apart in cases where expressions are used in a non-literal way, as is often the case with irony. Semantics is primarily interested in the public meaning that expressions have, like the meaning found in general dictionary definitions. Speaker meaning, by contrast, is the private or subjective meaning that individuals associate with expressions. It can diverge from the literal meaning, like when a person associates the word needle with pain or drugs. Meaning is often analyzed in terms of sense and reference, also referred to as intension and extension or connotation and denotation. The referent of an expression is the object to which the expression points. The sense of an expression is the way in which it refers to that object or how the object is interpreted. For example, the expressions morning star and evening star refer to the same planet, just like the expressions 2 + 2 and 3 + 1 refer to the same number. The meanings of these expressions differ not on the level of reference but on the level of sense. Sense is sometimes understood as a mental phenomenon that helps people identify the objects to which an expression refers. Some semanticists focus primarily on sense or primarily on reference in their analysis of meaning. 
To grasp the full meaning of an expression, it is usually necessary to understand both to what entities in the world it refers and how it describes them. The distinction between sense and reference can explain identity statements, which can be used to show how two expressions with a different sense have the same referent. For instance, the sentence "the morning star is the evening star" is informative and people can learn something from it. The sentence "the morning star is the morning star", by contrast, is an uninformative tautology since the expressions are identical not only on the level of reference but also on the level of sense. Compositionality is a key aspect of how languages construct meaning. It is the idea that the meaning of a complex expression is a function of the meanings of its parts. It is possible to understand the meaning of the sentence "Zuzana owns a dog" by understanding what the words Zuzana, owns, a and dog mean and how they are combined. In this regard, the meaning of complex expressions like sentences is different from word meaning since it is normally not possible to deduce what a word means by looking at its letters and one needs to consult a dictionary instead. Compositionality is often used to explain how people can formulate and understand an almost infinite number of meanings even though the amount of words and cognitive resources is finite. Many sentences that people read are sentences that they have never seen before and they are nonetheless able to understand them. When interpreted in a strong sense, the principle of compositionality states that the meaning of a complex expression is not just affected by its parts and how they are combined but fully determined this way. It is controversial whether this claim is correct or whether additional aspects influence meaning. 
For example, context may affect the meaning of expressions; idioms like "kick the bucket" carry figurative or non-literal meanings that are not directly reducible to the meanings of their parts. Truth is a property of statements that accurately present the world and true statements are in accord with reality. Whether a statement is true usually depends on the relation between the statement and the rest of the world. The truth conditions of a statement are the way the world needs to be for the statement to be true. For example, it belongs to the truth conditions of the sentence "it is raining outside" that raindrops are falling from the sky. The sentence is true if it is used in a situation in which the truth conditions are fulfilled, i.e., if there is actually rain outside. Truth conditions play a central role in semantics and some theories rely exclusively on truth conditions to analyze meaning. To understand a statement usually implies that one has an idea about the conditions under which it would be true. This can happen even if one does not know whether the conditions are fulfilled. The semiotic triangle, also called the triangle of meaning, is a model used to explain the relation between language, language users, and the world, represented in the model as Symbol, Thought or Reference, and Referent. The symbol is a linguistic signifier, either in its spoken or written form. The central idea of the model is that there is no direct relation between a linguistic expression and what it refers to, as was assumed by earlier dyadic models. This is expressed in the diagram by the dotted line between symbol and referent. The model holds instead that the relation between the two is mediated through a third component. For example, the term apple stands for a type of fruit but there is no direct connection between this string of letters and the corresponding physical object. The relation is only established indirectly through the mind of the language user. 
When they see the symbol, it evokes a mental image or a concept, which establishes the connection to the physical object. This process is only possible if the language user has learned the meaning of the symbol beforehand. The meaning of a specific symbol is governed by the conventions of a particular language. The same symbol may refer to one object in one language, to another object in a different language, and to no object in another language. Many other concepts are used to describe semantic phenomena. The semantic role of an expression is the function it fulfills in a sentence. In the sentence "the boy kicked the ball", the boy has the role of the agent who performs an action. The ball is the theme or patient of this action as something that does not act itself but is involved in or affected by the action. The same entity can be both agent and patient, like when someone cuts themselves. An entity has the semantic role of an instrument if it is used to perform the action; for instance, when cutting something with a knife, the knife is the instrument. For some sentences, no action is described but an experience takes place, like when a girl sees a bird. In this case, the girl has the role of the experiencer. Other common semantic roles are location, source, goal, beneficiary, and stimulus. Lexical relations describe how words stand to one another. Two words are synonyms if they share the same or a very similar meaning, like car and automobile or buy and purchase. Antonyms have opposite meanings, such as the contrast between alive and dead or fast and slow.[c] One term is a hyponym of another term if the meaning of the first term is included in the meaning of the second term. For example, ant is a hyponym of insect. A prototype is a hyponym that has characteristic features of the type it belongs to. A robin is a prototype of a bird but a penguin is not.
Two words with the same pronunciation are homophones like flour and flower, while two words with the same spelling are homonyms, like a bank of a river in contrast to a bank as a financial institution.[d] Hyponymy is closely related to meronymy, which describes the relation between part and whole. For instance, wheel is a meronym of car. An expression is ambiguous if it has more than one possible meaning. In some cases, it is possible to disambiguate them to discern the intended meaning. The term polysemy is used if the different meanings are closely related to one another, like the meanings of the word head, which can refer to the topmost part of the human body or the top-ranking person in an organization. The meaning of words can often be subdivided into meaning components called semantic features. The word horse has the semantic feature animate but lacks the semantic feature human. It may not always be possible to fully reconstruct the meaning of a word by identifying all its semantic features. A semantic or lexical field is a group of words that are all related to the same activity or subject. For instance, the semantic field of cooking includes words like bake, boil, spice, and pan. The context of an expression refers to the situation or circumstances in which it is used and includes time, location, speaker, and audience. It also encompasses other passages in a text that come before and after it. Context affects the meaning of various expressions, like the deictic expression here and the anaphoric expression she. A syntactic environment is extensional or transparent if it is always possible to exchange expressions with the same reference without affecting the truth value of the sentence. For example, the environment of the sentence "the number 8 is even" is extensional because replacing the expression "the number 8" with "the number of planets in the Solar System" does not change its truth value. 
For intensional or opaque contexts, this type of substitution is not always possible. For instance, the embedded clause in "Paco believes that the number 8 is even" is intensional since Paco may not know that the number of planets in the Solar System is 8. Semanticists commonly distinguish the language they study, called the object language, from the language they use to express their findings, called the metalanguage. When a professor uses Japanese to teach their student how to interpret the language of first-order logic, the language of first-order logic is the object language and Japanese is the metalanguage. The same language may occupy the role of object language and metalanguage at the same time. This is the case in monolingual English dictionaries, in which both the entry term belonging to the object language and the definition text belonging to the metalanguage are taken from the English language. Branches Lexical semantics is the sub-field of semantics that studies word meaning. It examines semantic aspects of individual words and the vocabulary as a whole. This includes the study of lexical relations between words, such as whether two terms are synonyms or antonyms. Lexical semantics categorizes words based on semantic features they share and groups them into semantic fields unified by a common subject. This information is used to create taxonomies to organize lexical knowledge, for example, by distinguishing between physical and abstract entities and subdividing physical entities into stuff and individuated entities. Further topics of interest are polysemy, ambiguity, and vagueness. Lexical semantics is sometimes divided into two complementary approaches: semasiology and onomasiology. Semasiology starts from words and examines what their meaning is. It is interested in whether words have one or several meanings and how those meanings are related to one another. Instead of going from word to meaning, onomasiology goes from meaning to word.
It starts with a concept and examines what names this concept has or how it can be expressed in a particular language. Some semanticists also include the study of lexical units other than words in the field of lexical semantics. Compound expressions like being under the weather have a non-literal meaning that acts as a unit and is not a direct function of its parts. Another topic concerns the meaning of morphemes that make up words, for instance, how negative prefixes like in- and dis- affect the meaning of the words they are part of, as in inanimate and dishonest. Phrasal semantics studies the meaning of sentences. It relies on the principle of compositionality to explore how the meaning of complex expressions arises from the combination of their parts.[e] The different parts can be analyzed as subject, predicate, or argument. The subject of a sentence usually refers to a specific entity while the predicate describes a feature of the subject or an event in which the subject participates. Arguments provide additional information to complete the predicate. For example, in the sentence "Mary hit the ball", Mary is the subject, hit is the predicate, and the ball is an argument. A more fine-grained categorization distinguishes between different semantic roles of words, such as agent, patient, theme, location, source, and goal. Verbs usually function as predicates and often help to establish connections between different expressions to form a more complex meaning structure. In the expression "Beethoven likes Schubert", the verb like connects a liker to the object of their liking. Other sentence parts modify meaning rather than form new connections. For instance, the adjective red modifies the color of another entity in the expression red car. A further compositional device is variable binding, which is used to determine the reference of a term. For example, the last part of the expression "the woman who likes Beethoven" specifies which woman is meant. 
Parse trees can be used to show the underlying hierarchy employed to combine the different parts. Various grammatical devices, like the gerund form, also contribute to meaning and are studied by grammatical semantics. Formal semantics uses formal tools from logic and mathematics to analyze meaning in natural languages.[f] It aims to develop precise logical formalisms to clarify the relation between expressions and their denotation. One of its key tasks is to provide frameworks of how language represents the world, for example, using ontological models to show how linguistic expressions map to the entities of that model. A common idea is that words refer to individual objects or groups of objects while sentences relate to events and states. Sentences are mapped to a truth value based on whether their description of the world is in correspondence with its ontological model. Formal semantics further examines how to use formal mechanisms to represent linguistic phenomena such as quantification, intensionality, noun phrases, plurals, mass terms, tense, and modality. Montague semantics is an early and influential theory in formal semantics that provides a detailed analysis of how the English language can be represented using mathematical logic. It relies on higher-order logic, lambda calculus, and type theory to show how meaning is created through the combination of expressions belonging to different syntactic categories. Dynamic semantics is a subfield of formal semantics that focuses on how information grows over time. According to it, "meaning is context change potential": the meaning of a sentence is not given by the information it contains but by the information change it brings about relative to a context. Cognitive semantics studies the problem of meaning from a psychological perspective or how the mind of the language user affects meaning. 
As a subdiscipline of cognitive linguistics, it sees language as a wide cognitive ability that is closely related to the conceptual structures used to understand and represent the world.[g] Cognitive semanticists do not draw a sharp distinction between linguistic knowledge and knowledge of the world and see them instead as interrelated phenomena. They study how the interaction between language and human cognition affects the conceptual organization in very general domains like space, time, causation, and action. The contrast between profile and base is sometimes used to articulate the underlying knowledge structure. The profile of a linguistic expression is the aspect of the knowledge structure that it brings to the foreground while the base is the background that provides the context of this aspect without being at the center of attention. For example, the profile of the word hypotenuse is a straight line while the base is a right-angled triangle of which the hypotenuse forms a part.[h] Cognitive semantics further compares the conceptual patterns and linguistic typologies across languages and considers to what extent the cognitive conceptual structures of humans are universal or relative to their linguistic background. Another research topic concerns the psychological processes involved in the application of grammar. Other investigated phenomena include categorization, which is understood as a cognitive heuristic to avoid information overload by regarding different entities in the same way, and embodiment, which concerns how the language user's bodily experience affects the meaning of expressions. Frame semantics is an important subfield of cognitive semantics. Its central idea is that the meaning of terms cannot be understood in isolation from each other but needs to be analyzed on the background of the conceptual structures they depend on. These structures are made explicit in terms of semantic frames. 
For example, words like bride, groom, and honeymoon evoke in the mind the frame of marriage. Conceptual semantics shares with cognitive semantics the idea of studying linguistic meaning from a psychological perspective by examining how humans conceptualize and experience the world. It holds that meaning is not about the objects to which expressions refer but about the cognitive structure of human concepts that connect thought, perception, and action. Conceptual semantics differs from cognitive semantics by introducing a strict distinction between meaning and syntax and by relying on various formal devices to explore the relation between meaning and cognition. Computational semantics examines how the meaning of natural language expressions can be represented and processed on computers. It often relies on the insights of formal semantics and applies them to problems that can be computationally solved. Some of its key problems include computing the meaning of complex expressions by analyzing their parts, handling ambiguity, vagueness, and context-dependence, and using the extracted information in automatic reasoning. It forms part of computational linguistics, artificial intelligence, and cognitive science. Its applications include machine learning and machine translation. Cultural semantics studies the relation between linguistic meaning and culture. It compares conceptual structures in different languages and is interested in how meanings evolve and change because of cultural phenomena associated with politics, religion, and customs. For example, address practices encode cultural values and social hierarchies, as in the difference of politeness of expressions like tu and usted in Spanish or du and Sie in German in contrast to English, which lacks these distinctions and uses the pronoun you in either case. Closely related fields are intercultural semantics, cross-cultural semantics, and comparative semantics. 
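The model-theoretic, compositional approach described under formal and computational semantics can be sketched in a few lines of code. The following toy evaluator is purely illustrative (the mini-lexicon, the model, and the single sentence pattern are invented for this example and do not correspond to any standard formalism): word meanings are entities, sets, or relations in a small model, and the truth value of a sentence like "Zuzana owns a dog" is computed from the meanings of its parts.

```python
# A toy model: a domain of individuals and an interpretation function
# assigning each word a denotation in that domain. All names and facts
# here are invented for illustration.
entities = {"zuzana", "rex", "felix"}  # the domain of individuals
denotation = {
    "Zuzana": "zuzana",            # proper name -> an individual
    "dog":    {"rex"},             # common noun -> a set of individuals
    "cat":    {"felix"},
    "owns":   {("zuzana", "rex")}, # transitive verb -> a set of pairs
}

def is_true(subject: str, verb: str, obj_noun: str) -> bool:
    """Evaluate a sentence of the form '<Name> <verb> a <noun>'.

    Compositionality in miniature: the sentence is true if the subject's
    referent stands in the verb's relation to at least one member of the
    noun's denotation (the existential reading of 'a').
    """
    subj = denotation[subject]
    relation = denotation[verb]
    noun_set = denotation[obj_noun]
    return any((subj, obj) in relation for obj in noun_set)

print(is_true("Zuzana", "owns", "dog"))  # "Zuzana owns a dog" -> True
print(is_true("Zuzana", "owns", "cat"))  # "Zuzana owns a cat" -> False
```

The key design point mirrors the principle of compositionality: the evaluator never stores sentence meanings directly, only word denotations and a rule for combining them, which is why it can assign truth values to sentences it has never seen.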
Pragmatic semantics studies how the meaning of an expression is shaped by the situation in which it is used. It is based on the idea that communicative meaning is usually context-sensitive and depends on who participates in the exchange, what information they share, and what their intentions and background assumptions are. It focuses on communicative actions, of which linguistic expressions only form one part. Some theorists include these topics within the scope of semantics while others consider them part of the distinct discipline of pragmatics. Theories of meaning Theories of meaning explain what meaning is, what meaning an expression has, and how the relation between expression and meaning is established. Referential theories state that the meaning of an expression is the entity to which it points. The meaning of singular terms like names is the individual to which they refer. For example, the meaning of the name George Washington is the person with this name. General terms refer not to a single entity but to the set of objects to which this term applies. In this regard, the meaning of the term cat is the set of all cats. Similarly, verbs usually refer to classes of actions or events and adjectives refer to properties of individuals and events. Simple referential theories face problems for meaningful expressions that have no clear referent. Names like Pegasus and Santa Claus have meaning even though they do not point to existing entities. Other difficulties concern cases in which different expressions are about the same entity. For instance, the expressions Roger Bannister and the first man to run a four-minute mile refer to the same person but do not mean exactly the same thing. This is particularly relevant when talking about beliefs since a person may understand both expressions without knowing that they point to the same entity. A further problem is given by expressions whose meaning depends on the context, like the deictic terms here and I. 
To avoid these problems, referential theories often introduce additional devices. Some identify meaning not directly with objects but with functions that point to objects. This additional level has the advantage of taking the context of an expression into account since the same expression may point to one object in one context and to another object in a different context. For example, the reference of the word here depends on the location in which it is used. A closely related approach is possible world semantics, which allows expressions to refer not only to entities in the actual world but also to entities in other possible worlds.[i] According to this view, expressions like the first man to run a four-minute mile refer to different persons in different worlds. This view can also be used to analyze sentences that talk about what is possible or what is necessary: possibility is what is true in some possible worlds while necessity is what is true in all possible worlds. Ideational theories, also called mentalist theories, are not primarily interested in the reference of expressions and instead explain meaning in terms of the mental states of language users. One historically influential approach articulated by John Locke holds that expressions stand for ideas in the speaker's mind. According to this view, the meaning of the word dog is the idea that people have of dogs. Language is seen as a medium used to transfer ideas from the speaker to the audience. After having learned the same meaning of signs, the speaker can produce a sign that corresponds to the idea in their mind and the perception of this sign evokes the same idea in the mind of the audience. A closely related theory focuses not directly on ideas but on intentions. This view is particularly associated with Paul Grice, who observed that people usually communicate to cause some reaction in their audience. He held that the meaning of an expression is given by the intended reaction. 
This means that communication is not just about decoding what the speaker literally said but requires an understanding of their intention or why they said it. For example, telling someone looking for petrol that "there is a garage around the corner" has the meaning that petrol can be obtained there because of the speaker's intention to help. This goes beyond the literal meaning, which has no explicit connection to petrol. Causal theories hold that the meaning of an expression depends on the causes and effects it has. According to behaviorist semantics, also referred to as stimulus-response theory, the meaning of an expression is given by the situation that prompts the speaker to use it and the response it provokes in the audience. For instance, the meaning of yelling "Fire!" is given by the presence of an uncontrolled fire and attempts to control it or seek safety. Behaviorist semantics relies on the idea that learning a language consists in adopting behavioral patterns in the form of stimulus-response pairs. One of its key motivations is to avoid private mental entities and define meaning instead in terms of publicly observable language behavior. Another causal theory focuses on the meaning of names and holds that a naming event is required to establish the link between name and named entity. This naming event acts as a form of baptism that establishes the first link of a causal chain in which all subsequent uses of the name participate. According to this view, the name Plato refers to an ancient Greek philosopher because, at some point, he was originally named this way and people kept using this name to refer to him. This view was originally formulated by Saul Kripke to apply to names only but has been extended to cover other types of speech as well. Truth-conditional semantics analyzes the meaning of sentences in terms of their truth conditions. 
According to this view, to understand a sentence means to know what the world needs to be like for the sentence to be true. Truth conditions can themselves be expressed through possible worlds. For example, the sentence "Hillary Clinton won the 2016 American presidential election" is false in the actual world but there are some possible worlds in which it is true. The extension of a sentence can be interpreted as its truth value while its intension is the set of all possible worlds in which it is true. Truth-conditional semantics is closely related to verificationist theories, which introduce the additional idea that there should be some kind of verification procedure to assess whether a sentence is true. They state that the meaning of a sentence consists in the method to verify it or in the circumstances that justify it. For instance, scientific claims often make predictions, which can be used to confirm or disconfirm them using observation. According to verificationism, sentences that can neither be verified nor falsified are meaningless. The use theory states that the meaning of an expression is given by the way it is utilized. This view was first introduced by Ludwig Wittgenstein, who understood language as a collection of language games. The meaning of expressions depends on how they are used inside a game and the same expression may have different meanings in different games. Some versions of this theory identify meaning directly with patterns of regular use. Others focus on social norms and conventions by additionally taking into account whether a certain use is considered appropriate in a given society. Inferentialist semantics, also called conceptual role semantics, holds that the meaning of an expression is given by the role it plays in the premises and conclusions of good inferences. For example, one can infer from "x is a male sibling" that "x is a brother" and one can infer from "x is a brother" that "x has parents". 
According to inferentialist semantics, the meaning of the word brother is determined by these and all similar inferences that can be drawn. History Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic.[j] In ancient Greece, Plato (427–347 BCE) explored the relation between names and things in his dialogue Cratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users. The book On Interpretation by Aristotle (384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts. The Stoics incorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions. In ancient India, the orthodox school of Nyaya held that all names refer to real objects. It explored how words lead to an understanding of the thing meant and what consequence this relation has to the creation of knowledge. Philosophers of the orthodox school of Mīmāṃsā discussed the relation between the meanings of individual words and full sentences while considering which one is more basic. 
The book Vākyapadīya by Bhartṛhari (4th–5th century CE) distinguished between different types of words and considered how they can carry different meanings depending on how they are used. In ancient China, the Mohists argued that names play a key role in making distinctions to guide moral behavior. They inspired the School of Names, which explored the relation between names and entities while examining how names are required to identify and judge entities. In the Middle Ages, Augustine of Hippo (354–430) developed a general conception of signs as entities that stand for other entities and convey them to the intellect. He was the first to introduce the distinction between natural and linguistic signs as different types belonging to a common genus. Boethius (480–528) wrote a translation of and various comments on Aristotle's book On Interpretation, which popularized its main ideas and inspired reflections on semantic phenomena in the scholastic tradition. An innovation in the semantics of Peter Abelard (1079–1142) was his interest in propositions or the meaning of sentences in contrast to the focus on the meaning of individual words by many of his predecessors. He further explored the nature of universals, which he understood as mere semantic phenomena of common names caused by mental abstractions that do not refer to any entities. In the Arabic tradition, Ibn Faris (920–1004) identified meaning with the intention of the speaker while Abu Mansur al-Azhari (895–980) held that meaning resides directly in speech and needs to be extracted through interpretation. An important topic towards the end of the Middle Ages was the distinction between categorematic and syncategorematic terms. Categorematic terms have an independent meaning and refer to some part of reality, like horse and Socrates. 
Syncategorematic terms lack independent meaning and fulfill other semantic functions, such as modifying or quantifying the meaning of other expressions, like the words some, not, and necessarily. An early version of the causal theory of meaning was proposed by Roger Bacon (c. 1219/20 – c. 1292), who held that things get names similar to how people get names through some kind of initial baptism. His ideas inspired the tradition of the speculative grammarians, who proposed that there are certain universal structures found in all languages. They arrived at this conclusion by drawing an analogy between the modes of signification on the level of language, the modes of understanding on the level of mind, and the modes of being on the level of reality. In the early modern period, Thomas Hobbes (1588–1679) distinguished between marks, which people use privately to recall their own thoughts, and signs, which are used publicly to communicate their ideas to others. In their Port-Royal Logic, Antoine Arnauld (1612–1694) and Pierre Nicole (1625–1695) developed an early precursor of the distinction between intension and extension. The Essay Concerning Human Understanding by John Locke (1632–1704) presented an influential version of the ideational theory of meaning, according to which words stand for ideas and help people communicate by transferring ideas from one mind to another. Gottfried Wilhelm Leibniz (1646–1716) understood language as the mirror of thought and tried to conceive the outlines of a universal formal language to express scientific and philosophical truths. This attempt inspired theorists Christian Wolff (1679–1754), Georg Bernhard Bilfinger (1693–1750), and Johann Heinrich Lambert (1728–1777) to develop the idea of a general science of sign systems. Étienne Bonnot de Condillac (1715–1780) accepted and further developed Leibniz's idea of the linguistic nature of thought. 
Against Locke, he held that language is involved in the creation of ideas and is not merely a medium to communicate them. In the 19th century, semantics emerged and solidified as an independent field of inquiry. Christian Karl Reisig (1792–1829) is sometimes credited as the father of semantics since he clarified its concept and scope while also making various contributions to its key ideas. Michel Bréal (1832–1915) followed him in providing a broad conception of the field, for which he coined the French term sémantique. John Stuart Mill (1806–1873) gave great importance to the role of names to refer to things. He distinguished between the connotation and denotation of names and held that propositions are formed by combining names. Charles Sanders Peirce (1839–1914) conceived semiotics as a general theory of signs with several subdisciplines, which were later identified by Charles W. Morris (1901–1979) as syntactics, semantics, and pragmatics. In his pragmatist approach to semantics, Peirce held that the meaning of conceptions consists in the entirety of their practical consequences. The philosophy of Gottlob Frege (1848–1925) contributed to semantics on many different levels. Frege first introduced the distinction between sense and reference, and his development of predicate logic and the principle of compositionality formed the foundation of many subsequent developments in formal semantics. Edmund Husserl (1859–1938) explored meaning from a phenomenological perspective by considering the mental acts that endow expressions with meaning. He held that meaning always implies reference to an object and expressions that lack a referent, like green is or, are meaningless. In the 20th century, Alfred Tarski (1901–1983) defined truth in formal languages through his semantic theory of truth, which was influential in the development of truth-conditional semantics by Donald Davidson (1917–2003). 
Tarski's student Richard Montague (1930–1971) formulated a complex formal framework of the semantics of the English language, which was responsible for establishing formal semantics as a major area of research. According to structural semantics,[k] which was inspired by the structuralist philosophy of Ferdinand de Saussure (1857–1913), language is a complex network of structural relations and the meanings of words are not fixed individually but depend on their position within this network. The theory of general semantics was developed by Alfred Korzybski (1879–1950) as an inquiry into how language represents reality and affects human thought. The contributions of George Lakoff (born 1941) and Ronald Langacker (born 1942) provided the foundation of cognitive semantics. Charles J. Fillmore (1929–2014) developed frame semantics as a major approach in this area. The closely related field of conceptual semantics was inaugurated by Ray Jackendoff (born 1945). In various disciplines Logicians study correct reasoning and often develop formal languages to express arguments and assess their correctness. One part of this process is to provide a semantics for a formal language to precisely define what its terms mean. A semantics of a formal language is a set of rules, usually expressed as a mathematical function, that assigns meanings to formal language expressions. For example, the language of first-order logic uses lowercase letters for individual constants and uppercase letters for predicates. To express the sentence "Bertie is a dog", the formula D(b) can be used, where b is an individual constant for Bertie and D is a predicate for dog. Classical model-theoretic semantics assigns meaning to these terms by defining an interpretation function that maps individual constants to specific objects and predicates to sets of objects or tuples. 
The function maps b to Bertie and D to the set of all dogs. This way, it is possible to calculate the truth value of the sentence: it is true if Bertie is a member of the set of dogs and false otherwise. Formal logic aims to determine whether arguments are deductively valid, that is, whether the premises entail the conclusion. Entailment can be defined in terms of syntax or in terms of semantics. Syntactic entailment, expressed with the symbol ⊢, relies on rules of inference, which can be understood as procedures to transform premises and arrive at a conclusion. These procedures only take the logical form of the premises on the level of syntax into account and ignore what meaning they express. Semantic entailment, expressed with the symbol ⊨, looks at the meaning of the premises, in particular, at their truth value. A conclusion follows semantically from a set of premises if the truth of the premises ensures the truth of the conclusion, that is, if any semantic interpretation function that assigns the premises the value true also assigns the conclusion the value true. In computer science, the semantics of a program is how it behaves when a computer runs it. Semantics contrasts with syntax, which is the particular form in which instructions are expressed. The same behavior can usually be described with different forms of syntax. In JavaScript, this is the case for the commands i += 1 and i = i + 1, which are syntactically different expressions to increase the value of the variable i by one. This difference is also reflected in different programming languages since they rely on different syntax but can usually be employed to create programs with the same behavior on the semantic level. Static semantics focuses on semantic aspects that affect the compilation of a program. 
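The model-theoretic interpretation of "Bertie is a dog" described above can be sketched in a few lines. The domain and the denotations below are invented for illustration; they stand in for the mathematical interpretation function of a real model:

```python
# A miniature model for first-order logic: a domain of objects plus an
# interpretation mapping constants to objects and predicates to sets.
domain = {"Bertie", "Felix", "Rex"}

interpretation = {
    "b": "Bertie",            # individual constant for Bertie
    "f": "Felix",             # a second constant, for contrast
    "D": {"Bertie", "Rex"},   # the predicate "dog" denotes a set of objects
}

def evaluate_atomic(predicate, constant):
    """An atomic formula P(c) is true iff the referent of c is a
    member of the set that P denotes in this model."""
    return interpretation[constant] in interpretation[predicate]

# D(b): "Bertie is a dog" comes out true, since Bertie is in the dog set.
truth_value = evaluate_atomic("D", "b")
```

Semantic entailment can then be checked against such models: a conclusion follows from premises if every interpretation that makes the premises true also makes the conclusion true.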
In particular, it is concerned with detecting errors of syntactically correct programs, such as type errors, which arise when an operation receives an incompatible data type. This is the case, for instance, if a function performing a numerical calculation is given a string instead of a number as an argument. Dynamic semantics focuses on the run time behavior of programs, that is, what happens during the execution of instructions. The main approaches to dynamic semantics are denotational, axiomatic, and operational semantics. Denotational semantics relies on mathematical formalisms to describe the effects of each element of the code. Axiomatic semantics uses deductive logic to analyze which conditions must be in place before and after the execution of a program. Operational semantics interprets the execution of a program as a series of steps, each involving the transition from one state to another state. Psychological semantics examines psychological aspects of meaning. It is concerned with how meaning is represented on a cognitive level and what mental processes are involved in understanding and producing language. It further investigates how meaning interacts with other mental processes, such as the relation between language and perceptual experience.[l] Other issues concern how people learn new words and relate them to familiar things and concepts, how they infer the meaning of compound expressions they have never heard before, how they resolve ambiguous expressions, and how semantic illusions lead them to misinterpret sentences. One key topic is semantic memory, which is a form of general knowledge of meaning that includes the knowledge of language, concepts, and facts. It contrasts with episodic memory, which records events that a person experienced in their life. The comprehension of language relies on semantic memory and the information it carries about word meanings. 
According to a common view, word meanings are stored and processed in relation to their semantic features. The feature comparison model states that sentences like "a robin is a bird" are assessed on a psychological level by comparing the semantic features of the word robin with the semantic features of the word bird. The assessment process is fast if their semantic features are similar, which is the case if the example is a prototype of the general category. For atypical examples, as in the sentence "a penguin is a bird", there is less overlap in the semantic features and the psychological process is significantly slower. 
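The feature comparison model can be given a rough computational sketch. The feature sets and the overlap measure below are invented for illustration and are not drawn from the psychological literature:

```python
# Toy feature comparison: category membership is assessed by comparing
# the semantic features of two words. Feature sets are illustrative only.
features = {
    "bird":    {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "robin":   {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "penguin": {"has_feathers", "lays_eggs", "swims", "large"},
}

def overlap(word, category):
    """Share of the category's features that the word also has
    (a crude stand-in for featural similarity)."""
    shared = features[word] & features[category]
    return len(shared) / len(features[category])

# A prototype like "robin" overlaps fully with "bird"; an atypical
# member like "penguin" overlaps less, mirroring the slower
# verification times reported for such sentences.
robin_similarity = overlap("robin", "bird")      # 1.0
penguin_similarity = overlap("penguin", "bird")  # 0.4
```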
========================================
[SOURCE: https://en.wikipedia.org/wiki/Automatic_programming] | [TOKENS: 739]
Automatic programming In computer science, automatic programming is a type of computer programming in which some mechanism generates a computer program, to allow human programmers to write the code at a higher abstraction level. There has been little agreement on the precise definition of automatic programming, mostly because its meaning has changed over time. David Parnas, tracing the history of "automatic programming" in published research, noted that in the 1940s it described automation of the manual process of punching paper tape. Later it referred to translation of high-level programming languages like Fortran and ALGOL. In fact, one of the earliest programs identifiable as a compiler was called Autocode. Parnas concluded that "automatic programming has always been a euphemism for programming in a higher-level language than was then available to the programmer." Program synthesis is one type of automatic programming where a procedure is created from scratch, based on mathematical requirements. Origin Mildred Koss, an early UNIVAC programmer, explains: "Writing machine code involved several tedious steps—breaking down a process into discrete instructions, assigning specific memory locations to all the commands, and managing the I/O buffers. After following these steps to implement mathematical routines, a sub-routine library, and sorting programs, our task was to look at the larger programming process. We needed to understand how we might reuse tested code and have the machine help in programming. As we programmed, we examined the process and tried to think of ways to abstract these steps to incorporate them into higher-level language. This led to the development of interpreters, assemblers, compilers, and generators—programs designed to operate on or produce other programs, that is, automatic programming." 
Generative programming Generative programming and the related term meta-programming are concepts whereby programs can be written "to manufacture software components in an automated way" just as automation has improved "production of traditional commodities such as garments, automobiles, chemicals, and electronics." The goal is to improve programmer productivity. It is often related to code-reuse topics such as component-based software engineering. Source-code generation Source-code generation is the process of generating source code based on a description of the problem or an ontological model such as a template, and is accomplished with a programming tool such as a template processor or an integrated development environment (IDE). These tools allow the generation of source code through any of various means. Modern programming languages are well supported by tools like Json4Swift (Swift) and Json2Kotlin (Kotlin). Programs that could generate COBOL code included various application generators, which supported COBOL inserts and overrides. A macro processor, such as the C preprocessor, which replaces patterns in source code according to relatively simple rules, is a simple form of source-code generator. Source-to-source code generation tools also exist. Large language models such as ChatGPT are capable of generating a program's source code from a description of the program given in a natural language. Many relational database systems provide a function that will export the content of the database as SQL data definition queries, which may then be executed to re-import the tables and their data, or migrate them to another RDBMS. Some languages use "annotations" to generate source code and inject it. For example, this is done in Java and Kotlin using annotations, such as the Project Lombok library, which runs at compile time with an annotation processor. There has been a C++ proposal to add token sequence injection using compile-time reflection. 
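A template processor of the kind mentioned above can be sketched in a few lines: a description of the desired code is expanded into source text, which can then be compiled or executed like hand-written code. The class name and fields below are invented for illustration:

```python
# Minimal template-based source-code generation: a declarative
# description of fields is expanded into Python source text.
fields = [("name", "str"), ("age", "int")]

def generate_class(class_name, fields):
    """Generate Python source for a simple data-holding class."""
    lines = [f"class {class_name}:"]
    params = ", ".join(f"{n}: {t}" for n, t in fields)
    lines.append(f"    def __init__(self, {params}):")
    for n, _ in fields:
        lines.append(f"        self.{n} = {n}")
    return "\n".join(lines)

source = generate_class("Person", fields)

# The generated text is itself a runnable program fragment.
namespace = {}
exec(source, namespace)
person = namespace["Person"]("Ada", 36)
```

Real generators such as annotation processors or IDE wizards work the same way in principle, only with richer input models and target languages.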
Low-code applications A low-code development platform (LCDP) is software that provides an environment programmers use to create application software through graphical user interfaces and configuration instead of traditional computer programming. 
========================================
[SOURCE: https://www.fast.ai/posts/2023-11-07-dislightenment.html] | [TOKENS: 9847]
AI Safety and the Age of Dislightenment Jeremy Howard July 10, 2023 Abstract Proposals for stringent AI model licensing and surveillance will likely be ineffective or counterproductive, concentrating power in unsustainable ways, and potentially rolling back the societal gains of the Enlightenment. The balance between defending society and empowering society to defend itself is delicate. We should advocate for openness, humility and broad consultation to develop better responses aligned with our principles and values — responses that can evolve as we learn more about this technology with the potential to transform society for good or ill. Executive summary Artificial Intelligence is moving fast, and we don’t know what might turn out to be possible. OpenAI CEO Sam Altman thinks AI might “capture the light cone of all future value in the universe”. But things might go wrong, with some experts warning of “the risk of extinction from AI”. This has led many to propose an approach to regulating AI, set out in the whitepaper “Frontier AI Regulation: Managing Emerging Risks to Public Safety” (which we’ll refer to as “FAR”) and in the Parliament version of the EU AI Act, that goes as follows: Other experts, however, counter that “There is so much attention flooded onto x-risk (existential risk)… that it ‘takes the air out of more pressing issues’ and insidiously puts social pressure on researchers focused on other current risks.” Important as current risks are, does the threat of human extinction mean we should go ahead with this kind of regulation anyway? Perhaps not. As we’ll see, if AI turns out to be powerful enough to be a catastrophic threat, the proposal may not actually help. In fact it could make things much worse, by creating a power imbalance so severe that it leads to the destruction of society. These concerns apply to all regulations that try to ensure the models themselves (“development”) are safe, rather than just how they’re used. 
The effects of these regulations may turn out to be impossible to undo, and therefore we should be extremely careful before we legislate them. The kinds of model development that FAR and the AI Act aim to regulate are “foundation models” — general-purpose AI which can handle (to varying degrees of success) nearly any problem you throw at them. There is no way to ensure that any general-purpose device (like, say, a computer, or a pen) can’t ever be used to cause harm. Therefore, the only way to ensure that AI models can’t be misused is to ensure that no one can use them directly. Instead, they must be limited to a tightly controlled narrow service interface (like ChatGPT, an interface to GPT-4). But those with full access to AI models (such as those inside the companies that host the service) have enormous advantages over those limited to “safe” interfaces. If AI becomes extremely powerful, then full access to models will be critical to those who need to remain competitive, as well as to those who wish to cause harm. They can simply train their own models from scratch, or exfiltrate existing ones through blackmail, bribery, or theft. This could lead to a society where only groups with the massive resources to train foundation models, or the moral disregard to steal them, have access to humanity’s most powerful technology. These groups could become more powerful than any state. Historically, large power differentials have led to violence and subservience of whole societies. If we regulate now in a way that increases centralisation of power in the name of “safety”, we risk rolling back the gains made from the Age of Enlightenment, and instead entering a new age: the Age of Dislightenment. Instead, we could maintain the Enlightenment ideas of openness and trust, such as by supporting open-source model development. Open source has enabled huge technological progress through broad participation and sharing. Perhaps open AI models could do the same. 
Broad participation could allow more people with a wider variety of expertise to help identify and counter threats, thus increasing overall safety — as we’ve previously seen in fields like cyber-security. There are interventions we can make now, including the regulation of “high-risk applications” proposed in the EU AI Act. By regulating applications we focus on real harms and can make those most responsible directly liable. Another useful approach in the AI Act is to regulate disclosure, to ensure that those using models have the information they need to use them appropriately. AI impacts are complex, and as such there is unlikely to be any one panacea. We will not truly understand the impacts of advanced AI until we create it. Therefore we should not be in a rush to regulate this technology, and should be careful to avoid a cure which is worse than the disease. The big problem The rapid development of increasingly capable AI has many people asking to be protected, and many offering that protection. The latest is a white paper titled: “Frontier AI Regulation: Managing Emerging Risks to Public Safety’’ (FAR). Many authors of the paper are connected to OpenAI and Google, and to various organizations funded by investors of OpenAI and Google. FAR claims that “government involvement will be required to ensure that such ‘frontier AI models’ are harnessed in the public interest”. But can we really ensure such a thing? At what cost? There’s one huge, gaping problem which FAR fails to address.1 Anyone with access to the full version of a powerful AI model has far more power than someone that can only access that model through a restricted service. But very few people will have access to the full model. If AI does become enormously powerful, then this huge power differential is unsustainable. 
While superficially seeming to check off various safety boxes, the regulatory regime being advanced in FAR ultimately leads to a vast amount of power being placed into the entrenched companies (by virtue of them having access to the raw models), giving them an information asymmetry against all other actors - including governments seeking to regulate or constrain them. It may lead to the destruction of society. Here’s why: because these models are general-purpose computing devices, it is impossible to guarantee they can’t be used for harmful applications. That would be like trying to make a computer that can’t be misused (such as for emailing a blackmail threat). The full original model is vastly more powerful than any “ensured safe” service based on it can ever be. The full original model is general-purpose: it can be used for anything. But if you give someone a general-purpose computing device, you can’t be sure they won’t use it to cause harm. So instead, you give them access to a service which provides a small window into the full model. For instance, OpenAI provides public access to a tightly controlled and tuned text-based conversational interface to GPT-4, but does not provide full access to the GPT-4 model itself. If you control a powerful model that mediates all consumption and production of information,2 and it’s a proprietary secret, you can shape what people believe, how people act — and censor whatever you please. The ideas being advanced in FAR ultimately lead to the frontier of AI becoming inaccessible to everyone who doesn’t work at a small number of companies, whose dominance will be enshrined by virtue of these ideas. This is an immensely dangerous and brittle path for society to go down. The race So let’s recap what happens under these regulatory proposals. 
We have the world’s most powerful technology, rapidly developing all the time, but only a few big companies have access to the most powerful version of that technology that allows it to be used in an unrestricted manner. What happens next? Obviously, everyone who cares about power and money now desperately needs to find a way to get full access to these models. After all, anyone that doesn’t have full access to the most powerful technology in history can’t possibly compete. The good news for them is that the models are, literally, just a bunch of numbers. They can be copied trivially easily, and once you’ve got them, you can pass them around to all your friends for nothing. (FAR has a whole section on this, which it calls “The Proliferation Problem”.) There are plenty of experts on exfiltrating data around, who know how to take advantage of blackmail, bribery, social engineering, and various other methods which experience tells us are highly effective. For those with the discretion not to use such unsavory methods, but with access to resources, they too can join the ranks of the AI-capable by spending $100m or so.3 Even the smallest company on the Fortune Global 2000 has $7 billion annual revenue, making such an expenditure well within their budget. And of course most country governments could also afford such a bill. Of course, none of these organizations could make these models directly available to the public without contravening the requirements of the proposed regulations, but by definition at least some people in each organization will have access to the power of the full model. Those who crave power and wealth, but fail to get access to model weights, now have a new goal: get themselves into positions of power at organizations that have big models, or get themselves into positions of power at the government departments that make these decisions. 
Organizations that started out as well-meaning attempts to develop AI for societal benefit will soon find themselves part of the corporate profit-chasing machinery that all companies join as they grow, run by people who are experts at chasing profits. The truth is that this entire endeavor, this attempt to control the use of AI, is pointless and ineffective. Not only is “proliferation” of models impossible to control (because digital information is so easy to exfiltrate and copy), it turns out that restrictions on the amount of compute for training models are also impossible to enforce. That’s because it’s now possible for people all over the world to virtually join up and train a model together. For instance, Together Computer has created a fully decentralized, open, scalable cloud for AI, and recent research has shown it is possible to go a long way with this kind of approach. Graphics processing units (GPUs), the actual hardware used for training models, are the exact same hardware used for playing computer games. There is more compute capacity in the world currently deployed for playing games than for AI. Gamers around the world can simply install a small piece of software on their computers to opt into helping train these open-source models. Organizing such a large-scale campaign would be difficult, but not without precedent, as seen in the success of projects such as Folding@Home and SETI@Home. And developers are already thinking about how to ensure that regular people can continue to train these models — for instance, in a recent interview with Lex Fridman, Comma.ai founder George Hotz explained how his new company, Tiny Corp, is working on the “Tiny Rack”, which he explains is designed around the premise: “What’s the most power you can get into your house without arousing suspicion? And one of the answers is an electric car charger.” So he’s building an AI model training system that uses the same amount of power as a car charger.
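The decentralized training described above rests on a simple mechanism: each volunteer machine computes a gradient on its own local data, and only the gradients are pooled. A toy illustration in plain Python (the one-parameter “model” and the data are invented for this example; real systems such as Together’s add compression, scheduling, and fault tolerance on top of this basic move):

```python
# Toy sketch of decentralized training: each volunteer computes a
# gradient on its private local data; only gradients are shared,
# averaged, and applied to the common model.

# Each volunteer holds private examples of the same task: y = 2 * x
volunteer_data = [
    [(1.0, 2.0), (2.0, 4.0)],   # volunteer A's examples
    [(3.0, 6.0), (4.0, 8.0)],   # volunteer B's examples
    [(5.0, 10.0)],              # volunteer C's examples
]

def local_gradient(w, data):
    """Mean gradient of squared error for the model y_hat = w * x."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0    # shared model parameter, broadcast to all volunteers
lr = 0.01  # learning rate
for step in range(200):
    grads = [local_gradient(w, data) for data in volunteer_data]
    w -= lr * sum(grads) / len(grads)   # average and apply the update

print(round(w, 3))  # converges to the true value 2.0
```

Nothing in the loop requires the volunteers to share their data or to sit in the same data center; the coordination is just message-passing of a few numbers per step.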
The AI safety community is well aware of this problem, and has proposed various solutions.4 For instance, one recent influential paper by AI policy expert Yo Shavit, which examines surveillance mechanisms that can be added to computer chips, points out that: “As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development.” Any such approach must ensure that every manufacturer of such chips builds that surveillance capability into their chips, since obviously if a single company failed to do so, then everyone who wanted to train their own powerful models would use that company’s chips. Shavit notes that “exhaustively enforcing such rules at the hardware-level would require surveilling and policing individual citizens’ use of their personal computers, which would be highly unacceptable on ethical grounds”. The reality, however, is that such rules would be required for centralization and control to be effective, since personal computers can be used to train large models by simply connecting them over the internet. When the self-described pioneer of the AI Safety movement, Eliezer Yudkowsky, proposed airstrikes on unauthorized data centers and the threat of nuclear war to ensure compliance from states failing to control unauthorized use of computation capability, many were shocked. But bombing data centers and global surveillance of all computers is the only way to ensure the kind of safety compliance that FAR proposes.5

Regulate usage, not development

Alex Engler points out an alternative approach to enforced safety standards or licensing of models, which is to “regulate risky and harmful applications, not open-source AI models”.
This is how most regulations work: through liability. If someone does something bad, then they get in trouble. If someone creates a general-purpose tool that someone else uses to do something bad, the tool-maker doesn’t get in trouble. “Dual use” technologies like the internet, computers, and pen and paper are not restricted to big companies: anyone is allowed to build a computer, or make their own paper. They don’t have to ensure that what they build can only be used for societal benefit. This is a critical distinction: the distinction between regulating usage (that is, actually putting a model into use by making it part of a system — especially a high-risk system like medicine), versus development (that is, the process of training the model). The reason this distinction is critical is that these models are, in fact, nothing but mathematical functions. They take as input a bunch of numbers, and calculate and return a different bunch of numbers. They don’t do anything themselves — they can only calculate numbers. However, those calculations can be very useful! In fact, computers themselves are merely calculating machines (hence their name: “computers”). They are useful at the point they are used — that is, connected to some system that can actually do something. FAR addresses this distinction, claiming “Improvements in AI capabilities can be unpredictable, and are often difficult to fully understand without intensive testing. Regulation that does not require models to go through sufficient testing before deployment may therefore fail to reliably prevent deployed models from posing severe risks.” This is a non-sequitur. Because models cannot cause harm without being used, developing a model cannot be a harmful activity.6 Furthermore, because we are discussing general-purpose models, we cannot ensure safety of the model itself — it’s only possible to try to secure the use of a model.
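The “bunch of numbers in, bunch of numbers out” point can be shown directly. A minimal sketch in plain Python (the weights are invented; a real model has billions of them, but the structure is the same): the “model” is a pure function that only calculates, and nothing happens in the world until a surrounding system acts on its output:

```python
# A model is a pure mathematical function: numbers in, numbers out.
# These tiny invented weights stand in for a real model's billions.
WEIGHTS = [[0.5, -1.0], [2.0, 0.25]]

def model(inputs):
    """One linear layer: each output is a weighted sum of the inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in WEIGHTS]

# Calling the function just calculates; nothing happens in the world.
outputs = model([1.0, 2.0])
print(outputs)  # → [-1.5, 2.5]

# "Use" begins only when some surrounding system is wired to act on
# those numbers -- e.g. a deployed device reading them as a control
# signal. That connection point, not the arithmetic, is where risk lives.
```

This is why regulating usage targets the right layer: the function above is inert, while the system that consumes its outputs is what actually touches the world.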
Another useful approach to regulation is to consider securing access to sensitive infrastructure, such as chemical labs. FAR briefly considers this idea, saying “for frontier AI development, sector-specific regulations can be valuable, but will likely leave a subset of the high severity and scale risks unaddressed.” But it does not study it further, resting on that assumed “likely” subset of remaining risks to promote an approach which, as we’ve seen, could undo centuries of cultural, societal, and political development. If we are able to build advanced AI, we should expect that it could at least help us identify the sensitive infrastructure that needs hardening. If it’s possible to use such infrastructure to cause harm, then it seems very likely that it can be identified — if AI can’t identify it, then it can’t use it. Now of course, actually dealing with an identified threat might not be straightforward; if it turns out, for instance, that a benchtop DNA printer could be used to produce a dangerous pathogen, then hardening all those devices is going to be a big job. But it’s a much smaller and less invasive job than restricting all the world’s computing devices. This leads us to another useful regulatory path: deployment disclosure. If you’re considering connecting an automated system which uses AI to any kind of sensitive infrastructure, then we should require disclosure of this fact. Furthermore, certain types of connection and infrastructure should require careful safety checks and auditing in advance.

The path to centralization

Better AI can be used to improve AI. This has already been seen many times, even in the earlier era of less capable, less well-resourced algorithms. Google has used AI to improve how data centers use energy, to create better neural network architectures, and to create better methods for optimizing the parameters in those networks.
Model outputs have been used to create the prompts used to train new models, and to create the model answers for these prompts, and to explain the reasoning for answers. As models get more powerful, researchers will find more ways to use them to improve the data, models, and training process. There is no reason to believe that we are anywhere near the limits of the technology. There is no data which we can use to make definitive predictions about how far this can go, or what happens next. Those with access to the full models can build new models faster and better than those without. One reason is that they can fully utilize powerful features like fine-tuning, activations, and the ability to directly study and modify weights.7 One recent paper, for instance, found that fine-tuning allows models to solve challenging problems with orders of magnitude fewer parameters than foundation models. This kind of feedback loop results in centralization: the big companies get bigger, and other players can’t compete. The result is less competition, and therefore higher prices, less innovation, and lower safety (since there’s a single point of failure, and a larger profit motive which encourages risky behavior). There are other powerful forces towards centralization. Consider Google, for instance. Google has more data than anyone else on the planet. More data leads directly to better foundation models. Furthermore, as people use their AI services, they are getting more and more data about these interactions. They use AI to improve their products, making them more “sticky” for their users and encouraging more people to use them, resulting in them getting still more data, which improves their models and products based on them further. Also, they are increasingly vertically integrated, so they have few powerful suppliers. They create their own AI chips (TPUs), run their own data centers, and develop their own software.
Regulation of frontier model development encourages greater centralization. Licensing, in particular, is an approach proposed in FAR which is a potent centralization force. Licensing the development of frontier models requires that new entrants must apply for permission before being allowed to develop a model as good as, or better than, the current state of the art. This makes it even harder to compete with entrenched players. And it opens up an extremely strong path to regulatory capture, since it results in an undemocratic licensing board having the final say in who has access to build the most powerful technology on the planet. Such a body would be, as a result, potentially the most powerful group in the world.

Open source, and a new era of AI enlightenment

The alternative to craving the safety and certainty of control and centralization is to once again take the risk we took hundreds of years ago: the risk of believing in the power and good of humanity and society. Just as thinkers of the Enlightenment asked difficult questions like “What if everyone got an education? What if everyone got the vote?”, we should ask the question “What if everyone got access to the full power of AI?” To be clear: asking such questions may not be popular. The counter-enlightenment was a powerful movement for a hundred years, pushing back against “the belief in progress, the rationality of all humans, liberal democracy, and the increasing secularization of society”. It relied on a key assumption, as expounded by French philosopher Joseph de Maistre, that “Man in general, if reduced to himself, is too wicked to be free.” We can see from the results of the Enlightenment that this premise is simply wrong. But it’s an idea that just won’t go away. Sociologists have for decades studied and documented “elite panic” — the tendency of elites to assume that regular people will respond badly to disasters and that they must therefore be controlled. But that’s wrong too.
In fact, it’s more than wrong, as Rebecca Solnit explains: “I see these moments of crisis as moments of popular power and positive social change. The major example in my book is Mexico City, where the ’85 earthquake prompted public disaffection with the one-party system and, therefore, the rebirth of civil society.” What does it look like to embrace the belief in progress and the rationality of all humans when we respond to the threat of AI mis-use? One idea which many experts are now studying is that open source models may be the key. Models are just software — they are mathematical functions embodied as code. When we copy software, we don’t usually call it “proliferation” (as FAR does). That word is generally associated with nuclear weapons. When we copy software, we call it “installing”, or “deploying”, or “sharing”. Because software can be freely copied, it has inspired a huge open source movement which considers this sharing a moral good. When all can benefit, why restrict value to a few? This idea has been powerful. Today, nearly every website you use is running an open source web server (such as Apache), which in turn is installed on an open source operating system (generally Linux). Most programs are compiled with open source compilers, and written with open source editors. Open source documents like Wikipedia have been transformative. Initially, these were seen as crazy ideas that had plenty of skeptics, but in the end, they proved to be right. Quite simply, much of the world of computers and the internet that you use today would not exist without open source. What if the most powerful AI models were open source? There will still be Bad Guys looking to use them to hurt others or unjustly enrich themselves. But most people are not Bad Guys. Most people will use these models to create, and to protect. 
How better to be safe than to have the massive diversity and expertise of human society at large doing their best to identify and respond to threats, with the full power of AI behind them? How much safer would you feel if the world’s top cyber-security, bio-weapons, and social engineering academics were working with the benefits of AI to study AI safety, and you could access and use all of their work yourself, compared to if only a handful of people at a for-profit company had full access to AI models? To regain the benefits of full model access, and to reduce the level of commercial control over what has previously been an open research community with a culture of sharing, the open-source community has recently stepped in and trained a number of quite capable language models. As of July 2023, the best of these are at a similar level to the second-tier cheaper commercial models, but not as good as GPT-4 or Claude. They are rapidly increasing in capability, and are attracting increasing investment from wealthy donors, governments, universities, and companies that are seeking to avoid concentration of power and ensure access to high quality AI models. However, the proposals for safety guarantees in FAR are incompatible with open source frontier models. FAR proposes that “it may be prudent to avoid potentially dangerous capabilities of frontier AI models being open sourced until safe deployment is demonstrably feasible”. But even if an open-source model is trained in the exact same way from the exact same data as a regulatorily-approved closed commercial model, it can still never provide the same safety guarantees. That’s because, as a general-purpose computing device, anybody could use it for anything they want — including fine-tuning it using new datasets and for new tasks. Open source is not a silver bullet. It still requires care, cooperation, and deep and careful study.
By making the systems available to all, we ensure that all of society can both benefit from their capabilities and work to understand and counter their potential harms. Stanford and Princeton’s top AI and policy groups teamed up to respond to the US government’s request for comment on AI accountability, stating that: “For foundation models to advance the public interest, their development and deployment should ensure transparency, support innovation, distribute power, and minimize harm… We argue open-source foundation models can achieve all four of these objectives, in part due to inherent merits of open-source (pro-transparency, pro-innovation, anti-concentration)” Furthermore, they warn that: “If closed-source models cannot be examined by researchers and technologists, security vulnerabilities might not be identified before they cause harm… On the other hand, experts across domains can examine and analyze open-source models, which makes security vulnerabilities easier to find and address. In addition, restricting who can create FMs would reduce the diversity of capable FMs and may result in single points of failure in complex systems.” The idea that access to the best AI models is critical to studying AI safety is, in fact, fundamental to the origin story of two of the most advanced AI companies today: OpenAI and Anthropic. Many have expressed surprise that the executives of these companies have loudly warned of the potential existential risks of AI, yet they’re building those very models themselves. But there’s no conflict here — they’ve explained that the reason they do this is because they don’t believe it’s possible to properly understand and mitigate AI risks without access to the best available models. Access to open source models is at grave risk today. The European AI Act may effectively ban open source foundation models, based on similar principles to those in FAR.
Technology innovation policy analyst Alex Engler, in his article “The EU’s attempt to regulate open-source AI is counterproductive”, writes: “The Council’s attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of GPAI. Open-source AI models deliver tremendous societal value by challenging the domination of GPAI by large technology companies and enabling public knowledge about the function of AI.”

First, do no harm

FAR concludes that “Uncertainty about the optimal regulatory approach to address the challenges posed by frontier AI models should not impede immediate action”. But perhaps it should. Indeed, AI policy experts Patrick Grady and Daniel Castro recommend exactly this — don’t be in a hurry to take regulatory action: ‘The fears around new technologies follow a predictable trajectory called “the Tech Panic Cycle.” Fears increase, peak, then decline over time as the public becomes familiar with the technology and its benefits. Indeed, other previous “generative” technologies in the creative sector such as the printing press, the phonograph, and the Cinématographe followed this same course. But unlike today, policymakers were unlikely to do much to regulate and restrict these technologies. As the panic over generative AI enters its most volatile stage, policymakers should take a deep breath, recognize the predictable cycle we are in, and put any regulation efforts directly aimed at generative AI temporarily on hold.’ Instead, perhaps regulators should consider the medical guidance of Hippocrates: “do no harm”. Medical interventions can have side effects, and the cure can sometimes be worse than the disease. Some medicines may even damage immune response, leaving a body too weakened to be able to fight off infection. So too with regulatory interventions.
Not only can the centralization and regulatory capture impacts of “ensuring safety” cause direct harm to society, but they can even result in decreased safety. If just one big organization holds the keys to vast technological power, we find ourselves in a fragile situation where the rest of society does not have access to the same power to protect ourselves. A fight for power could even be the trigger for the kind of AI misuse that brings about societal destruction. The impact of AI regulations will be nuanced, complex, and hard to predict. The balance between defending society and empowering society to defend itself is precariously delicate. Rushing to regulate seems unlikely to walk that tight-rope successfully. We have time. The combined capabilities of all of human society are enormous, and for AI to surpass that capability is a big task. Ted Sanders, an OpenAI technical expert who has won numerous technology forecasting competitions, along with Ari Allyn-Feuer, Director of AI at GSK, completed an in-depth 114-page analysis of the timeframes associated with AI development, concluding that “we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%”. Importantly, the more time passes, the more we learn. Not just about the technology, but how society responds to it. We should not rush to implement regulatory changes which put society on a dystopian path that may be impossible to get off. Concerns about the safety of advanced language models are not new. In early 2019 I wrote “Some thoughts on zero-day threats in AI, and OpenAI’s GPT-2”, a reaction to OpenAI’s controversial and (at the time) unusual decision to not release the weights of their new language model. In considering this decision, I pointed out that: The most in-depth analysis of this topic is the paper The Malicious Use of Artificial Intelligence.
The lead author of this paper now works at OpenAI, and was heavily involved in the decision around the model release. Let’s take a look at the recommendations of that paper: “The Malicious Use of Artificial Intelligence” was written by 26 authors from 14 institutions, spanning academia, civil society, and industry. The lead author is today the Head of Policy at OpenAI. It’s interesting to see how far OpenAI, as co-creator of FAR, has moved from these original ideas. The four recommendations from the Malicious Use paper are full of humility — they recognise that effective responses to risks involve “proactively reaching out to relevant actors”, learning from “research areas with more mature methods for addressing dual-use concerns, such as computer security”, and “expand[ing] the range of stakeholders and domain experts involved in discussions”. The focus was not on centralization and control, but on outreach and cooperation. The idea that the robot apocalypse may be coming is a striking and engaging one. FAR warns that we must “guard against models potentially being situationally aware and deceptive”, linking to an article claiming that our current path “is likely to eventually lead to a full-blown AI takeover (i.e. a possibly violent uprising or coup by AI systems)”. It’s the kind of idea that can push us to something, anything, that makes us feel more safe. To push back against this reaction requires maturity and a cool head. The ancient Greeks taught us about the dangers of hubris: excessive pride, arrogance, or overconfidence. When we are over-confident that we know what the future has in store for us, we may well over-react and create the very future we try to avoid. What if, in our attempts to avoid an AI apocalypse, we centralize control of the world’s most powerful technology, dooming future society to a return to a feudal state in which the most valuable commodity, compute, is owned by an elite few?
We would be like King Oedipus, prophesied to kill his father and marry his mother, who ends up doing exactly that as a result of actions designed to avoid that fate. Or Phaethon, so confident in his ability to control the chariot of the sun that he avoids the middle path laid out by Helios, his father, and in the process nearly destroys Earth. “The Malicious Use of Artificial Intelligence” points towards a different approach, based on humility: one of consultation with experts across many fields, cooperation with those impacted by technology, in an iterative process that learns from experience. If we did take their advice and learn from computer security experts, for instance, we would learn that a key idea from that field is that “security through obscurity” — that is, hiding secrets as a basis for safety and security — is ineffective and dangerous. Cyber-security experts Arvind Narayanan, director of Princeton’s Center for Information Technology Policy, and Sayash Kapoor, in a recent analysis, detailed five “major AI risks” that would be caused by licensing and similar regulations where “only a handful of companies would be able to develop state-of-the-art AI”.

How did we get here?

Everyone I know who has spent time using tools like GPT-4 and Bard has been blown away by their capabilities — including me! Despite their many mistakes (aka “hallucinations”), they can provide all kinds of help on nearly any topic. I use them daily for everything from coding help to playtime ideas for my daughter. As FAR explains: “Foundation models, such as large language models (LLMs), are trained on large, broad corpora of natural language and other text (e.g., computer code), usually starting with the simple objective of predicting the next “token.” This relatively simple approach produces models with surprisingly broad capabilities.
These models thus possess more general-purpose functionality than many other classes of AI models” It goes on to say: “In focusing on foundation models which could have dangerous, emergent capabilities, our definition of frontier AI excludes narrow models, even when these models could have sufficiently dangerous capabilities. For example, models optimizing for the toxicity of compounds or the virulence of pathogens could lead to intended (or at least foreseen) harms and thus may be more appropriately covered with more targeted regulation. Our definition focuses on models that could — rather than just those that do — possess dangerous capabilities” Therefore, the authors propose “safety standards for responsible frontier AI development and deployment” and “empowering a supervisory authority to identify and sanction non-compliance; or by licensing the deployment and potentially the development of frontier AI”. They propose doing this in order to “ensure that” models “are harnessed in the public interest”. Let’s say these proposals are accepted and this regulation is created. What happens next? Well, there are two possibilities: (1) AI turns out to be a technology much like those that came before it, or (2) AI turns out to be something far more powerful. In the case of (1), there’s little more to discuss. The regulations proposed in FAR would, at worst, be unnecessary, and perhaps lead to some regulatory capture of a fairly valuable product space. That would be a shame, but we can live with it. But this isn’t the case that FAR’s proposals are designed to handle — for the risks of misuse of regular technology like that we already have plenty of simple, well-understood approaches, generally based on liability for misuse (that is, if you do something bad using some technology, you get in trouble; the folks that made the technology don’t generally get in trouble too, unless they were negligent or otherwise clearly and directly contributed to the bad thing). Therefore we should focus on (2) — the case where AI turns out to be a very big deal indeed.
To be clear, no one is certain this is going to happen, but plenty of people who have studied AI for a long time think it’s a real possibility.

Humanity’s most powerful technology

We are now in the era of “general-purpose artificial intelligence” (GPAI) thanks to “universal” or “foundation” models, such as OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. These models are general-purpose computing devices. They can answer (with varying degrees of success) nearly any question you can throw at them. As foundation models get more powerful, we should expect researchers to find more ways to use them to improve the data, models, and training process. Current models, dataset creation techniques, and training methods are all quite simple – the basic ideas fit in a few lines of code. There are a lot of fairly obvious paths to greatly improve them, and no reason to believe that we are anywhere near the limits of the technology. So we should expect to see increasingly fast cycles of technological development over the coming months and years. There is no data which we can use to make definitive predictions about how far this can go, or what happens next. Many researchers and AI company executives believe that there may be no practical limit. But these models are expensive to train. Thanks to technological advances, training a model of a given size is getting cheaper, but the models themselves keep getting bigger. GPT-4 may have cost around $100m to train. All the most powerful current models, GPT-4, Bard, and Claude, have been trained by large US companies (OpenAI, Google, and Anthropic respectively), with similar efforts underway in China.

Building together

There are already a great many regulatory initiatives in place, including The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s AI Risk Management Framework, and Biden’s Executive Order 14091 to protect Americans against algorithmic discrimination.
The AI community has also developed effective mechanisms for sharing important information, such as Datasheets for Datasets, Model Cards for Model Reporting, and Ecosystem Graphs. Regulation could require that datasets and models include information about how they were built or trained, to help users deploy them more effectively and safely. This is analogous to nutrition labels: whilst we don’t ban people from eating too much junk food, we endeavor to give them the information they need to make good choices. The proposed EU AI Act already includes requirements for exactly this kind of information. Whilst there is a lot of good work we can build on, there’s still much more to be done. The world of AI is moving fast, and we’re learning every day. Therefore, it’s important that we ensure the choices we make preserve optionality in the future. It’s far too early for us to pick a single path and decide to hurtle down it with unstoppable momentum. Instead, we need to be able, as a society, to respond rapidly and in an informed way to new opportunities and threats as they arise. That means involving a broad cross-section of experts from all relevant domains, along with members of impacted communities. The more we can build capacity in our policy making bodies, the better. Without a deep understanding of AI amongst decision makers, they have little choice but to defer to industry. But as Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center, said, “We need to keep CEOs away from AI regulation”: “Imagine the chief executive of JPMorgan explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection and set liquidity to loan ratios. He would be laughed out of the room. Angry constituents would point out how well self-regulation panned out in the global financial crisis. 
From big tobacco to big oil, we have learnt the hard way that businesses cannot set disinterested regulations. They are neither independent nor capable of creating countervailing powers to their own.” We should also be careful not to allow engaging and exciting sci-fi scenarios to distract us from immediate real harms. Aidan Gomez, a co-creator of the transformer neural network architecture, which powers all the top language models including GPT-4, warns: “There are real risks with this technology. There are reasons to fear this technology, and who uses it, and how. So, to spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace… I would really hope that the public knows some of the more fantastical stories about risk [are unfounded]. They’re distractions from the conversations that should be going on.”

The dislightenment

What if, faced with a new power, with uncertainty, with a threat to our safety, we withdraw to the certainty of centralization, of control, of limiting power to a select few? This is the Dislightenment: the roll-back of the principles that brought us the Age of Enlightenment. We would create a world of “haves” and “have-nots”. The “haves” (big companies, organized crime, governments, and everyone that convinces their friends and family members to get a copy of the weights for them, and everyone that accesses darknet sites where hackers distribute those weights, and everyone that copies them…) can build better and better models, models which can (according to FAR) be used for mass propaganda, bio and cyber threat development, or simply for the purpose of ensuring you beat all of your competition and monopolize the most strategic and profitable industries. The “have-nots” would provide little value to society, since they can only access AI through narrow portals which provide limited (but “safe”) applications.
The push for commercial control of AI capability is dangerous. Naomi Klein, who coined the term “shock doctrine” as “the brutal tactic of using the public’s disorientation following a collective shock… to push through radical pro-corporate measures”, is now warning that AI is “likely to become a fearsome tool of further dispossession and despoliation”. Once we begin down this path, it’s very hard to turn back. It may, indeed, be impossible. Technology policy experts Anja Kaspersen, Kobi Leins, and Wendell Wallach, in their article “Are We Automating the Banality and Radicality of Evil?”, point out that deploying bad solutions (such as poorly designed regulation) can take decades to undo, if the bad solution turns out to be profitable to some: “The rapid deployment of AI-based tools has strong parallels with that of leaded gasoline. Lead in gasoline solved a genuine problem—engine knocking. Thomas Midgley, the inventor of leaded gasoline, was aware of lead poisoning because he suffered from the disease. There were other, less harmful ways to solve the problem, which were developed only when legislators eventually stepped in to create the right incentives to counteract the enormous profits earned from selling leaded gasoline.” With centralization, we will create “haves” and “have-nots”, and the “haves” will have access to a technology that makes them vastly more powerful than everyone else. When massive power and wealth differentials are created, they are captured by those that most want power and wealth, and history tells us violence is the only way such differentials can be undone. As John F. Kennedy said, “Those who make peaceful revolution impossible will make violent revolution inevitable.” Perhaps, with the power of AI and the creation of the surveillance needed to maintain control, even violence will be an ineffective solution. If we do start in this direction, let’s do it with eyes open, understanding where it takes us. 
The fragility of the Age of Enlightenment Through most of human history, the future was scary. It was unsafe. It was unknown. And we responded in the most simple and obvious way: by collectively placing our trust in others more powerful than us to keep us safe. Most societies restricted dangerous tools like education and power to an elite few. But then something changed. A new idea took hold in the West. What if there is another way to be safe: to trust in the overall good of society at large, rather than put faith in a powerful elite? What if everyone had access to education? To the vote? To technology? This—though it would take a couple more centuries of progress for its promises to be fully realized—was the Age of Enlightenment. Now that so many of us live in liberal democracies it’s easy to forget how fragile and rare this is. But we can see nations around the world now sliding into the arms of authoritarian leaders. As Hermann Göring said, “The people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked…” Let’s be clear: we are not being attacked. Now is not the time to give up the hard-won progress we’ve made towards equality and opportunity. No one can guarantee your safety, but together we can work to build a society, with AI, that works for all of us. Appendix: Background This document started out as a red team review of Frontier AI Regulation: Managing Emerging Risks to Public Safety. Although red-teaming isn’t common for policy proposals (it’s mainly used in computer security) it probably should be, since such proposals can have risks that are difficult to foresee without careful analysis. 
Following the release of the Parliament Version of the EU AI Act (which included sweeping new regulation of foundation model development), along with other similar private regulatory proposals from other jurisdictions that I was asked to review, I decided to expand our analysis to cover regulation of model development more generally. I’ve discussed these issues during the development of this review with over 70 experts from the regulatory, policy, AI safety, AI capabilities, cyber-security, economics, and technology transition communities, and have looked at over 300 academic papers. Eric Ries and I recorded a number of expert interviews together, which we will be releasing in the coming weeks. Our view is that the most important foundation for society to successfully transition to an AI future is for all of society to be involved, engaged, and informed. Therefore, we are working to build a cross-disciplinary community resource, to help those working on responses to the potential opportunities and threats of advanced AI. This resource will be called “AI Answers”. The review you’re reading now is the first public artifact to come out of the development of this project. If you’re a policy maker or decision maker in this field, or do research in any area that you feel has results possibly useful to this field, we want to hear from you! Acknowledgments Eric Ries has been my close collaborator throughout the development of this article and I’m profoundly appreciative of his wisdom, patience, and tenacity. Many thanks for the detailed feedback from our kind reviewers: Percy Liang, Marietje Schaake, Jack Clark, Andrew Maynard, Vijay Sundaram, and Brian Christian. Particularly special thanks to Yo Shavit, one of the authors of FAR, who was very generous in his time in helping me strengthen this critique of his own paper! I’m also grateful for the many deep conversations with Andy Matuschak, whose thoughtful analysis was critical in developing the ideas in this article. 
I’d also like to acknowledge Arvind Narayanan, Sayash Kapoor, Seth Lazar, and Rich Harang for the fascinating conversations that Eric and I had with them. Thank you to Jade Leung from OpenAI and Markus Anderljung from Governance.ai for agreeing to the review process and for providing pre-release versions of FAR for us to study. Footnotes Although to be fair to the authors of the paper — it’s not a problem I’ve seen mentioned or addressed anywhere.↩︎ As will happen if AI continues to develop in capability, without limit.↩︎ The cost of frontier models may continue to rise. Generative AI startup inflection.ai recently raised $1.3 billion, and plans to spend most of it on GPUs. But hundreds of companies could still afford to train a model even at that cost. (And even if they couldn’t, the implication is that theft then becomes the only way to compete. It doesn’t mean that models won’t proliferate.)↩︎ Although they are not discussed in FAR.↩︎ At least, in the case that AI turns out to be powerful enough that such regulation is justified in the first place.↩︎ This doesn’t mean that model development should be done without consideration of ethics or impact. Concepts like open source, responsible innovation, informed dialogue and democratic decision making are all an important part of model development. But it does mean we do not need to ensure safety at the point of development.↩︎ The only commercially available models that provide fine-tuning and activations, as of July 2023, are older, less capable models, and weights are not available for any major commercial model. OpenAI plans to provide some fine-tuning and activations features for GPT-4 down the track, but they will have had over a year head start over everyone else at that point. Regardless, without access to the weights, developers’ ability to fully customize and tune models remains limited.↩︎
========================================
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-2] | [TOKENS: 5641]
Contents World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways how things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. 
Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. 
There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. 
This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way how things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other. 
This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. 
This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. 
Against pantheism, it holds that there is no outright identity between the two. History of philosophy In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is due to the fact that physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world". 
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in Western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things, it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just like the play is more than the imaginary realities appearing in it, so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. 
Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes. Religion Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a wide sense in this tradition, including both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools. 
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible by overcoming this illusion through acquiring the knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar). Related terms and problems A worldview is a comprehensive representation of the world and our place in it. 
As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of our existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities. 
This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time, and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized concerning their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions. 
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. 
A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. 
Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states, but they also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this view, is more complex than a mere balance of power, since more agents and interests are involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. If the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it this way, and it may change, since this is not prefigured by human nature, according to the constructivists. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Statistical_model] | [TOKENS: 2562]
Contents Statistical model A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen). Introduction Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is ⁠1/6⁠. From that assumption, we can calculate the probability of both dice coming up 5: ⁠1/6⁠ × ⁠1/6⁠ = ⁠1/36⁠. More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is ⁠1/8⁠ (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5: ⁠1/8⁠ × ⁠1/8⁠ = ⁠1/64⁠. We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. 
The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. Formal definition In mathematical terms, a statistical model is a pair $(S, \mathcal{P})$, where $S$ is the set of possible observations, i.e. the sample space, and $\mathcal{P}$ is a set of probability distributions on $S$. The set $\mathcal{P}$ represents all of the models that are considered possible. This set is typically parameterized: $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. The set $\Theta$ defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. $F_{\theta_1} = F_{\theta_2} \Rightarrow \theta_1 = \theta_2$ (in other words, the mapping is injective), it is said to be identifiable. In some cases, the model can be more complex. An example Suppose that we have a population of children, with the ages of the children distributed uniformly in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall.
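The dice model above can be made concrete in a few lines. This is a minimal sketch (the `prob` helper and the event predicates are our own illustrative names, not from any library): the sample space of 36 outcomes is enumerated explicitly, and the probability of any event is the fraction of outcomes satisfying it, which is exactly what makes the fair-dice assumption a statistical model.

```python
from fractions import Fraction
from itertools import product

# Sample space S: all 36 ordered outcomes of rolling two fair dice.
S = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a predicate over outcomes) under the
    fair-dice model: each of the 36 outcomes has probability 1/36."""
    return Fraction(sum(1 for outcome in S if event(outcome)), len(S))

# Probability both dice come up 5: 1/6 x 1/6 = 1/36.
p_both_five = prob(lambda o: o == (5, 5))

# Probability of the compound event "(1 and 2) or (3 and 3) or (5 and 6)".
p_compound = prob(lambda o: sorted(o) in ([1, 2], [3, 3], [5, 6]))
```

Under the alternative assumption (only the probability of a 5 is known), no such exhaustive enumeration is possible, which is why it fails to be a statistical model.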
We could formalize that relationship in a linear regression model, like this: $\mathrm{height}_i = b_0 + b_1 \mathrm{age}_i + \varepsilon_i$, where $b_0$ is the intercept, $b_1$ is a parameter that age is multiplied by to obtain a prediction of height, $\varepsilon_i$ is the error term, and $i$ identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line ($\mathrm{height}_i = b_0 + b_1 \mathrm{age}_i$) cannot be admissible for a model of the data, unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, $\varepsilon_i$, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the $\varepsilon_i$. For instance, we might assume that the $\varepsilon_i$ distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: $b_0$, $b_1$, and the variance of the Gaussian distribution. We can formally specify the model in the form $(S, \mathcal{P})$ as follows. The sample space, $S$, of our model comprises the set of all possible pairs (age, height). Each possible value of $\theta = (b_0, b_1, \sigma^2)$ determines a distribution on $S$; denote that distribution by $F_\theta$. If $\Theta$ is the set of all possible values of $\theta$, then $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying $S$ and (2) making some assumptions relevant to $\mathcal{P}$. There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian.
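The three-parameter model just described can be sketched numerically. In this sketch the data are synthetic, and the "true" values of $b_0$, $b_1$, and $\sigma$ are illustrative assumptions of ours, not from the text: ordinary least squares recovers the two regression parameters, and the residuals give an estimate of the third parameter, the error variance.

```python
import random

# Synthetic data from height_i = b0 + b1*age_i + eps_i with i.i.d. Gaussian
# errors; these "true" parameter values are illustrative assumptions.
random.seed(0)
b0_true, b1_true, sigma_true = 0.75, 0.10, 0.05   # meters, meters per year
ages = [random.uniform(2, 12) for _ in range(500)]
heights = [b0_true + b1_true * a + random.gauss(0, sigma_true) for a in ages]

# Ordinary least squares for the two regression parameters.
n = len(ages)
mean_age = sum(ages) / n
mean_height = sum(heights) / n
b1_hat = (sum((a - mean_age) * (h - mean_height) for a, h in zip(ages, heights))
          / sum((a - mean_age) ** 2 for a in ages))
b0_hat = mean_height - b1_hat * mean_age

# The model's third parameter: the variance of the Gaussian error term,
# estimated from the residuals.
var_hat = sum((h - (b0_hat + b1_hat * a)) ** 2
              for a, h in zip(ages, heights)) / n
```

The closed-form simple-regression formulas are used here for transparency; a regression library would produce the same estimates.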
The assumptions are sufficient to specify $\mathcal{P}$, as they are required to do. General remarks A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables are stochastic. In the above example with children's heights, $\varepsilon$ is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa; those purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description. Dimension of a model Suppose that we have a statistical model $(S, \mathcal{P})$ with $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. In notation, we write $\Theta \subseteq \mathbb{R}^k$, where $k$ is a positive integer ($\mathbb{R}$ denotes the real numbers; other sets can be used, in principle). Here, $k$ is called the dimension of the model.
The model is said to be parametric if $\Theta$ has finite dimension. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that $F_\theta$ is the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$, so that $\theta = (\mu, \sigma)$ and $\Theta = \mathbb{R} \times \mathbb{R}^{+}$. In this example, the dimension, $k$, equals 2. As another example, suppose that the data consists of points $(x, y)$ that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note that the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formally $\theta \in \Theta$ is a single parameter that has dimension $k$, it is sometimes regarded as comprising $k$ separate parameters. For example, with the univariate Gaussian distribution, $\theta$ is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters: the mean and the standard deviation. A statistical model is nonparametric if the parameter set $\Theta$ is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if $k$ is the dimension of $\Theta$ and $n$ is the number of samples, both semiparametric and nonparametric models have $k \rightarrow \infty$ as $n \rightarrow \infty$. If $k/n \rightarrow 0$ as $n \rightarrow \infty$, then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models.
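As a small illustration of a parametric model with $k = 2$, the univariate Gaussian's two parameters can be estimated by maximum likelihood. In this sketch the sample is synthetic, and the true $\mu$ and $\sigma$ are our own illustrative choices:

```python
import math
import random

# Synthetic sample from a univariate Gaussian; mu/sigma are illustrative.
random.seed(1)
mu_true, sigma_true = 3.0, 0.5
xs = [random.gauss(mu_true, sigma_true) for _ in range(10_000)]

# Maximum-likelihood estimates of the model's k = 2 parameters:
# the sample mean and the (biased) sample standard deviation.
n = len(xs)
mu_hat = sum(xs) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in xs) / n)

# theta_hat is one point in the 2-dimensional parameter set Theta.
theta_hat = (mu_hat, sigma_hat)
```

With a large sample the estimates land close to the true parameter point, illustrating that fitting a parametric model means selecting one $\theta \in \Theta$.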
Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies". Nested models Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model $y = b_0 + b_1 x + b_2 x^2 + \varepsilon$ has, nested within it, the linear model $y = b_0 + b_1 x + \varepsilon$: we constrain the parameter $b_2$ to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. Comparing models Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R², Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam. See also Notes References Further reading
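The nesting of the linear model inside the quadratic model can be checked numerically. In this sketch (the data points and helper functions are our own illustrative constructions), both models are fitted by least squares; because the linear model is the quadratic model with $b_2$ constrained to 0, the quadratic fit's residual sum of squares can never exceed the linear fit's.

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination. Returns coefficients [b0, b1, ...]."""
    k = degree + 1
    # Normal equations A c = b for the design matrix X[i][j] = xs[i]**j.
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

def rss(xs, ys, coef):
    """Residual sum of squares of the fitted polynomial."""
    return sum((y - sum(c * x ** j for j, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

# Nearly linear synthetic data (illustrative values).
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.8, 5.1]
rss_linear = rss(xs, ys, fit_poly(xs, ys, 1))
rss_quadratic = rss(xs, ys, fit_poly(xs, ys, 2))
# Nesting guarantees rss_quadratic <= rss_linear.
```

This monotone improvement in fit is exactly why criteria such as AIC and the likelihood-ratio test penalize the extra parameter when comparing nested models.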
========================================
[SOURCE: https://he.wikipedia.org/wiki/אצ"ל] | [TOKENS: 63278]
Contents Irgun (Etzel) The National Military Organization in the Land of Israel (known by its Hebrew acronym Etzel, commonly the Irgun) was a Hebrew underground military organization founded in Jerusalem in 1931. The organization was established by commanders who left the Haganah over their demand for more resolute action against the Arab aggression of those days, chiefly that of the 1929 riots. Most of its members were young people from the Revisionist youth movement Betar. For reasons of secrecy the organization was not called by its name but rather "HaMa'amad". Its members joined the IDF during the War of Independence. The British Mandate government regarded the Irgun as a terrorist organization. This characterization was shared by some of the organization's opponents, as well as by other bodies such as the Anglo-American Committee of Inquiry on Palestine, the Jewish Agency, and international newspapers in the period before the founding of the state. Some historians regard the struggle in which the Irgun took part as a significant factor in the British departure from the Land of Israel; other historians regard the struggle against the British as a secondary factor in the British decision to give up the Mandate and ultimately leave the country. Founding of the organization and its character The Irgun's members came mainly from the ranks of Betar and the Revisionist movement in the Land of Israel and the diaspora. The Revisionist movement in effect gave the underground organization a public face. Ze'ev Jabotinsky, the founder of Revisionist Zionism, was the supreme leader of the organization until his death. He set the general lines of the organization's activity, as in the matter of restraint and its breaking, and the members of the underground acted under the inspiration of his doctrine. Nevertheless, the Irgun's formal subordination to an external political leadership weakened over the years. Additional ideological sources of its fighting spirit were the legacy of "Brit HaBirionim" and the poetry of Uri Zvi Greenberg. The organization's emblem, the inscription "Rak Kach" ("Only Thus") beside a hand gripping a rifle against the background of the Land of Israel on both banks of the Jordan, expressed an aspiration to Hebrew independence over the whole Land of Israel, to be achieved only by the force of Hebrew arms. The emblem was first conceived by Lily Strassman-Lubinsky, an activist in the Revisionist movement in Poland and editor of the organization's journal "Jerusalem Liberated", which became the Irgun's weekly in Poland. This emblem later became the symbol of the Irgun and was designed by Yehoshua Adari, the illustrator and cartoonist of the Revisionist newspaper "HaMashkif". The number of members varied over the years, ranging from a few hundred in years of crisis to several thousand in peak periods.
Most of its members were people who accepted the underground's authority and carried out missions and duties within it (often in violation of British government law). Most were "ordinary" people of the Yishuv who held a trade or regular employment, and only a few dozen engaged exclusively in the organization's work. Toward the elected political leadership of the Yishuv and the World Zionist Organization, the Irgun disputed strategy, basic conceptions, and political, military, and propaganda tactics on many issues, such as the use of force and arms to achieve the goals of Zionism, the attitude toward the Arab population during riots, and relations with Britain, which ruled the Land of Israel. Accordingly, the Irgun rejected the decisions of the Zionist leadership and the institutions of the Yishuv. Its refusal to accept their authority, and the ideological struggles of Irgun members against the camp of the General Federation of Labor (Histadrut), which was the dominant force in the Zionist Organization, led the elected institutions to deny the Irgun's independent existence, and for most of its years the "dissident" organization was viewed by them as irresponsible and as one whose actions should be prevented. The Irgun therefore accompanied its armed operations with political information campaigns intended to persuade the public of the justice of its path and to denounce what it saw as the failings of the official Yishuv leadership's political conduct. The organization published numerous leaflets and underground newspapers, and even operated the first independent Hebrew radio station, "Kol Zion HaLochemet" ("The Voice of Fighting Zion"). Members, men and women, were mostly aged 16 to 25, most of them unmarried. The organization had more immigrants than native-born members. Some 27% were of Sephardi origin, similar to the proportion of Sephardim in the country's Jewish population. Women made up about 15% of the membership, assuming a total of about 6,200 activists. As an underground organization, its members did not call it by its official name but used various nicknames. In its early years it was known mainly as "HaHaganah HaLeumit" ("The National Defense"), and also as "Irgun Bet", "Haganah Bet", "the parallel organization", and "the right-wing organization". In later years it was called mainly "HaMa'amad", and also "HaLevav" and "HaLiman". The organization's anthem was "Chayalim Almonim" ("Unknown Soldiers") by Avraham Stern ("Yair"), a commander in the organization. Later, after Stern left the Irgun to found Lehi, the anthem was changed to the third stanza of "Shir Betar" by Ze'ev Jabotinsky. From 1933, about two years after its founding, the Irgun operated under a "supervisory committee" that included representatives of most of the non-labor Zionist parties.
After the organization split in 1937, it was left mainly with supporters of Ze'ev Jabotinsky, and it was politically subordinate to him. After Jabotinsky's death in 1940, contact was maintained with the political leadership of the New Zionist Organization. The tie between the legal organization and the underground was severed in 1944, when the Irgun declared war on British rule, and from then on the organization stood on its own. Within the organization, Avraham Tehomi was the first to serve as "head of the command", or "chief commander", alongside "the Command". As the organization grew, it was divided into "districts". A local Irgun unit was called a "branch". An Irgun "company" comprised three platoons, also called "gundot". A gunda comprised two squads, each headed by a "squad leader" and a deputy; this was the basic unit. As its activity continued, various departments were established in the Irgun, over which a "headquarters" or "center" presided. Ranks, introduced later, were (in ascending order): deputy, squad leader, samal (over a gunda), samal aleph (company), rav-samal (battalion); the officer ranks were gundar (commander of a district or unit) and gundar rishon (senior commander). The rank of seren was given to the organization's commander Yaakov Meridor, and the rank of aluf to David Raziel. Until his death in 1940, Ze'ev Jabotinsky was the "Commander of the Irgun" or "Supreme Commander". The Irgun saw itself as a military framework, and this found expression in, among other things, two areas: until World War II the Irgun managed to arm itself by smuggling weapons purchased in Europe, especially from Italy and Poland. At first mainly pistols and rifles were bought; later submachine guns were brought in as well. In addition to using weapons, the organization set up workshops that produced spare parts and accessories for weapons. As early as 1936, first attempts were made to manufacture weapons, mines, and other munitions in workshops and warehouses of the organization's supporters. In 1939 more advanced production began of pressure-activated mines for blowing up railway tracks. In the 1940s the organization also made frequent use of its own "petards" (scare and stun grenades) during attacks. Besides mines and grenades, several thousand Sten submachine guns and 52 mm mortars were produced in 1947. The organization also obtained weapons through "confiscations": raids aimed at seizing arms from British police and army forces. History of the organization From its very beginning the new organization was not welcomed by the Haganah, and the Histadrut denied employment to those who chose to belong to it. A rift later opened between the Irgun camp and the Yishuv leadership, and constant hostility prevailed between the sides.
In the 1930s the Irgun's activity centered on defending settlements and on armed operations against the Arabs of the Land of Israel, which constituted a "breaking of restraint" in the face of pogroms against the country's Jews. In this the Irgun distinguished itself from the Haganah, which took a defensive line. When the White Paper of 1939 was published, the Irgun began to act against the British as well, but when World War II broke out the Irgun decided not to fight the British as before, and some of its members even enlisted in the Allied armies. Disagreements over the truce with the British split a new organization off from the Irgun: Lehi (Lohamei Herut Israel, "Fighters for the Freedom of Israel"). Toward the end of the war, in light of the news of the Holocaust and the continuation of the White Paper policy restricting Jewish immigration to the Land of Israel, the organization declared a renewal of the armed struggle against British rule, with the aim of expelling it from the country and establishing an independent Hebrew state in its place. In October 1945 the Irgun joined the Jewish Resistance Movement for joint fighting against the British. That framework was dissolved after the King David Hotel bombing and the Haganah's withdrawal from the armed struggle, which the Irgun and Lehi continued. Among its notable operations were the Acre Prison break and the hanging of the sergeants. After the British left the country, during the War of Independence, Irgun members fought on various fronts; a notable operation was the capture of the Manshiya neighborhood in Jaffa. Following an agreement with the government, Irgun units joined the IDF on 1 June. On 22 June 1948 a confrontation broke out between the government and the Irgun over the Irgun's refusal to accept the government's authority and the arrival of the organization's arms ship, the Altalena, during which the IDF shelled the ship. In September 1948 the Irgun was disbanded and finally absorbed into the IDF. The Irgun's beginnings lie in a split that occurred in 1931 in the Jerusalem branch of the Haganah. A large group of branch members, led by Avraham ("Gideon") Tehomi (Zilberg), who had been branch commander until shortly before the split, felt resentment toward the Haganah leadership, especially after the 1929 riots. The group opposed the "policy of restraint" toward the Arabs of the Land of Israel; wanted to see the organization take on a more military and hierarchical, rather than militia, character; and demanded that the organization, which at that time was controlled by the Histadrut, be subordinated to the authority of the national institutions. On 10 April 1931, commanders and equipment officers announced that they refused to return to the Haganah stores weapons they had been issued earlier, during an alert ahead of the Nabi Musa festival. After negotiations with the Haganah leadership, Tehomi ordered the weapons returned in exchange for a discussion in the national institutions of the issues he and his men had raised.
But after the weapons were returned and his men's demands went unanswered, the commanders among the rebels sent notice to the leadership of the Va'ad Leumi (National Council) of their resignation from the organization, and the split thus created a new independent organization. Avraham Tehomi headed the new underground organization, alongside other founders: Eliyahu Ben-Gera, Avraham Ben-Ziv, and Avraham Giora Krichevsky, all senior Haganah commanders, members of the Hapoel Hatzair party and of the Histadrut. They were joined by Eliyahu Ben-Horin (Binder), an activist in the Revisionist movement. This group was called "the Odessa band", since its members had previously been activists in the Jewish self-defense of Odessa. In early 1924 those named above, under Ben-Horin's leadership and command, had formed a clandestine cell named "HaMif'al" within Kvutzat HaSharon, and it is possible that they were the ones who murdered Jacob Israël de Haan. It was decided to call the new body "Irgun Tzva'i Leumi" ("National Military Organization"), a name conveying its activist character as against the Haganah and its aspiration to be a military rather than a "militia" organization, as the Haganah was. In the autumn of that year the Jerusalem organization merged with armed groups affiliated with the Betar movement. Their strength was concentrated mainly in Tel Aviv, and they had begun operating in 1928, when the "Betar School for Officers and Instructors" was founded. The school's cadets had in their time also left the ranks of the Haganah, for political reasons, and the new organization called itself "Haganah Leumit" ("National Defense"). In the 1929 riots Betar youth took part in defending Tel Aviv's neighborhoods under the command of Yirmiyahu Halperin, following a call by the Tel Aviv municipality. In this organization Moshe Rosenberg was in charge of weapons and their acquisition. After the action during the riots the Tel Aviv body expanded and was called "the right-wing organization". This organization united with the new Jerusalem organization. After the organization expanded to Tel Aviv, a branch was also established in Haifa. At the end of 1932 the Haganah branch in Safed also crossed over to the Irgun's ranks. Young members of the Maccabi sports movement also joined the young organization. At that time an underground mimeographed newspaper named "HaMetzuda" ("The Fortress") began to appear, expressing the organization's activist orientation. The organization also expanded its ranks through the growth of Betar's mobilized companies, groups in which young people volunteered for two years of service in security and pioneering work. The locations of these companies also gave rise to new Irgun strongholds in the moshavot Yesud HaMa'ala, Mishmar HaYarden, Rosh Pinna, Metula, and Nahariya; in the center of the country in Hadera, Bat Shlomo, Binyamina, Givat Ada, Netanya, Herzliya, Kiryat Shaul, Petah Tikva, Kfar Saba, and Magdiel; and farther south in Rishon LeZion, Rehovot, Ness Ziona, Ekron, and Be'er Ya'akov.
In time, companies were also established and active in the Old City of Jerusalem ("the Kotel companies"), Tel Tzur, and Nahalat Yitzhak. The organization's main training centers were located in Ramat Gan, Tel Litwinsky, Kalmania (near Kfar Saba), Kastina (near Be'er Tuvia), Nahalat Yitzhak, and Ramat Tiomkin (near Netanya). In August 1933 a "supervisory committee" for the Irgun was founded, including representatives of most of the non-labor Zionist parties: Meir Grossman (of the Jewish State Party), Rabbi Meir Bar-Ilan (of Mizrachi), Emanuel Neumann or Yehoshua Suprasky (of the General Zionists), and Ze'ev Jabotinsky or Eliyahu Ben-Horin (of HaTzohar). Between 1931 and 1936 relative calm prevailed in the Land of Israel, apart from a short wave of Arab uprising against British rule in 1933 that was quickly suppressed by the authorities. In this period the Irgun operated much like the Haganah, serving as a guard organization maintaining security readiness. The two organizations cooperated, coordinating positions and even sharing intelligence. On 19 April 1936 the Great Arab Revolt broke out, also known as the riots of 1936-1939. Armed Arab gangs, reinforced by Syrian and Iraqi volunteers, carried out terror operations that included shooting ambushes on the roads and the throwing of bombs on roads and in settlements, and rioters damaged Jewish property and agriculture. In the first stage of the riots, which lasted from April to the end of October, 80 Jews were killed and 369 wounded; 19 schools, 9 infant and orphan homes, and 3 adult institutions were attacked; 380 attacks were carried out on trains and buses; and some 17,000 dunams of farmland were destroyed. At the start of the riots the Irgun, like the Haganah, generally favored restraint. Jabotinsky, whose influence on the organization's policy was already great owing to the many young Betar members who had joined it, held that for moral reasons reprisals against the violence should not be undertaken. Another reason for his support of restraint was his hope for the establishment of an overt, non-underground Jewish force. But the restraint stirred internal ferment in the Irgun, as in the Haganah. Although the Irgun had no orderly policy of reprisal operations, such actions were taken by members of the organization, at times even without the command's approval. The first actions began around April 1936. The pattern of activity was an attempt to respond "an eye for an eye": reprisals by Irgun activists against Arab terror. At times an attempt was made to match the nature or location of the reprisal to the attack that preceded it. The first action occurred a few days before the riots broke out.
In response to the murder of a Jewish traveler near Anabta, Irgun men killed two Arabs in a shack near the Petah Tikva-Yarkona road. The revenge operations continued in a similar pattern through the first period of the riots. Later in April, after Arab fire at the "Carmel" school in Tel Aviv killed a Jewish child, Irgun fighters attacked an Arab neighborhood adjacent to Kerem HaTeimanim in Tel Aviv, killing one Arab and wounding another. On 17 August the Irgun responded to shooting attacks carried out by Arabs traveling on the Jaffa-Jerusalem railway against Jews waiting at the railway crossing on Herzl Street in Tel Aviv: the day after the 16 August attack, in which a Jewish child was wounded by the fire, Irgun fighters attacked a train on this line, killing an Armenian and wounding five. In 1936 Irgun members carried out about 10 reprisal operations. In early October the gangs' activity subsided following the intervention of the British Army, and in November the Peel Commission was sent to investigate the cause of the riots and to propose future solutions. In early 1937 there were assessments in the Yishuv that prevailing currents of opinion would lead the commission to recommend the partition of western Palestine and, within that, the establishment of a Jewish state on part of that territory. The Irgun command, like the "supervisory committee", held these beliefs, as did circles in the Haganah and the Jewish Agency. As a result, voices grew louder, led by Tehomi, holding that there was no room for two separate Jewish defense organizations. Tehomi was quoted as saying: "We stand before momentous events: a Jewish state and a Jewish army. One military force is needed." These people no longer saw great differences between the organizations, especially since by then the Haganah was no longer subordinate to the Histadrut but to the national institutions (which ostensibly removed it from the control of politically interested parties). In January the organization's center decided to open negotiations on a merger with the Haganah, and in April an agreement was reached between Tehomi and the Haganah's representatives on unifying the organizations and placing them under the authority of the Jewish Agency and the Va'ad Leumi. The members of the Revisionist movement opposed the agreement, following Jabotinsky, who objected to a merger of the organizations without the return of the New Zionist Organization to the Zionist Organization. David Raziel and Avraham Stern called in a leaflet for maintaining the independent organization, whereas most members of the other civil parties in the organization supported the merger. At a special session of the organization's national council there was a majority for the merger, but owing to the disagreements it was decided to hold a referendum among the organization's members. The referendum was held on 24 April 1937, and there are conflicting assessments of its results.
Following the agreement, and despite the referendum, the organization split: between 1,200 and 1,500 people, about half the membership, including most of the senior command and members of regional committees, and with them most of the organization's weapons, returned to the Haganah. Those who remained were activists and young members, mostly rank and file, who favored the organization's independent existence; in fact the great majority of those who stayed were Betar members. Moshe Rosenberg estimated that about 1,800 people remained in the organization. The Irgun ostensibly kept a non-partisan character, but in practice the public committee dissolved, the organization's course was subsequently set according to the doctrine and decisions of Ze'ev Jabotinsky, and it became the military arm of Revisionist Zionism. On 27 April a new command was formed for the organization, staffed by Moshe Rosenberg as head of the command, Avraham Stern ("Yair") as secretary, David Raziel as commander of the Jerusalem branch, Hanoch Kalai as commander of Haifa, and Aharon Heichman as commander of Tel Aviv. On 29 June 1937 (20 Tammuz 5697), the anniversary of Herzl's death, dozens of the organization's members held a parade marking the underground's reorganization. For reasons of secrecy the event was held, ostensibly as an event of the "Brit HaHayal" organization, at a construction site in Tel Aviv. Ze'ev Jabotinsky placed Colonel Robert Bitker at the head of the organization; Bitker had previously served as Betar commissioner in China and had military experience, but he did not know the conditions of the country and did not even speak Hebrew. A few months later, apparently because of his incomplete suitability for the post, Jabotinsky replaced Bitker with Moshe Rosenberg. In the first period after the split as well, the Irgun continued to maintain relative restraint, alongside locally initiated revenge actions. On 29 June 1937 an Arab squad attacked an Egged bus on the Jerusalem-Tel Aviv road and killed a Jew, and the next day two Jews were killed near Karkur. A few hours later the Irgun responded with several actions. In Jerusalem an Arab bus leaving Lifta was attacked, and Arabs were also shot at two locations in the city. In Tel Aviv a grenade was thrown at an Arab café on HaCarmel Street, severely wounding two of its patrons, and an Arab was also wounded on Reines Street in the city. On 5 September the Irgun responded to the murder of a rabbi returning from prayer in the Old City: an explosive charge was thrown at an Arab bus leaving Lifta, wounding two women passengers and a British policeman. The relative lull in the riots ended with the publication of the Peel Commission's conclusions, which for the first time called for the partition of the country, and their outright rejection by the Arab public and leadership. At first the Irgun's revenge actions continued to bear an unofficial stamp, but restraint was officially broken on 14 November 1937.
In response to the killing of five members of a kibbutz group near Kiryat Anavim (later Ma'ale HaHamisha), the Irgun in Jerusalem, under the command of David Raziel, launched a series of attacks on Arab passersby in Jewish neighborhoods of the city, in which five Arabs were killed. Several actions were also carried out in Haifa (firing on the Wadi Nisnas neighborhood) and Herzliya. That day was recorded in the organization's history as the day restraint was broken, on which the organization moved fully, with the approval of Jabotinsky and the organization's command, to the method of "active defense", in operations for which the organization took responsibility. The British authorities responded with arrests of Betar and HaTzohar members on suspicion of belonging to the Irgun. Military courts were authorized to act under the "emergency regulations" and to impose death sentences as well. For example, Yehezkel Altman, a supernumerary policeman in the Betar company at Nahalat Yitzhak, fired, without his commanders' knowledge, on an Arab bus, which managed to slip away; by this he sought to respond to three attacks carried out the day before on Jewish vehicles on the Jerusalem road. The policeman turned himself in and was sentenced to death by the court. The sentence was later commuted to life imprisonment. The Irgun continued actions of this kind, but their scope was greatly reduced on Rosenberg's orders. For fear that the British would carry out the death penalty on a person caught carrying weapons, operations were frozen for about eight months, but opposition within the organization to this policy of restraint gradually grew. On 21 April, in response to the murder of six Jews, which included the rape of a woman and the mutilation of her body, three members of the Betar company in Rosh Pinna set out on a reprisal action without their commanders' approval. The squad fired on, and threw a grenade (which did not explode) at, an Arab bus; no one was hurt. The three were caught; two of them were sentenced to death, a sentence ultimately confirmed for only one of them, Shlomo Ben-Yosef. Numerous demonstrations throughout the country, and interventions by institutions and figures such as the president of the Zionist Organization Chaim Weizmann and the Chief Rabbi Yitzhak Isaac Herzog, did not change his sentence. Shlomo Ben-Yosef sat in his cell in full acceptance of the verdict and saw his punishment as part of the struggle for independence. Among the inscriptions he wrote on the walls of the cell was: "I am going to die, and I am not at all sorry. Why? Because I am going to die for our land. Shlomo Ben-Yosef." On 29 June 1938 he was executed, becoming the first of those who went to the gallows. The organization sanctified his figure, and in the eyes of its members he became an exemplary model. Ben-Yosef's trial and execution also decided the internal debate within the Irgun over restraint.
Many members of the organization already resented the relative restraint maintained by Rosenberg, and the resentment grew when Rosenberg left the country for Cyprus on the day of Ben-Yosef's execution. On the recommendation of the high command, Jabotinsky relieved Rosenberg of his post as commander and appointed in his place the commander of the Jerusalem district, David Raziel ("Aluf Ben-Anat"). The Irgun leadership debated whether to avenge Ben-Yosef's hanging with operations against the British authorities or against the Arabs. Although Ben-Yosef had been executed by the British, it was decided to direct the anger at reprisal operations against the Arabs and at a complete break with the policy of restraint toward Arab terror, which had likewise intensified since mid-June. Contributing to this decision were the pro-British orientation of Raziel and several other members of the leadership, fear of a harsh British response, and perhaps also an attempt to show the British that, contrary to their hopes, the execution would not deter the Irgun from further acts of revenge. The first operation was the hanging of an Arab in Haifa. From 3 July further shooting attacks and bombings were carried out in the mixed cities. On 4 July Arabs were shot behind Shaare Zedek hospital, near a Muslim cemetery in central Jerusalem, and near Beit Yisrael. In addition, a bomb was thrown at an Arab bus on Jaffa Road in Jerusalem, killing three Arabs and wounding 12. On 5 July a Jewish father and son were murdered in Jerusalem, and the next day an explosive charge thrown in response from the roof of a nearby house killed two Arabs and wounded four. The main escalation in the Irgun's operations, however, came in a wave of mass reprisal attacks in which powerful explosive charges fitted with delay mechanisms and hidden in milk cans, baskets, and oil tins were planted in Arab gathering places, especially markets. On 15 July, 10 Arabs were killed in the vegetable market near David Street in the Old City of Jerusalem, in an area that, according to the Irgun, served armed Arabs; on 25 July a bomb planted in a Haifa market killed about 50 Arabs; and on 26 August, 24 Arabs were killed in an explosion in the center of Jaffa. On 26 July an Irgun operation in the Old City of Jerusalem went wrong when Yaakov Raz attempted to plant a charge in the market. Arabs grew suspicious of him, overturned the basket with the charge he had placed at the entrance to a shop, and discovered it. They stabbed him all over his body and left him for dead, but the Mandate police brought him to the government hospital, where he was interrogated for about two weeks. When he felt his strength failing, he removed his bandages and bled to death. Raz is considered the first fatality in the Irgun's operations. In this period the tension between the Haganah and the Irgun reached a peak.
At a meeting between them on 10 July, Eliyahu Golomb implicitly warned Jabotinsky that the Haganah would be forced to act against his people in Palestine. Jabotinsky, for his part, warned publicly that if a fratricidal war broke out in the country, where the left held a majority, it would spill over into the Diaspora, where the Revisionist movement's strength was greater. On 25 July an Irgun squad in Haifa lay in ambush for Arab passersby and mistakenly killed a Mizrahi Jew whom they took for an Arab. One member of the squad, Eliyahu Rapoport, was caught by Jews, handed over to the Haganah, and held by them. In response, Irgun men abducted a Haganah commander. After about a week the two men were released, but Rapoport was arrested shortly afterwards by the British CID (according to the Irgun, after being handed over to it by the Haganah). Yet out of this tension, which had reached the brink of explosion, an understanding was born between the Irgun and the Haganah, initialed on 19 September. The Irgun undertook "to cease all operations going beyond ordinary defense" except with the approval of a parity committee of the two organizations; in return the Haganah promised to include the Irgun in the Yishuv's various defense activities. The Irgun undertook to halt attacks on Arabs for the duration of the negotiations, and these operations indeed ceased for several months. But while Jabotinsky accepted the agreement, David Ben-Gurion opposed it sharply and thwarted its adoption. Although discussions continued for several more months, the agreement failed. The Irgun's attacks resumed in February 1939, after the British Colonial Secretary, Malcolm MacDonald, announced the British government's intention to terminate the Mandate over Palestine and establish there an Arab state that would safeguard the rights of the Jews. The announcement provoked a wave of Arab rioting, during which three Jews were murdered in Haifa. The Irgun decided on a wave of reprisal operations against Arabs, aiming to pressure the British government to change its pro-Arab policy. The operations were carried out on 27 February, and dozens of Arabs were killed in them. Bombs were planted in the Arab market in Haifa and at the Haifa East railway station, and shootings and bombings took place in Jerusalem and Jaffa. On 17 May 1939 the MacDonald White Paper was published, comprising a plan for a binational state, severe restrictions on land sales to Jews, and minuscule immigration quotas. Members of the Yishuv of all streams were furious at its contents, demonstrated against it, and called it the "Paper of Treachery", seeing in it a complete British betrayal of the commitment to establish a national home for the Jews in the Land of Israel.
In response, the Irgun for the first time began operating against British targets as well, but in contrast to its attacks aimed at Arab lives, which continued in this period, the operations against the British focused on damaging property such as electricity, radio, telephone, and postal installations. Despite the effort to avoid casualties in these operations, a British sapper and two employees of the Voice of Jerusalem radio station, the announcer Malka May Weisenberg and the Arab engineer Adib Mansour, were killed as a result of charges planted in the station building (on 2 August 1939). The Irgun also began publicizing its activities and aims in street broadsides, in newspapers, and on the underground radio station "Voice of Fighting Zion". On 29 May 1939 a squad of seven Irgun operatives entered the Rex cinema in Jerusalem after buying tickets to a screening of a Tarzan film. During the screening they set off a delayed-action charge in the hall and threw additional charges from the balcony, killing 5 people and wounding 18; the operatives escaped the cinema unharmed. On the same day Irgun men also murdered Aryeh Polonski, who was suspected of informing to the British, along with a new immigrant who happened upon the scene of the killing. The British carried out numerous arrests of Betar and Hatzohar members. The organization's commander, David Raziel, was arrested on 19 May, and Hanoch Kalay (Strelitz), until then his deputy, was appointed in his place. On 26 August 1939 the organization assassinated Ralph Cairns, a British police officer who headed the Jewish Department of the CID and had himself tortured young underground members; Cairns and another British CID officer were killed by the explosion of a concealed mine laid by the underground. On 31 August the British police arrested the members of the Irgun high command while they were holding a joint meeting. Some of the detainees were tortured to extract information about the Irgun, and the Irgun published a warning that this would provoke a response on its part. The Irgun's offensive operations against Arabs and the British were its most conspicuous activities in that period, but they were the work of small squads or even individuals. Most of the organization's manpower at the time was engaged in guard duty and the defense of settlements. The Irgun's ranks were then composed of young Betar members (from its chapters or its labor companies), members of Hatzohar and the National Labor Federation, youth from the Maccabi Hatzair movement, members of the religious youth movement "Brit HaHashmonaim", and students from the nationalist associations "Yavne and Yodfat" and "El Al". In some of the colonies of Samaria, the Sharon, and southern Judea these constituted the main defense forces, and in some areas there was cooperation with Haganah members.
Among other things, Haganah members helped establish Tel Tzur, a Betar tower-and-stockade settlement. In the same period the Irgun also established itself in Europe. It set up underground cells that took part in organizing immigration convoys. The cells were composed almost exclusively of Betar members, and their main activity was military training and exercises in preparation for immigration to the Land of Israel. Ties forged with the Polish authorities led to the opening of small courses in which Irgun commanders were trained by Polish officers in advanced military subjects such as guerrilla warfare, tactics, and mine-laying. Avraham Stern ("Yair") was among the most prominent organizers of the cells in Europe. From 1937 the Polish authorities began channeling large quantities of weapons to the underground; the transfer of pistols, rifles, explosives, and ammunition ceased when the Second World War broke out. Another field in which the Irgun was active was the training of pilots, who would be able to serve in an air force in the future war of independence, at a flight school in Lod; two classes of pilots were trained there. Most of the illegal immigration of the late 1930s was carried out by the Revisionist camp, in line with Jabotinsky's "evacuation plan", according to which millions of Europe's Jews were to be evacuated as soon as possible. By contrast, until the publication of the MacDonald White Paper, the Yishuv institutions and the Jewish Agency, and Ben-Gurion in particular, shied away from an illegal immigration enterprise and its political consequences, hoping that Britain would permit large-scale legal immigration. At first the Irgun dealt only with bringing the immigrants ashore and dispersing them among the Jewish settlements, while the organization of the immigration itself was handled by Hatzohar and Betar members. The seaborne immigration began in September 1937 with the arrival of the ship "Artemisia" at the Tantura beach. From the summer of 1938 the Irgun took on further organizational roles and escorted the ships, and in February 1939 it was decided to establish a headquarters to coordinate the immigration, with an agreed division of authority among the bodies composing it. In all, the Revisionist organizations (including, as noted, the Irgun) arranged some 30 sailings of immigrant ships carrying about 20,000 people, most of whom were not caught by the British. The largest immigrant ship in this framework was the "Sakarya", which arrived in February 1940 with 2,300 immigrants aboard. When the Second World War broke out in September 1939, Ze'ev Jabotinsky hastened to declare his support for the Allies in general and Britain in particular.
From his place of detention Raziel too supported this line, and accordingly on 11 September the Irgun announced the cessation of its offensive operations against Britain, so as not to hinder her fight against "the greatest enemy of the Hebrew people in the world – German Nazism". Following the declaration, Raziel was released from detention at the end of October. Around the same time the British released most of the detained members of the Irgun, Betar, and Hatzohar, but those who opposed the line of Jabotinsky and Raziel, headed by Avraham Stern ("Yair"), remained in detention until June 1940. This caused resentment against Raziel among opponents of the pro-British line, and Raziel announced his resignation, citing his displeasure at independent activities by senior figures in the organization and even the doubts some commanders had cast on his loyalty. Nevertheless he returned to his post under pressure from the Revisionist movement, above all from Jabotinsky himself, who had full confidence in him. Upon his release Raziel worked to rehabilitate the organization, which had been badly hurt by the arrest of its members, and to tighten cooperation with the British. In that period the Irgun's position was even more pro-British than that of the Yishuv institutions, which continued to protest against the White Paper policy. The organization did not rule out enlistment in the British army, and Irgun members joined various military units. The main cooperation was in the field of intelligence: the organization's intelligence service (Mashi) passed to the British information about German and Italian agents in the country, as well as about communists (the Soviet Union at that time still being allied with Germany), and intelligence operations on behalf of the British were planned in the occupied countries of Europe and in the Middle East. Even after the outbreak of the world war, however, the British continued to enforce the White Paper laws against land sales and immigration. Within the Irgun's ranks this produced bitter disappointment and a ferment centered on views diverging from those of the Hatzach leadership, of Raziel, and of the Irgun high command. On 18 June Avraham Stern ("Yair") and the other members of the high command who had remained in detention after Raziel's release were freed, and an open rift emerged between them and the leadership of the Irgun and Hatzohar. The issues in dispute were the underground's subordination to an overt political leadership and the question of the struggle against the British. Raziel resigned again, and Stern was chosen in his place to head the high command.
Betar and Hatzohar members received the new appointment with displeasure, seeing in it a challenge to Jabotinsky's authority: whereas Raziel was completely loyal to Jabotinsky, Stern had in the past established the Irgun's clandestine cells in Poland without Jabotinsky's knowledge and against his outlook, and moreover supported detaching the organization from the political Revisionist movement. The movement's council of delegates pressed Raziel to return to his post, and he eventually agreed. Jabotinsky wrote letters to Raziel and to Stern that were circulated in the Irgun's branches and among its members, and to Stern a cable was sent ordering him to obey the reappointed Raziel. These events, however, did not prevent the organization's split. Suspicion and mistrust were sown among the members, who divided in their loyalties. In July a new organization emerged from within the Irgun, at first called "the National Military Organization in Israel" (as against "the National Military Organization in the Land of Israel"), which later took the name Lehi. Jabotinsky's death on 3 August 1940 did not halt the split, and on 14 August Raziel broke off negotiations with Stern's people. One of the Irgun's operations in the service of Britain's war effort was a sabotage mission against pro-Nazi forces in Iraq. Among those who set out on the mission were David Raziel, the organization's commander, Yaakov Sika Aharoni, and Yaakov Meridor. On 20 May 1941, in a German air raid on the Habbaniya airfield in the Baghdad area, David Raziel was killed. The loss of the commander was a heavy blow, compounding the split and Jabotinsky's death. The new Irgun leadership, headed by Yaakov Meridor, used the lull in activity to rehabilitate an organization paralyzed by the blows that had fallen upon it. In the same period a rapprochement also took place between the Irgun camp and the Jewish Agency camp. A "draft agreement on a program of Zionist action for the period of the war and the peace conference" was signed, but the uncompromising demand of David Ben-Gurion, chairman of the Jewish Agency Executive, that the Irgun accept the Agency's authority, and the Irgun's adamant refusal to accept the authority of the Yishuv's elected institutions, torpedoed the move.
Within the ranks of the Irgun and of the Haganah, views favoring resistance to British rule multiplied. Throughout the world war, even though the Yishuv stood with the Allies and sent volunteers to the fighting, the British never ceased rigidly enforcing the White Paper policy. To this were added tragedies such as the Haganah's operational mishap that sank the "Patria", drowning 216 of the roughly 1,800 refugees aboard, whom the British had intended to deport to Mauritius; the deportation of 1,584 refugees aboard the ship "Atlantic"; and the sinking of the "Struma" with its 769 refugees. At the end of 1943 a group was formed in which members of the Haganah and the Irgun worked together, aiming to create a unified fighting body without party affiliation, called "Am Lohem" (Fighting Nation). The body's first plan was to abduct the High Commissioner and deport him to Cyprus, but the Haganah exposed the group from within, and the plan was foiled by the undergrounds while still in its infancy. This episode nevertheless brought the Irgun to cease cooperation with the British. Eliyahu Lankin related in his book: "Immediately upon the failure of Am Lohem, practical discussions began in the Irgun high command about declaring war." The operation's failure also led Meridor to decide to give up his post as commander. A turning point in the Irgun's history was the arrival in the Land of Israel of Menachem Begin, who had been Betar commissioner in Poland and one of the prominent figures of the Revisionist camp. Begin arrived as a soldier in Anders' Army, which was stationed in the country on its way from the Soviet Union via Persia to the front in Europe. On arrival, in parallel with his military service at the Anders' Army headquarters, Begin became involved with the Irgun high command, and he was offered the leadership of the organization, an initiative that came, among others, from the incumbent commander, Yaakov Meridor. But Begin refused to accept the post as long as he had not been discharged from the army, and the organization continued to drift through an interim period. Begin's release from Anders' Army came at the end of 1943. He assumed command of the organization and formed a new high command in which Meridor served as his deputy, alongside Aryeh Ben-Eliezer, Eliyahu Lankin, and Shlomo Lev-Ami. Already in Poland Begin had been one of the leaders of the activist camp in the Revisionist movement, which opposed the pro-British line. The transfer of command of the Irgun into his hands sealed the end of the Irgun's pro-British period and ushered in the "Revolt" against British rule in the country.
On 1 February 1944 the Irgun published the proclamation known as the "Declaration of the Revolt". The document opens by noting that all the Zionist movements had sided with the Allies and that more than 25,000 Jews had enlisted in the British army, while the hope of establishing a Hebrew army had come to nothing. At the same time the Arabs of the Middle East stood with Germany. The Jews of Europe were imprisoned and their annihilation had begun, yet Britain permitted no rescue operations. After this statement of facts, the document declares that, as far as the Irgun is concerned, the truce with the British is over and a war against them now begins. It goes on to demand the transfer of rule over the Land of Israel to a "provisional Hebrew government" that would implement ten articles, among them the mass evacuation of European Jewry, the conclusion of alliances with every state that recognized the sovereignty of the Hebrew state (including Britain), the assurance of social justice for its inhabitants, and the granting of full equal rights to the Arab population. The Irgun opened the campaign with its ranks weak: the organization numbered only close to 1,000 people, of whom only about 200 were fighters, and its means of warfare were also scant. The organization was restructured and divided into corps: HaK, the combat corps and the organization's main fighting force; Him, the shock units; Delek, the organization's intelligence; HataM, the revolutionary propaganda corps, responsible for disseminating its message; and HaT, the planning division. The whole organization sank into deeper clandestinity, and its commanders in particular began changing residences and identities. Begin, for example, for part of the time adopted the identity of a rabbi ("Yisrael Sassover") and was at times known by names such as "Ben-Ze'ev", "Dr. Koenigshoffer", and others. The organization opened with attacks on the symbols of British rule, seeking to damage the administration's functioning as well as its prestige, while keeping the rule it had set for itself: avoidance of individual terror and an effort not to harm human life. The first attack was carried out on 12 February 1944 against the government immigration offices, which in the Yishuv's eyes symbolized the immigration decrees. The attacks took place on a Saturday night, when the buildings were empty, at the offices in the three large cities: Jerusalem, Tel Aviv, and Haifa. On 27 February the income tax offices in the three cities were blown up, the organization having defined them as "the chief instrument for the exploitation of the Hebrew worker and citizen by the government of betrayal". This attack too was carried out on a Saturday night, and advance warnings were posted near the buildings. The Tel Aviv municipality later affixed a memorial plaque to the Bank of Israel building at 69 Nahalat Binyamin Street, built on the ruins of the Mandatory income tax building demolished by the Irgun.
On 23 March the CID's national headquarters in the Russian Compound was attacked and part of it blown up. The Irgun's operations in these first months were sharply condemned by the leadership of the organized Yishuv and the Jewish Agency, which saw in them dangerous acts of provocation. At the same time Lehi too began renewing its attacks on the British. The Irgun continued to strike at CID headquarters, police stations, and Tegart forts, at times in firefights with the police. A relatively complex operation was the takeover by five fighting units of the government broadcasting station in Ramallah on 17 May. A symbolic action was the publication of warnings to British policemen not to approach the Western Wall plaza during Yom Kippur. According to Yehuda Lapidot, an Irgun member, for the first time since the beginning of their rule British policemen did not come to the plaza and did not, as in the past, prevent the Jews from sounding the traditional shofar blast. On the night after Yom Kippur the Irgun attacked four police stations in Arab population centers. To obtain weapons the organization initiated "confiscation" operations, seizing British weapons stores and spiriting the arms away; Menachem Begin gave his view of these operations in his book "The Revolt". The high command also estimated at the time that "throughout the Yishuv the feeling prevails that the direct result of our war is the prevention of disturbances on the part of the Arabs". In the same period the organization severed its ties with the New Zionist Organization and the Revisionist Party, so as not to bind its fate to overt, legal organizations. A further reason for the break was the disagreement between the organization, which now pursued an explicitly anti-British policy, and the Revisionist political bodies, most of which still held to a pro-British line as long as the war in Europe continued. In October 1944 the British began deporting hundreds of detained Irgun and Lehi members to detention camps in Africa. On 19 October, 251 detainees from Latrun were flown in 13 military aircraft to a detention camp in Asmara in Eritrea (then under British military administration). Eleven further shipments of Irgun and Lehi deportees followed. Until the exiles were returned to Israel in July 1948, escape attempts were made from the camps; the attempts did succeed in getting past the camp fences, but only nine escapees actually managed to complete their flight and return to activity in their undergrounds. Among the most prominent was Yaakov Meridor, who escaped nine times from the detention camps until he reached Europe in April 1948 (as he recounted in his book "Long Is the Road to Freedom"). Throughout the period of detention the exiles mounted acts of defiance against camp regulations, as well as hunger strikes.
While the operations of the Irgun and Lehi against the British grew more severe, the Yishuv leadership pursued a line of closer cooperation with the British in the war in Europe (the dispatch of the parachutists, the establishment of the Jewish Brigade) and of restraining operations against British rule in the country, both for the sake of the contribution to the fight against the Nazis and in the hope that this would yield political gains once the war ended. The leadership feared that the operations of the Irgun and Lehi would sabotage these efforts and drag the entire Yishuv into a struggle it did not want at that stage. There was even concern that the British response to the actions of the "dissident organizations" would be directed against the Yishuv as a whole. After prolonged contacts and attempts to persuade the Irgun and Lehi to moderate their activity against the British, and after Lehi members assassinated Lord Moyne, the British Minister of State for the Middle East, in November 1944, the campaign against the two organizations began, though it was aimed chiefly at the Irgun owing to behind-the-scenes understandings reached with Lehi. During what became known as the "Saison" (hunting season), those suspected of belonging to the underground or supporting it were expelled from institutions such as schools, workplaces, and the General Sick Fund; above all, Haganah and Palmach members detained, abducted, imprisoned, and interrogated Irgun and Lehi members and handed them, or incriminating details about them, to the British CID. Among those handed over to the CID were the Irgun high command members Yaakov Meridor, Shlomo Lev-Ami, and Eliyahu Lankin. From the very start of these operations the Irgun high command ordered its members at all levels to exercise complete restraint toward the Jews carrying out the Saison. Although there were opponents of this strict policy, Menachem Begin threw his full weight behind preventing a "war of brothers". In the wake of the Saison the Irgun's operations ceased for several months, until the end of the war in Europe, but the organization was not destroyed. Its recovery became evident when, in May 1945, it resumed sabotage operations jointly with Lehi against oil pipelines, telephone and telegraph lines, and railway bridges. In the months after the war in Europe ended in May 1945, a rapprochement began between the Irgun and Lehi on the one hand and the Haganah on the other over the question of the attitude toward British rule. The Yishuv leadership hoped that with the war over Britain would reward the Yishuv for its wartime support. Hopes rose when the Labour Party came to power in Britain at the end of July, given the party's earlier declarations supporting Zionism and the abolition of the White Paper. But once Labour was in power the hopes proved false: government policy on the question, led by Foreign Secretary Ernest Bevin, continued the White Paper line and in particular went on restricting immigration.
In the face of the disappointment with the British, voices within the Haganah and the Yishuv institutions calling for a struggle against the regime grew stronger. One of the principal disputes between the Yishuv leadership and the Irgun and Lehi thus narrowed greatly, and the way was paved for cooperation. Negotiations between the undergrounds began in August, agreement was reached in October, and the "Hebrew Resistance Movement" was established, a body uniting the three undergrounds in armed struggle against the British. The Irgun and Lehi undertook to coordinate all their operations with the Haganah (except operations for obtaining weapons) and even received missions from the resistance movement's leadership. Over ten months the Irgun and Lehi carried out 19 attack operations, while the Haganah and Palmach carried out ten, most of them large (the largest and best known being the Night of the Bridges). The Haganah also brought in, illegally, more than 13,000 immigrants. Issue 51 of the newspaper Herut from 1945 conveys the mood of the organization's members in song: "Today, little Sarah, we meet as I set out for war, to establish the state on both banks of the Jordan. Cut off your braid and gird your belt; embrace me, take a machine gun, and join me in the ranks. On the barricades we shall meet, on the barricades of freedom borne in blood and fire. Rifle to rifle, barrel shall salute, bullet answer bullet; on the barricades, on the barricades we shall meet. And if by hanging I give my life to the nation, do not weep, for thus my fate was decreed. Wipe away your tear, press the machine gun to your heart, and choose for yourself another of the men of my squad." The first joint operation of the resistance movement was the "Night of the Trains" on 1 November 1945: the Irgun and Lehi together attacked the Lod railway station, while Haganah and Palmach forces damaged the rail network at hundreds of points along its length. Tension with the authorities rose as operations against their installations multiplied. The Irgun and Lehi mounted further joint operations: attacks on the army camps at Beit Nabala and Rosh HaAyin and on the air force camp in north Tel Aviv, where the Irgun for the first time carried out a raid from the sea. At the end of December, together with Lehi, it attacked and destroyed the British CID centers in Jaffa and the Russian Compound. The organization also carried out six "confiscation operations" in which it obtained dozens of machine guns and submachine guns, mortars, and ammunition. In the "silver train" robbery of 12 January 1946 the Irgun took 35,000 Palestine pounds, an enormous sum that served to finance the underground's operations. On 25 February 1946 came the attack on the British airfields: Irgun fighters penetrated Lod airfield and blew up 11 military aircraft; at Qastina airfield (today Hatzor air base) Irgun fighters destroyed 12 aircraft; and at the airfield near Kfar Sirkin (today Sirkin camp) Lehi fighters destroyed six aircraft.
Ten days later, thirty Irgun men disguised as "kalaniyot" ("anemones", the nickname for soldiers of the British 6th Airborne Division) entered the Sarafand camp and loaded a truck with ammunition and weapons. On their way out fire was opened on them; two fighters, Michael Ashbel and Yosef Simchon, were severely wounded and captured while being evacuated to the Gilad hospital in Tel Aviv. After they were sentenced to hang, the Irgun abducted five British army officers as hostages. The condemned men's sentences were commuted and the Irgun released the officers. In the death cell Michael Ashbel wrote the song "Alei Barikadot" ("On the Barricades"), which became one of the marching songs best loved by the organization's members. Ashbel and Simchon turned their trial into a political stage, using it for speeches that received media exposure outside the country. On 23 April the attack on the Ramat Gan police station was carried out by 40 of the organization's fighters, some disguised as Arab prisoners being escorted by supposed policemen. After entering unhindered, the squads drew their weapons, took over the station, and began loading weapons from it onto a truck. In the middle of the operation British reinforcements arrived, and under fire the Irgun men continued loading the arms and began to withdraw. During the retreat the commander of the break-in party, Yaakov Zlotnik, was killed, and his body was left hanging on the barbed-wire fence. Another fighter, Yitzhak Bilu, was killed during the diversionary action: an explosive charge fell from his hand and he threw himself on it so that it would not harm his comrades, who were carrying incendiary materials. Dov Gruner was wounded and captured in the operation. He was put on trial, and after being sentenced to death by hanging he refused to sign a request for clemency and became one of the olei hagardom. In the wake of the resistance movement's operations, British policy toward the Yishuv hardened. The emergency regulations were made far more severe, and tens of thousands of security personnel were stationed in the country. Mass arrests were made, searches were conducted in Jewish settlements, and settlement operations (such as the ascent to Biriya) were forcibly disrupted. The climax came in the events of the Black Sabbath on 29 June 1946, in which thousands of people were arrested, among them many of the Yishuv's leaders. In response to the Black Sabbath the Haganah's high command, headed by Moshe Sneh, decided to implement within the framework of the resistance movement a drastic plan of action against the British, one it had previously refused to approve. The Irgun's role in the plan was to blow up the southern wing of the King David Hotel, which housed the government Secretariat and the military headquarters. During preparations for the operation Chaim Weizmann intervened and demanded that the resistance movement's operations cease. Sneh asked Begin to postpone the operation without explaining the reason for the delay, but in the end the Irgun carried it out on 22 July 1946.
Milk cans containing explosive charges connected to a delay mechanism were placed in a restaurant on the ground floor, next to the building's support columns. At 12:37 the charges went off and the southern wing of the building collapsed on its occupants. Unlike many other Irgun operations against government institutions, the bombing of the King David Hotel took place in the middle of the day, when the building was full of workers, and a warning sent in advance went unheeded. About 90 people were killed in the explosion, of them about 15 Jews. The operation and its results shocked the Yishuv. The Agency and the Haganah camp disavowed the operation, in which many civilians had been killed, and denounced it sharply. At the time the public was not aware of the full course of the affair, parts of which remain in dispute to this day. The British, for example, initially denied that a warning message had been received, and the connection of the resistance movement and the Haganah to the operation was likewise not public knowledge; only a year later did the Irgun publish its version of the involvement of the resistance movement's command and its responsibility for the operation. The operation, its tragic results, and the reactions that followed undermined anew the relations between the Haganah and the Yishuv leadership on the one hand and the Irgun on the other, and led to the dissolution of the resistance movement. The bombing of the King David Hotel and the arrest of Jewish Agency figures and Yishuv notables on the "Black Sabbath" brought the Haganah to withdraw from the armed struggle against the British; the heads of the Agency and the Yishuv were released from the Latrun detention camp. From then until the end of the Mandate, the struggle against the British was waged by the two smaller undergrounds. In early September 1946 the organization renewed its attacks on British installations, the main targets being railway lines, communication lines, and bridges, as well as railway stations; a prominent operation was the attack on the Jerusalem railway station. In the Irgun's eyes these were legitimate targets because they served the British for moving their forces. The attacks led the British to halt rail traffic at night for certain periods. Throughout the period the organization published notices in the three languages warning the traveling public not to use trains that stood in danger of attack. In December 1946 a young Irgun member was sentenced to 18 years' imprisonment and 18 lashes. The Irgun carried out a threat it had published: after the prisoner was flogged, its men abducted British officers in various cities around the country and flogged them. After this operation, known as the "Night of the Beatings", no more Jews were flogged by the British. Britain took the events directed against it with the utmost seriousness. Many British families were moved from their homes into military compounds, and later returned to their country.
Subsequently all British civilians, chiefly women and children, were sent out of the country. In the three large cities the British created fenced and heavily guarded "security zones". The Jews nicknamed these fortified zones "Bevingrad", after the British Foreign Secretary, Ernest Bevin. In Jerusalem four such zones were established, the central one in the Russian Compound area. The Irgun stepped up its operations, and from 19 February to 3 March it attacked 18 British targets: army camps, compounds, routes, and vehicles; on one day alone, five army camps were attacked with gunfire and mortars. A prominent attack was the bombing of the officers' club in Jerusalem (Goldschmidt House), which stood inside one of the guarded "Bevingrad" zones. The destruction of the building killed 17 people, among them senior CID officers; in this operation, for the first time, the Irgun refrained from issuing an advance warning before the explosion. In response the British imposed a curfew and martial law on parts of the country, enforced by some 20,000 soldiers. The operation and its dead caused shock in Britain. On 3 March the leader of the opposition in the House of Commons, Winston Churchill, asked: "Since an army four times the size of that in India is stationed in Palestine, and the maintenance of one hundred thousand soldiers costs 40 million pounds sterling a year, what sense is there in continuing the bloodshed?" Part of the British press likewise supported leaving the country, and the UN was asked to expedite the formation of the special committee ahead of the discussion of the Palestine question. During the period of martial law the Irgun and Lehi carried out 68 operations, many of them against army camps. Among other actions, the Irgun attacked the army building and headquarters offices in the Schneller camp inside "Bevingrad" in Jerusalem, breaching the outer fortifications. The attack, which overcame the extensive security measures, created a media echo, and martial law was lifted four days later. After the end of the Second World War, at the beginning of 1946, the Irgun high command had decided to renew its activity in Europe and open a "second front" against British rule. The task was assigned to Eli Tavin, and the first base was established in Italy, where close to a thousand organized Betar members had arrived together with the Jewish refugees from Eastern Europe, Germany, and Austria. Once organized, it was decided to begin operational activity; the first target chosen was the British embassy in Rome, in response to the locking of the Land of Israel against the surviving remnant of the Holocaust. The Irgun re-established cells around the world, and by 1948 they operated in 23 countries, including China, South Africa, and North Africa, at times acting against British missions or conducting propaganda campaigns against Britain.
On 31 October 1946 Irgun operatives blew up the British embassy in Rome, in response to the locking of the gates of the Land of Israel against the surviving remnant. During the night three fighters placed explosives beside the main door of the embassy building, and its central section was destroyed; two Italians who happened into the area were lightly wounded. In response to the intensification of Irgun activity, the British began to employ the death penalty, which they had used against Jews only twice before, in the late 1930s. The first death sentence, against Dov Gruner, was handed down on 1 January 1947. His execution was set for 28 January but was postponed after Irgun men abducted a British judge and a CID officer, who were released once the postponement of the hanging was announced. In February–April several more death sentences were passed. On 16 April Dov Gruner and the three Irgun men caught carrying weapons during the "Night of the Beatings" – Yehiel Dresner, Eliezer Kashani, and Mordechai Alkahi – were hanged in Acre prison. On 21 April Meir Feinstein and the Lehi man Moshe Barazani blew themselves up with a grenade, a few hours before they were due to go to the gallows in the central prison in Jerusalem. On 4 May came one of the organization's largest operations: the break-in at Acre prison. A unit of 23 men, aided by the Irgun and Lehi prisoners held in the fortress, breached it and freed 41 underground prisoners (some were caught outside the prison and some killed during the escape); many Arab criminal prisoners escaped alongside them. The operation won worldwide attention. Three of the attackers who were caught – Meir Nakar, Avshalom Haviv, and Yaakov Weiss – were tried, sentenced to death, and hanged. The Sergeants affair was the climax of a series of Irgun abductions of Britons. After the death sentences of the three underground prisoners caught in the Acre break-in were confirmed, the Irgun decided to try to save them by abducting two British sergeants in the streets of Netanya. British forces conducted extensive searches and imposed a curfew on the area, but did not find the abducted sergeants. On 29 July Meir Nakar, Avshalom Haviv, and Yaakov Weiss went to the gallows. Some thirteen hours later the two sergeants were hanged at the place where they had been hidden, and the next day their bodies were strung up on eucalyptus trees in a grove near Netanya. When the bodies were found, one of them was shattered by the explosion of a mine that had been laid beside them. The action caused shock and fury in Britain. Many in the Jewish Yishuv were likewise appalled by the deed, and the operation was sharply condemned by the Yishuv's leaders. A further reason for the anger at the Irgun was the operation's timing.
At that very time the affair of the immigrant ship "Exodus" was unfolding, and the Yishuv leadership felt that the hanging of the sergeants was diverting media attention and damaging the propaganda success the Yishuv had achieved during the Exodus affair. From this operation until the end of the British Mandate in Palestine, the British carried out no further executions. In the course of 1947 Britain referred the question of Palestine to the United Nations, a process that ended in the decision to terminate the Mandate. The British government announced that it would request a UN discussion in February, and that discussion led in May to the establishment of UNSCOP (the United Nations Special Committee on Palestine). The committee submitted its conclusions at the beginning of September, recommending the termination of the British Mandate over the country; the majority opinion formed the basis of the partition plan. On 29 November 1947 the UN General Assembly adopted the partition plan and determined that the Mandate would end no later than 1 August 1948. The British government decided not to cooperate with the UN in implementing the plan, but rather to end the Mandate and withdraw its forces by 15 May. These developments brought Irgun activity against British rule to a near-complete halt. The last large operation was the bombing of the British police building in Haifa on 29 September; thereafter the operations carried out against British forces were mainly aimed at obtaining weapons and ammunition, although at the end of February 1948 Irgun men attacked the military court building in Jerusalem. Irgun and Lehi members regard the British departure from the country as an achievement of the armed struggle they waged – hundreds of operations that claimed the lives of more than 300 British security personnel, foremost among them operations such as the King David Hotel bombing, the Night of the Beatings, the Acre prison break, and the hanging of the sergeants, which shocked public opinion in Britain and embarrassed its government. In contrast to the Yishuv institutions, the Irgun rejected the partition plan adopted by the General Assembly on 29 November 1947, declaring that "the dismemberment of our homeland is illegal". The day after the partition resolution the Arabs of Palestine opened attacks on the Jewish Yishuv, and the first stage of the War of Independence began. The first attacks targeted Jewish populations in Jerusalem, neighborhoods near Jaffa, Bat Yam, Holon, and the Hatikva quarter, and Jewish travelers on the roads. With the decision to end the Mandate, the struggle against the British, on which the Irgun had concentrated almost exclusively since the declaration of the "Revolt", effectively came to an end, and the organization had to adapt itself to the new challenges.
At a meeting with the organization's commanders held in November, Begin said that "the Irgun must transform itself within a short time from a small underground body into a regular army, which will take upon itself the task of carrying the war beyond the partition borders". At the outbreak of the war the organization numbered about 3,000 people, of whom roughly a third were fighters, compared with tens of thousands of members in the Haganah and hundreds in Lehi. It possessed 2 mortars, 30 machine guns, and about 500 personal weapons (rifles, submachine guns, and pistols). As the Irgun emerged from the underground and moved to almost open activity (for example, establishing camps in Ramat Gan and Petah Tikva), it expanded its ranks and took in new volunteers, in parallel with the general mobilization by the Yishuv institutions for defense, but it had difficulty training the new members, some of whom had joined in order to avoid conscription into the Haganah. The organization also launched a fundraising drive called the "Iron Fund", parallel to the official institutions' fundraising in the framework of "Kofer HaYishuv". The organization's order of battle, which had sufficed for guerrilla operations against British rule, did not allow it to meet the goal of maintaining a regular military force that would conquer areas beyond the partition lines and defend the Jewish community in Jerusalem. From December the Irgun began carrying out attacks against the Arab population, with the aim of shifting the fighting from the Jewish areas to the Arab areas and the Arab rear. It thereby returned to its pattern of operations from the days of the Arab Revolt in the 1930s, but this time the Haganah too abandoned the policy of restraint and acted similarly, especially after the fall of the Convoy of 35. The Irgun attacked the Arab villages of a-Tira in the Carmel region, Yahudiya near Petah Tikva, and Shuafat near Jerusalem. In Jerusalem its men threw a bomb among Arabs waiting for a bus in the Jaffa Gate plaza, killing many people. The Irgun also attacked in Haifa, in Wadi Rushmiya, and in the Abu Kabir neighborhood near Jaffa. On 29 December Irgun units reached Jaffa by boat and engaged Arabs in a battle of gunfire and grenades. These operations caused deaths and property damage among the Arabs. The next day, in Haifa Bay, the organization's men fired from a passing car on Arab day-laborers waiting at the entrance to the oil refineries, killing 7 and wounding dozens. In response, Arab workers attacked Jewish workers inside the refinery compound and killed 39 of them (for which the Haganah carried out a reprisal operation in Balad al-Sheikh). In response to the Irgun's attack, the Haganah decided to abduct the Irgun men who had been involved in it; in the course of these abductions the Irgun member Yedidia Segal was seized and died, an episode that nearly ignited a fratricidal war between the Haganah and the Irgun.
On 1 January 1948 the Irgun attacked again in Jaffa, its men infiltrating the city disguised as Britons. At the end of January the organization attacked Beit Nabala, a base of operations for many Arab fighters. In February the Irgun attacked, among other targets, transport near Yehudiya, Yazur, and Ramla. The organization's fighters took part in battles against Arab gunmen in Ramla and Qalqilya. In March the organization attacked Qaqun, near Tulkarm, a village housing a concentration of Arab gunmen. Shortly after the adoption of the partition plan, negotiations began between the Irgun and the Jewish Agency Executive on cooperation between the organization and the Haganah, in view of the challenges of the war. The labour parties and the Haganah command opposed the attempt to reach an agreement. Ben-Gurion declared that "if the dissidents disband their organizations and hand over their weapons, each of them will be able to volunteer for the defence of the Yishuv like any other Jew, and if found suitable will be enlisted in its ranks". The civic circles in the Yishuv worked for an agreement, and their leaders, among them Yitzhak Gruenbaum and Yehuda Leib Fishman Maimon, headed the Agency's delegation in the negotiations, which dragged on because of the considerable opposition to them. Even before the parties reached an overall agreement, semi-official local arrangements for cooperation took shape on the various fronts, where the Irgun operated alongside the Haganah and it was necessary to prevent duplication and mutual interference. Such arrangements were reached in Jerusalem, Tel Aviv, Petah Tikva, and Netanya. In Haifa an Irgun force took part in the capture of the city on 21 April. At the same time, clashes also occurred between Irgun and Haganah members. In Haifa in January there was an affair of mutual abductions of the organizations' members. One of the Irgun abductees, Yedidia Segal, was found dead near the Arab village of Tira. The Haganah claimed he had escaped and been killed by villagers, while the Irgun claimed he had died under interrogation by Haganah men. On 22 February Haganah members foiled an Irgun attempt to rob a bank branch in Tel Aviv, and four days later a confrontation took place between the organizations at Mograbi Square in Tel Aviv, while the Irgun was holding a rally there, during which Haganah men threw stun grenades and Irgun men demonstratively loaded their rifles. Representatives of the two sides reached an agreement on 7 March and submitted it to the institutions for approval. The agreement was approved at a meeting of the Zionist Executive Committee on 12 April by the votes of the General Zionists, the Mizrachi, and the Revisionists, against the votes of the labour parties. After several more delays the agreement came into force on 27 April.
The agreement stipulated that Irgun forces would remain independent but would accept the authority of the Haganah's front commanders, that Irgun operations would require the approval of the Haganah command, and that the Irgun would also take on missions assigned to it by the Haganah command. One of the Irgun's most prominent operations in the War of Independence was the capture of the village of Deir Yassin, west of Jerusalem. The attack on the village took place in parallel with the Haganah's Operation Nachshon to open the road to Jerusalem, and with the end of the Haganah's battles to capture the Castel (the battle of the Castel). The Irgun's men also wanted to take part in this new phase of the fighting, in which the Jewish forces moved from defence and hit-and-run attacks to organized offensives that included capturing and holding territory. The Irgun chose the village of Deir Yassin as its target, acting with the knowledge and qualified approval of the Haganah commander in Jerusalem, David Shaltiel. On the morning of 9 April, 72 Irgun men and 40 Lehi men set out on the operation. The battle was hard because of the fighters' inexperience in built-up-area combat, and five of the attacking force were killed. During the battle the Irgun and Lehi men were forced to request help from the Haganah command, which sent a Palmach force that provided mortar fire support, a weapon the Irgun and Lehi did not possess, and helped evacuate the wounded. The battle took place while most of the civilian population was still in the village, and many of its inhabitants were killed in its course. The number of Arab dead published by the Irgun and Lehi was 240, and this figure became largely fixed in public consciousness; current estimates, however, put the number at about 100 to 120 dead. Fearing an air attack planned by the British forces, the Irgun and Lehi forces evacuated the village about three days after capturing it and handed it over to Haganah forces; only upon their entry were the Arab dead buried. Within a short time the events of the village's capture stood at the centre of a many-sided propaganda war, in which all parties exaggerated the events for political purposes. The Irgun and Lehi deliberately boasted of a number of Arab casualties larger than the truth, both to demoralize the Arab public and to raise their prestige as a fighting force among the Jewish public. The Arab leadership adopted the inflated figures and circulated accounts of atrocities in order to arouse a desire for revenge against the Jewish public. The Haganah and the left likewise used the exaggerated figures and descriptions to attack the "dissidents" and discredit them among the Jewish public in the country. The events in the village, some of which remain the subject of historical dispute to this day, and the propaganda war around them, had far-reaching and long-lasting effects.
The events in the village, and the Arab propaganda surrounding them, which told of atrocities some of which were invented, instilled fear in the Arab public and accelerated its flight. Thus the Deir Yassin affair became a turning point in the war and one of the foundation stones of the Nakba. After the Haganah captured Tiberias and Haifa in mid-April, the Irgun wanted to credit itself with the capture of an Arab city as well. Jaffa, from which shots had been fired at neighbouring Tel Aviv, was chosen as the target, and almost all of the organization's resources in the coastal plain region were mobilized for its capture. Under the partition borders Jaffa was to be an enclave of the Arab state within a Jewish area, which made it all the more attractive in the eyes of the Irgun, which refused to recognize partition. Because the city lay within the territory designated for the Arab state, unlike Tiberias and Haifa, the British continued to control it and did not allow Jewish forces to take it over. In deciding to set out to capture Jaffa, the Irgun acted independently, contrary to the Haganah's plans to encircle the city until the British departure rather than capture it, and separately from Lehi, which wished to join the operation but was refused. On 25 April, Irgun units comprising about 600 fighters set out from the organization's camp in Ramat Gan to attack Arab Jaffa. The battles against the armed Arab groups were hard, and the organization subsequently met fierce resistance from the British as well, who deployed armoured vehicles and even air power against it. This was the first time the organization fought directly against regular British army forces. Under the command of Amichai Paglin ("Gidi"), the organization's chief operations officer, the Irgun captured, after a hard battle, the Manshiya neighbourhood that threatened Tel Aviv, advanced to the sea and toward the port, and at the same time shelled the neighbourhoods to the south with mortars. These operations caused Arab residents to flee the city; about 30 underground fighters were killed in their course, most by British fire. The British demanded the evacuation of captured Manshiya, but following an agreement with the Haganah, in which it was promised that there would be no withdrawal from Jaffa under British pressure, the organization transferred custody of Manshiya, which it had captured, to the Haganah, and with that the Irgun's independent offensive on Jaffa ended. Although in this offensive the Irgun failed to capture the whole city, contenting itself with the capture of the Manshiya neighbourhood, its action significantly accelerated the flight of Jaffa's Arab inhabitants and contributed greatly to the city's fall.
On 26 April the countrywide agreement between the Haganah and the Irgun came into force, and Irgun forces were integrated into the Haganah's Operation Hametz, during which, in accordance with the Haganah's original plan, the villages around Jaffa were captured and the city was encircled; it finally surrendered on 13 May, after Haganah forces entered and took control of the rest of the city from the south. The operation in Jaffa was regarded by the Irgun as one of the organization's most prominent achievements. Since it had captured Manshiya down to the sea and caused the population to flee the rest of the city, the organization credited itself with the capture of Jaffa. The Etzel Museum of 1948 ("Beit Gidi", located today in Charles Clore Park) was built in one of the few houses remaining from the Manshiya neighbourhood. From 27 April, the date the agreement with the Haganah came into force, until the establishment of the IDF, the Irgun continued its independent existence, but cooperated with the Haganah and took part in its operations. At the beginning of May the Irgun captured the village of Yehudiya (present-day Yehud) and several villages in the Ramot Menashe area. From 16 May onward the Irgun fought battles in Ramla, which at that stage failed with heavy losses to the organization. In May the state was declared, and at the end of that month the Israel Defense Forces was established on the basis of the Haganah. On 1 June an agreement was signed between Menachem Begin and Yisrael Galili. Begin announced it to the Irgun's fighters in a broadcast on the radio station "Kol Zion HaLochemet": "There is no need for a Hebrew underground. In the State of Israel we shall be soldiers and builders. We shall obey its laws, for they are our laws. We shall respect its government, for it is our government." Irgun members began to be integrated into the IDF in separate units. They received personal numbers beginning with the digits "93", identifying the bearer as an Irgun member. By the end of June the process was completed, under the management of a staff established for that purpose at the organization's headquarters. Members of the organization took part, within their battalions attached to the IDF, in repelling the Egyptians in the Isdud area and in the capture of Yibna on 4 June. At the end of October they fought for and captured the village of Tarshiha in the north. Small Irgun forces also tried to defend old Mishmar HaYarden, which was shelled for three weeks; but in the absence of reinforcement from IDF forces in the area the moshava, which was identified with the Betar movement, fell to the Syrian army. In besieged Jerusalem, which was nominally outside the jurisdiction of the Provisional Government, the Irgun and Lehi continued to operate as independent bodies and shared sectors with the Haganah. The "Altalena" (named after Jabotinsky's pen name) was a ship organized by Irgun members in France, carrying about 900 immigrants intended for enlistment in Irgun units, together with large quantities of weapons and ammunition purchased during 1948.
The ship was supposed to arrive on the day the Mandate ended, but owing to delays it set out from France only on 11 June, after the Irgun had already joined the IDF (except in Jerusalem) and had undertaken to refrain from independent arms procurement. A dispute raged between Begin and the government over what would be done with the weapons. Begin demanded that a fifth of the weapons and ammunition be directed to Irgun forces in Jerusalem, and the rest to the Irgun units that had already joined the IDF. Ben-Gurion agreed only that a fifth of the weapons would be directed to Jerusalem (though not necessarily to Irgun members), and insisted that the rest be deposited directly with the IDF. The ship reached the country's shores on 20 June, during the first truce, and was directed to the beach at Kfar Vitkin, where it was received by Irgun members headed by Menachem Begin. At Kfar Vitkin most of the immigrants disembarked and the unloading of the weapons began. In the absence of agreement on the fate of the weapons, an armed confrontation developed between Irgun members and IDF forces sent to seize the equipment unloaded from the ship. In the exchange of fire six Irgun men and two IDF soldiers were killed. The ship, carrying about a hundred Irgun men headed by Begin, left the site and sailed to the Tel Aviv shore, anchoring opposite Palmach headquarters on 22 June. On the shore gathered Irgun men on one side and IDF forces, particularly Palmach, on the other, and a battle broke out between them. The affair ended when the ship was shelled from the shore on Ben-Gurion's orders. One of the shells set it alight, the Irgun men were forced to abandon it, and eventually the ammunition in its hold began to explode. In this battle another ten Irgun men and one IDF soldier were killed. In the wake of the affair about 200 Irgun members were arrested for several weeks, the independent Irgun units in the IDF were completely disbanded, and their members were dispersed among regular army units. Owing to the uncertainty over the status of Jerusalem, which under the partition plan was to be under international control, the Irgun (like Lehi) continued to operate there independently, its units known as the "Jerusalem Battalion". Its fighters in the city numbered about 400, under the command of Mordechai Ra'anan. Although the Irgun's agreements with the Haganah (and later with the Provisional Government) did not cover Jerusalem, cooperation between the organization and the Haganah continued there. The organization took part in the preparations for the departure of British forces from the city, within Operation Kilshon, whose aim was to take control of the areas slated for evacuation. The Irgun was allotted the areas of the police school and Sheikh Jarrah. On 14 May the British evacuated. Irgun fighters stormed the Generali Building in the security zone and raised the national flag over it.
Its men also seized and held the CID buildings, the prison in the Russian Compound, and the courthouse, and together with the Haganah held additional objectives. At midday the Irgun captured the police school, but withdrew from Sheikh Jarrah after capturing only part of it, as a result of battles against Arab forces that cost it casualties. Irgun forces also took part in the defence of and fighting for the Jewish Quarter of the Old City, with two squads operating in coordination with the Haganah commander. By the surrender, five Irgun fighters had been killed and 19 taken prisoner (out of a total of 35 Jewish fighters taken captive). After the Arab armies invaded Palestine, the organization's forces defended Ramat Rachel, after the kibbutz had twice been abandoned under attacks by Arab forces and an Egyptian force. On 25 May two Irgun platoons entered the battle for the kibbutz, and after hard fighting between the positions they repelled the attack and captured the ground. After the first truce, the Irgun's strength in the Jerusalem area was doubled to about 900 men, through new volunteers and members of the organization who arrived to help capture the city. On 14 July Irgun fighters, together with young Gadna members of the Haganah, captured the village of al-Maliha (later the Malha neighbourhood). The next day the Arabs mounted a counterattack that failed, but dozens of the organization's men were killed or wounded as a result. On 17 July, IDF forces in Jerusalem, together with Irgun and Lehi forces, jointly launched Operation Kedem, intended to capture the Old City. A field corps battalion of the Etzioni Brigade was charged with breaching the New Gate, a Lehi company was responsible for a breach between the New Gate and Jaffa Gate, and two Irgun companies were positioned near the New Gate. Irgun sappers breached the gate and its forces advanced a short distance into the Old City, but their attack was halted by heavy fire. The demolition charges of the Haganah and Lehi failed to detonate, and an order came to withdraw from the campaign to capture the Old City. After the assassination of Count Folke Bernadotte by an organization of Lehi veterans, the government issued an ultimatum to the Irgun in Jerusalem to disband. On 21 September the organization's soldiers and commanders reported to the IDF induction bodies and joined its ranks. After the Irgun was disbanded in the summer of 1948, Menachem Begin initiated the founding of "the Herut Movement, founded by the Irgun Zvai Leumi" and stood at its head. In its early days Herut demanded the continuation of the fighting, the annulment of the partition plan, and no negotiations with the Arab states. In the elections to the Constituent Assembly in January 1949 the movement won 14 seats in the Knesset and became the fourth-largest faction.
Also competing in the elections was a list of the Revisionist Party, from which the Irgun had sprung, but that list did not pass the electoral threshold, and members of the Revisionist movement joined the Herut movement in 1950. Herut thus became the successor of the entire Revisionist movement, and the Irgun's historical standing as the central stream of Revisionist Zionism was thereby fixed. In its early years the Herut movement was pushed to the margins of the political system by Ben-Gurion and Mapai, under the slogan "without Herut and Maki". Over the years, however, it entered the public consensus, until in 1977 it stood at the centre of the Likud list that won the elections to the Ninth Knesset and brought the Irgun's commander, Menachem Begin, to the premiership of Israel. Language and internal terminology Besides the code names used in place of the names of the organization's commanders and members, extensive use was made of internal nicknames and expressions for various places, roles, and bodies; a small selection of them follows. The organization's legacy and commemoration In the first years of the State of Israel there was no official state expression of the legacy of the Irgun and Lehi, and their songs were not played on the radio. The dominance of Mapai, which had begun in the days before the founding of the state, continued through its first decades. The Revisionist camp as a whole, and the "Fighting Family" (the name by which the community of Irgun veterans called itself) in particular, had reasons to believe that the policy of pushing them out of the story of national rebirth was being continued deliberately. As early as 1949 Begin demanded that the government grant Irgun and Lehi veterans the same rights it had granted Haganah members. At the same time the Herut movement took it upon itself to care for the needs of veteran underground fighters who were not supported by the government. Despite the absence of government support, commemoration began as early as the start of the 1950s. Thus the monument to Dov Gruner and those executed on the gallows, designed by Chana Orloff, was erected in Ramat Gan with the support of mayor Avraham Krinitzi. A large crowd attended its dedication in 1954, without a government representative. Acre Prison, which the organization had breached in a major operation and where its fighters were hanged, was handed over to the Ministry of Health and became a psychiatric hospital; only in 1984 was the hospital vacated. Over the years the camp to which the Irgun's members belonged made its way into the national consensus. In 1964 Prime Minister Levi Eshkol decided to bring the remains of Ze'ev Jabotinsky for burial in Israel (Jabotinsky's request had been that his remains be brought to the Land of Israel only on the order of a Hebrew government). The ceremony took on a state character, attended by a large crowd from various circles, and the prime minister himself took part in a portion of it.
The peak of this process came with the change of government in the "Mahapach" (upheaval) of 1977. By decision of the Israeli government headed by Begin, a service decoration for the organization's members was established, called the "Etzel Ribbon". Distribution of this decoration began in 1979, and it was the fourth service decoration of the State of Israel, after the Haganah Ribbon, the Volunteer Ribbon, and the Mishmar Ribbon (the Haganah fighter's decoration had been distributed from 1958, and the decoration for the state's fighters from 1968). The organization has not been absent from the historical literature, from the heritage of the struggle for independence, or from documentation and commemoration enterprises. Dozens of books, from private and state publishers, recount the organization's activities throughout its years of operation. The "Etzel Museum" is housed in Metzudat Ze'ev; it presents the history of the organization and its operations, together with an original film illustrating the Irgun's operations, which was screened in its day in the United States to raise funds for the struggle. A special wing is devoted to the detention camps, in Israel and abroad, in which members of the organization were imprisoned; the wing also displays the forged documents used by Irgun members who escaped from the detention camp in Eritrea in Africa. The organization's history is also presented at the "Etzel Museum of 1948" on the outskirts of Jaffa and at the "Underground Prisoners Museum" in Jerusalem and in Acre. Many street names and memorial plaques in Israel bear the memory of the underground and its members. The organization's veterans are organized in the "Brit Chayalei HaEtzel" (Irgun Soldiers' Alliance), which is also engaged in documentation enterprises. See also Further reading External links Footnotes
========================================
[SOURCE: https://www.fast.ai/posts/2023-05-31-extinction.html] | [TOKENS: 1097]
Is Avoiding Extinction from AI Really an Urgent Priority? Seth Lazar, Jeremy Howard, & Arvind Narayanan May 30, 2023 This article is the result of a collaboration between philosopher Seth Lazar, AI impacts researcher Arvind Narayanan, and fast.ai’s Jeremy Howard. At fast.ai we believe that planning for our future with AI is a complex topic and requires bringing together cross-disciplinary expertise. This is the year extinction risk from AI went mainstream. It has featured in leading publications, been invoked by 10 Downing Street, and mentioned in a White House AI Strategy document. But a powerful group of AI technologists thinks it still isn’t being taken seriously enough. They have signed a statement that claims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” “Global priorities” should be the most important, and urgent, problems that humanity faces. 2023 has seen a leap forward in AI capabilities, which undoubtedly brings new risks, including perhaps increasing the probability that some future AI system will go rogue and wipe out humanity. But we are not convinced that mitigating this risk is a global priority. Other AI risks are as important, and are much more urgent. Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a “rogue human” with AI’s assistance. Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. 
The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters. And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination. We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow. First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom.
We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us? Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all. And we should focus on building institutions that both reduce existing AI risks and put us in a robust position to address new ones as we learn more about them. This definitely means applying the precautionary principle, and taking concrete steps where we can to anticipate as yet unrealised risks. But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention. Building on their work, let’s focus on the things we can study, understand and control—the design and real-world use of existing AI systems, their immediate successors, and the social and political systems of which they are part.
========================================
[SOURCE: https://www.bbc.com/live/sport] | [TOKENS: 1927]
Live Now
- Winter Olympics: follow the action on day 15, with live updates from the Milan-Cortina Winter Olympics.
- Listen: Australia v India, third women's T20, with ball-by-ball ABC commentary from Adelaide Oval.
- Fantasy Premier League: gameweek 27 Q&A with expert Pras; get tips and team news before Saturday's deadline and send in your questions.
Recently Live
- Watch U20 Six Nations: England v Ireland, live BBC coverage as England host Ireland at The Recreation Ground.
- Blackburn beat Preston with a 95th-minute winner: Yuki Ohashi's stoppage-time header earns Blackburn a vital three points in their fight for Championship survival.
- Watch Bundesliga: Mainz 1-1 Hamburg, with live text commentary, score updates and match stats.
- Irish Premiership: Carrick stun Larne as Coleraine defeat Ballymena, with live text and in-play clip coverage of Friday's two fixtures.
- Highlights: Carrick Rangers extend their unbeaten run with a win over league leaders Larne at Inver Park.
- Highlights: Coleraine see off Ballymena United 1-0 through a Will Patching goal.
- Scottish Championship: St Johnstone stretch their lead after a draw with Raith Rovers, with Sportscene coverage and live text commentary.
- Watch U20 Six Nations: Wales v Scotland, live BBC coverage as Wales host Scotland at Cardiff Arms Park.
- Listen: Prem Rugby Cup commentaries, as Newcastle Red Bulls face Northampton Saints and Gloucester Rugby take on Sale Sharks.
- Listen: Super League commentaries of Friday's action from three games, plus Oldham against Widnes Vikings in the Championship.
- BBC Radio WM: Friday's West Midlands Football Phone-In, with Richard Wilford and former Villa and Wolves winger Steve Froggatt.
- BBC Radio Sheffield: Friday's Football Heaven, previewing the weekend's fixtures, including the Steel City derby, with Andy Giddings.
- Listen: T20 World Cup, Australia v Oman, with BBC Radio 5 Sports Extra commentary from Pallekele.
- Winter Olympics: all the action on day 14, with live text updates and reaction from Milan-Cortina.
- Relive the final day of pre-season testing in Bahrain, with Ferrari's Charles Leclerc fastest.
- Relive Hull KR's World Club Challenge win over Brisbane Broncos as it happened.
- Premier League Darts: Jonny Clayton beats Gian van Veen to win night three in Glasgow.
- Manchester United beat Atletico to reach the Women's Champions League last eight, with goals from Zigiotti Olme and Park securing a quarter-final against Bayern Munich.
- Celtic endure a punishing night at the hands of ruthless Stuttgart: a damaging and likely decisive Europa League knockout round play-off first-leg defeat in Martin O'Neill's 1,000th match as a manager.
- Watch Having a Gas: a special episode of the BBC Radio Bristol show, with Bristol Rovers captain Alfie Kilgour in the studio.
- Wasteful Crystal Palace fail to make their dominance count as they are held to a draw by Zrinjski Mostar in the first leg of their Conference League play-off tie.
- BBC Radio Sheffield: Thursday's Football Heaven, discussing the latest football news, with Rob Staton.
- Impressive Nottingham Forest win at Fenerbahce in the Europa League; Crystal Palace draw at Zrinjski in the Conference League.
- Premier League news conferences: Chelsea manager Liam Rosenior, previewing Saturday's game against Burnley, says anyone found guilty of racism should not be in the game.
- Listen: T20 World Cup, Afghanistan v Canada, with BBC Radio 5 Sports Extra commentary from Chennai.
- Six Nations: Kinghorn and Van der Merwe 'will bring freshness' to Scotland against Wales, with head coach Gregor Townsend set to name his team for Saturday.
- Reaction and analysis from Ireland's team announcement and Andy Farrell's press conference ahead of Saturday's Six Nations game in England.
- Listen: T20 World Cup, Sri Lanka v Zimbabwe, with BBC Radio 5 Sports Extra commentary from Colombo.
- Six Nations: Sam Costelow makes his first start of the tournament as Wales head coach Steve Tandy makes four changes for Saturday's game against Scotland.
- Listen: Australia v India, second women's T20, with ball-by-ball ABC commentary from Manuka Oval in Canberra.
- Winter Olympics: USA's Alysa Liu wins gold in the women's free skate; live updates and commentary on men's curling and women's ice hockey.
- F1 pre-season testing: Mercedes' Kimi Antonelli ends day two quickest at the Bahrain International Circuit.
========================================
[SOURCE: https://en.wikipedia.org/wiki/The_Bard%27s_Tale_(1985_video_game)] | [TOKENS: 3957]
The Bard's Tale (1985 video game) The Bard's Tale is a fantasy role-playing video game designed and programmed by Michael Cranford for the Apple II. It was produced by Interplay Productions in 1985 and distributed by Electronic Arts. The game was ported to the Commodore 64, Apple IIGS, ZX Spectrum, Amstrad CPC, Amiga, Atari ST, MS-DOS, Mac, and NES. It spawned The Bard's Tale series of games and books. The earliest editions of the game used a series title of Tales of the Unknown, but this title was dropped for later ports of The Bard's Tale and subsequent games in the series. In August 2018, a remastered version was released for Windows, followed by the Xbox One release in 2019. Plot The following text from the box cover summarizes the premise: Long ago, when magic still prevailed, the evil wizard Mangar the Dark threatened a small but harmonious country town called Skara Brae. Evil creatures oozed into Skara Brae and joined his shadow domain. Mangar froze the surrounding lands with a spell of Eternal Winter, totally isolating Skara Brae from any possible help. Then, one night, the town militiamen all disappeared. The future of Skara Brae hung in the balance. And who was left to resist? Only a handful of unproven young Warriors, junior Magic Users, a couple of Bards barely old enough to drink, and some out of work Rogues. You are there. You are the leader of this ragtag group of freedom fighters. Luckily you have a Bard with you to sing your glories, if you survive. For this is the stuff of legends. And so the story begins... In the game, the player forms a group of up to six characters. Game progress is made through advancing the characters so that they are powerful enough to defeat the increasingly dangerous foes and monsters in the dungeons, obtaining certain items relevant to solving the overall quest, and obtaining information.
The fictional town of Skara Brae consists of a 30×30 grid of map tiles containing either buildings or streets (plus gates and magical guardian statues blocking certain streets). Access to the towers in the northeastern and southwestern corners of the city is blocked by locked gates. The main city gates, which open to the west, are blocked by snow and remain impassable throughout the game. One street seems to lead south endlessly; it actually teleports the party back to its beginning upon reaching the point where the city walls would be. Certain buildings within the city are special, such as the Adventurer's Guild, Garth's Equipment Shoppe, the Review Board (which is unmarked and must be found first, and is the only place where characters can advance in experience levels), various taverns and temples, and the dungeons. The latter are mazes of various kinds (cellars, sewers, catacombs, or fortresses) full of monsters and riddles, some guarded by magical statues that come to life to attack trespassing player parties. Gameplay The Bard's Tale is a straightforward dungeon crawl. The objective is to gain experience and advance characters' skills through (mostly) random combat with enemies and monsters, while exploring maze-like dungeons, solving occasional puzzles and riddles, and finding or buying better weapons, armor, and other equipment. When beginning the game, the player may create up to six player characters, chosen from among the following classes: bard, hunter, monk, paladin, rogue, warrior, magician, and conjurer. The classes of sorcerer and wizard become available to experienced conjurers and magicians. On some platforms, the player can import previously created characters from Wizardry and/or Ultima III, which was somewhat revolutionary at the time of the game's release.
Of particular innovation to the genre was the bard, whose magical songs functioned like long-lasting spells and affected the player's party in various ways—such as strengthening their armor, or increasing their attack speed. A number of obligatory puzzles in the game were unsolvable without the use of bard songs. Each bard song triggered corresponding music while the bard played (some classical, some original). Magic users were allowed to change classes permanently. The game manual describes a magic user who has mastered all spells from all four classes as "an Archmage, the most powerful being in the world of The Bard's Tale." In practice, however, Archmage status had no effect on gameplay other than making all spells available. Casting one of the 85 magic user spells consisted of typing a four-letter code found only in the printed game manual, although when using a mouse (in the DOS, Amiga, and Macintosh versions), the full names of the spells would appear in a list to choose from. Combat is turn-based, described in text rather than shown graphically; there is no notion of moving characters around on a map during combat. Cash and experience points are distributed evenly to all surviving party members after a particular encounter is won. Cluebook Publisher Electronic Arts published a cluebook for the game in 1986 (ISBN 1-55543-064-3) that added some original characters and background information to the game's setting. Written by T.L. Thompson, it purports to be an in-universe document that one Pellis, who seems to be an influential individual working against Mangar behind the scenes, entrusts to an unnamed friend who has just come of age: implicitly, the player (party). It is the journal of Lord Garrick, viscount of Skara Brae's sister city Hamelon.
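Two of the mechanics above, casting by typed four-letter code and the even post-battle division of spoils, can be sketched as table lookup and integer division. This is a minimal illustrative sketch: the spell codes and names shown are placeholder examples, not entries from the printed manual's actual table of 85 spells.

```python
# Hypothetical sketch: a spell is cast by typing a four-letter code
# that is looked up in a table (the real codes appeared only in the
# printed manual; these two entries are invented examples).
SPELLS = {
    "MAFL": "Mage Flame",
    "ARFI": "Arc Fire",
}

def cast(code):
    """Return the spell name for a typed code, or None if unknown."""
    return SPELLS.get(code.upper())

def divide_spoils(gold, xp, party):
    """Split cash and experience evenly among surviving members,
    as the article describes happening after a won encounter."""
    survivors = [m for m in party if m["hp"] > 0]
    for m in survivors:
        m["gold"] += gold // len(survivors)
        m["xp"] += xp // len(survivors)
    return survivors
```

The dictionary lookup also explains why the mouse-driven versions could offer a menu instead: the same table simply has its values listed for selection rather than keyed by typed code.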
Trapped in Skara Brae by Mangar's spell, Lord Garrick and his party of servants and associates (including Corfid op Orfin the Bard, Ghaklah the Magician, Isli the Paladin, Soriac the Archmage, and the otherwise unnamed "last of the great sage-sorcerors") take it upon themselves to rid Skara Brae of Mangar's influence. The journal narrates how they navigate the dungeons and solve the puzzles until, one step short of actually confronting Mangar, they find that crucial items were stolen by the party's Rogue when he abandoned them. Soriac prepares a spell that will allow Isli to escape and give the journal to Pellis, but it is also expected to rend from the fabric of time everything they have accomplished, and to consume Isli as it burns itself out. Development Michael Cranford developed the concept, design, and programming of The Bard's Tale and its successor game (The Bard's Tale II: The Destiny Knight), with additional design by Brian Fargo (the founder of Interplay) and Roe Adams III. David Lowery designed the graphics, Lawrence Holland composed the music, and Joe Ybarra served as producer. Cranford stated that most of the game design was based on his and Fargo's Dungeons & Dragons gaming experiences. Cranford and Fargo tried to improve on previous games of the genre in many areas, including graphics and sound, with Cranford pointing to the Apple version of Wizardry as an example of a game that fell short in his judgement. The Bard's Tale was originally going to be named Shadow Snare and was announced as a successor to Cranford's earlier game Maze Master, which was published by Human Engineered Software in 1983. Cranford is a devout Christian. He included references to Jesus in The Bard's Tale, and all but one of the city names in The Bard's Tale II are taken from the New Testament. After a falling-out with Brian Fargo, he was not involved in The Bard's Tale III and decided to go back to college to study philosophy and theology instead.
Cranford stated that they used a consultant during game development who suggested various ideas, including the city name Skara Brae, which was also used in Ultima IV—a surprise discovery after that game's release, owing to the consultant's work on Ultima IV as well. Cranford noted that they did not use the consultant's other ideas, as the similarities could have been problematic. Rebecca Heineman, who worked at Interplay at the time (then as Bill "Burger" Heineman), is credited in the game's manual for the "data compressing routines that allowed [Cranford] to pack so much graphics and animation", and by her own account also wrote development tools for the game, such as a graphic editor, and all ports to other platforms. Heineman became openly critical of Cranford in later years, saying in an interview that Cranford, after doing some last bugfixes, held the game's final version "hostage" to force Brian Fargo to sign a publishing contract that contained a clause by which the sequel game (The Destiny Knight) would be Cranford's alone. Brian Fargo confirmed this, but still defended Cranford. Cranford in turn called Heineman's words "disparaging slant" and "fiction", noting that Heineman ("a storyteller with an agenda") at the time was (paraphrased) a loner who "sat isolated in a cubicle in the back corner of the room", was not involved in the company's business operations, nor deeply involved in The Bard's Tale, and therefore would not know all the details. As far as Cranford could remember the situation, Brian Fargo would not produce a written contract for the game until near the very end of development, and then only under pressure from Cranford withholding the final product. When he finally did, the contract was not what Cranford thought they had verbally agreed on when he had started working on the project, nor something he felt he could or would have agreed to at the outset.
Although a compromise was found, Fargo asked Cranford to leave the company after The Bard's Tale II: The Destiny Knight was finished. The experience contributed to Cranford walking away from game development to pursue a different career. Cranford said he later apologized to Fargo after learning that the attorney who had represented him had misrepresented several other cases to his clients and had apparently misled him into assuming the worst. Cranford, Fargo, and Heineman have all since stated that they hold no grudges against each other over something that occurred when they were in their early twenties. Cranford and Fargo remain friends. When Fargo, through his firm inXile Entertainment, started making The Bard's Tale IV: Barrows Deep on the original game's 30th anniversary, Cranford was invited to join the project and did contribute, while Heineman offered to create a 'remastered' edition of the original three games for modern operating systems (see below). Reception Computer Gaming World's Scorpia in 1985 described Bard's Tale as "not to be missed!" In 1993 she criticized the game's starting difficulty and single save location, but stated that it had "many points of interest, particularly in the puzzles, and is definitely a game worth getting". The game was reviewed in 1986 in Dragon #116 by Hartley and Pattie Lesser in "The Role of Computers" column. The reviewers rated the game well, concluding that "Bard's Tale, a game of high adventure ... is one we recommend for your software library." The game was revisited in Dragon #120. In a subsequent column, the reviewers gave the game 5 out of 5 stars. Calling the Commodore 64 version wondrous, Compute!'s Gazette in 1986 stated that while the game's plot and gameplay did not vary from the norm, "its depth of concept and brilliance of execution" did. 
Praising the complex magic system, the magazine concluded that "the greatest danger is not Mangar—it's the likelihood that you'll never be able to tear yourself away from this masterpiece of a game". Compute! in 1987 called the Apple IIGS version "unquestionably the most graphically stunning product I have seen on any Apple computer". The ZX Spectrum version of The Bard's Tale, released in 1988, was favorably received. CRASH said that "the Skara Brae environment is so complex and involves so many different factors that it's hard not to get completely enthralled in your quest" and rated it at 86%. Sinclair User rated it at 89%, but noted that it would not appeal to general gameplayers, saying that "The Bard's Tale will enthrall diehard pixie fans [...] but there's too much text, and not enough graphics and animation, to convert the uncommitted." Your Sinclair were similarly positive about the game, rating it 9/10. Macworld reviewed the Macintosh version of The Bard's Tale, praising the music and gameplay, calling it a "Nice combination of problem solving, combat, and exploration", but criticizing the monochrome graphics and repetitive gameplay, the latter largely directed towards frequent combat. The Commodore 64 version of The Bard's Tale was given a 'Sizzler' award and rated at 94% by Zzap!64 magazine, in the 1986 Christmas Special edition. Reviewer Sean Masterson called it "the best RPG on the Commodore". In 1993, Commodore Force ranked the game at number 13 on its list of the top 100 Commodore 64 games. When reviewing its sequel in 1988, Ahoy!'s AmigaUser described The Bard's Tale as "the all-time classic". With a score of 7.49 out of 10, that year The Bard's Tale was among the first members of the Computer Gaming World Hall of Fame, honoring those games rated highly over time by readers. In 1990 the game received the seventh-highest number of votes in a survey of readers' "All-Time Favorites". In 1996, the magazine named The Bard's Tale the 89th best game ever. 
The Bard's Tale was very successful, becoming the best-selling computer RPG of the 1980s at 407,000 copies. It was the first non-Wizardry computer role-playing game to challenge the Ultima series' sales, especially among Commodore 64 users, who could not play Wizardry (a Commodore version did not appear until 1987, with graphics inferior to those of The Bard's Tale). By 1993, The Bard's Tale series had sold over a million copies. Legacy The Bard's Tale was both a best-seller and a critical success, and produced two official sequels and a "Construction Set" in its time. A compilation of all three classic The Bard's Tale games, entitled The Bard's Tale Trilogy, was released for DOS by Electronic Arts in 1990. According to programmer Rebecca Heineman, the name of the overall series was to be Tales of the Unknown, and the three games were to be entitled The Bard's Tale, The Archmage's Tale, and The Thief's Tale. This is supported by the cover art of the original Bard's Tale release, which proclaimed the game as "Tales of the Unknown, Volume I." However, the immense popularity of the first game prompted Electronic Arts to re-brand the series under the more well-known name. Michael Cranford, however, stated that an Electronic Arts agent they worked with had come up with the city name (Skara Brae, named after a real-life settlement in prehistoric Orkney) and the game's title, The Bard's Tale (originally Tale of the Scarlet Bard), and that The Destiny Knight was never going to be called The Archmage's Tale. What was originally going to be The Bard's Tale IV became an unrelated game called Dragon Wars (1989) at a very late point in its development, owing to rights issues after developer Interplay parted ways with publisher Electronic Arts. The game's name and storyline were changed to disassociate it from the Bard's Tale series.
In 2003, Brian Fargo (who created maps for the first two Bard's Tale games and directed the third) left Interplay Entertainment and began a new game development company named inXile Entertainment. In 2004, they released their first game, also titled The Bard's Tale, an unrelated, console-style, top-down action RPG that pokes fun at traditional fantasy and role-playing game tropes like those found throughout the original Bard's Tale. It was not a proper sequel to the classic series, nor was it connected in any respect apart from the title and location: the story takes place on the Orkney Mainland, where the ruins of the real-world Skara Brae lie. Although a legal loophole allowed inXile to use the Bard's Tale name, and the company had evidently planned to incorporate more elements of the original games, Electronic Arts still owned the original trademarks for the Bard's Tale series itself, and inXile was not legally allowed to use any of the plot, characters, or locations featured in the original trilogy in their 2004 game. In May 2015, Fargo announced that he was planning to develop a sequel funded through crowdfunding on Kickstarter, The Bard's Tale IV. The game, which was released in 2018, continues the storyline of the original trilogy but has significantly changed gameplay. The Mage's Tale was published by inXile in 2017 as a spinoff game using virtual reality technology. It was developed concurrently with The Bard's Tale IV. During the Kickstarter campaign to create a proper fourth installment of the series, inXile partnered with Rebecca Heineman and her company Olde Sküül to remaster the original trilogy for modern personal computers running macOS and Windows (instead of the emulated versions offered by inXile). After reaching an impasse in development, Olde Sküül and inXile agreed to transfer the project to Krome Studios. Krome Studios and inXile released the remastered edition on August 14, 2018, as part of the remastered The Bard's Tale Trilogy.
The Remastered Edition essentially rewrote the original games, keeping the storyline and gameplay design but little if any of the original game code. Graphics, sound, and user interface were updated to modern standards, various bugs were fixed, and a unified, authoritative set of gameplay rules was devised when it turned out that there were significant differences not only between parts I, II, and III of the original trilogy (such as the number of characters in the party, or spells being available at different levels, or not at all, in different installments), but also between ports of the same game. Some content was added, including female character portraits and (inconsequential) references to the Bard's Tale IV storyline. The remastered edition of the original trilogy was released for Xbox One on August 13, 2019, following the acquisition of inXile Entertainment by Microsoft. The collection supports Xbox Play Anywhere. A series of novels based on The Bard's Tale was published by Baen Books during the 1990s. Although the books had little in common with the storyline of the games, their existence is a testament to how influential the Bard's Tale brand had become. They include: While they are listed here in the order they were published, some books in the series connect more than others, such as Castle of Deception and The Chaos Gate, Prison of Souls and Escape from Roksamur, and Thunder of the Captains and Wrath of the Princes. References External links
========================================
[SOURCE: https://www.bbc.com/live/news] | [TOKENS: 491]
Recently Live
Trump announces new 10% global tariff as he hits out at 'deeply disappointing' Supreme Court ruling. The US president says he will impose the temporary levies, after the top court struck down his sweeping tariffs.
Scottish Lib Dem leader Alex Cole-Hamilton delivers conference speech. The party leader addresses delegates at the Scottish Lib Dems' spring conference in Edinburgh.
Aircraft carrier seen off Gibraltar and fighter jets fly to UK as US build-up continues. Latest updates from the BBC's specialists in fact-checking, verifying video and tackling disinformation.
FMQs: Angry exchanges over lord advocate memo. The ongoing controversy over a memo sent to the first minister about a criminal charge against former SNP chief executive Peter Murrell is raised by opposition party leaders.
Police asking Andrew's protection officers what 'they saw or heard' as part of Epstein files review. The Metropolitan Police says officers have been asked to consider whether anything "they saw or heard during that period of service may be relevant to our ongoing reviews".
As it happened: Hull KR World Club Challenge ball relay. The match ball is being carried by fans and famous faces from Craven Park to the MKM Stadium.
More US military flights seen over Europe as images show fortification at Iran facility. Latest updates from the BBC's specialists in fact-checking, verifying video and tackling disinformation.
South Korea's ex-president jailed for life for masterminding an insurrection. Yoon Suk Yeol was impeached and indicted for declaring martial law in 2024.
Copyright 2026 BBC. All rights reserved.
========================================
[SOURCE: https://he.wikipedia.org/wiki/הליכוד] | [TOKENS: 36423]
Contents Likud Right-wing populism, Revisionist Zionism, economic liberalism, conservatism. Likud – National Liberal Movement is an Israeli Zionist conservative right-wing party. The party was founded in 1973 through the merger of Gahal (the joint list of the Herut movement and the Israeli Liberal Party), the State List, the Free Center, and the Movement for Greater Israel, which ran for the first time as a united party in the elections to the eighth Knesset. Its leaders have included prime ministers Menachem Begin, Yitzhak Shamir, Benjamin Netanyahu, and Ariel Sharon. From its founding at the eighth Knesset elections onward, it has been the largest or second-largest party in the Knesset, except in the seventeenth Knesset, and since 2001 it has been Israel's ruling party except in 2006–2009 and 2021–2022. Since its founding, Likud has been the principal party of the Israeli right. Its dominant ideology is chiefly Jewish nationalism realized in the State of Israel through Zionism, along with the expansion of settlements and a hard line in military operations against terrorist organizations. History and milestones The Likud party was founded on 13 September 1973, ahead of the elections to the eighth Knesset, as a joint list of several parties and movements; these movements retained their own party institutions. The Knesset list was composed, according to an agreed formula, of representatives of the various movements. The main initiator of this move, which ultimately led to a full merger, was Ariel Sharon, who had retired from the IDF that year. The veteran leader of the Herut movement and former Irgun commander, MK Menachem Begin, was chosen to head Likud. Begin, who until Likud's founding had led Herut as the hawkish right-wing marker of the Israeli political map, chose to lead the new party along a moderate right-wing course. In August 1973, the Gahal leadership rejected the Jewish Defense League's request to join Likud. In the elections to the eighth Knesset (1973), Likud ran under Menachem Begin and won 39 seats, against 51 for the Alignment. In 1976 the small parties (the State List, the Movement for Greater Israel, and the Independent Center, a breakaway from the Free Center) merged to form the La'am party, which remained part of the Likud faction. With La'am's founding, the new party established itself as the third-largest bloc within Likud, alongside Herut and the Liberal Party. In 1976 the Likud conference was held in Hebron, at Menachem Begin's demand. The conference resolved that "the aspiration is to apply Israeli sovereignty to Judea and Samaria."
The elections to the ninth Knesset on 17 May 1977 were held against the backdrop of surging inflation, the trauma of the Yom Kippur War from which the country had not yet recovered, and growing public disgust with corruption affairs involving senior Alignment figures, notably the accusations against Asher Yadlin and Avraham Ofer and the dollar-account affair that led to the resignation of prime minister Yitzhak Rabin; for the first time, a real possibility of a change of government in Israel emerged. In addition, the traditional allies of Alignment rule appeared unwilling to shelter in its shadow any longer, and events such as Land Day and the crisis over the F-15 aircraft arriving on Shabbat signaled the drift of the Arab and religious publics away from it. Likud's preparations for the campaign were clouded by a severe heart attack Begin suffered in early April, which kept him out of most stages of the campaign. In the end, despite his frail health, Begin managed to take part in a first-of-its-kind televised debate against Shimon Peres. Although the debate had no clear winner, Begin succeeded in shedding the extremist image the Alignment's people had tried to pin on him, and came across as a measured leader ready to govern. Likud won the elections with 43 seats against only 32 for the Alignment, and Menachem Begin became prime minister. Likud's victory was dubbed the "Mahapach" ("upheaval") by news anchor Haim Yavin, a word that became a common turn of phrase. It was the first time in the history of the State of Israel that the elected prime minister was not a representative of the labor movements in their various incarnations, ending the continuous rule of Mapai and its successors, which had in effect lasted since the party's founding in the Yishuv era in 1930. The main achievements of the first Likud government were the peace agreement with Egypt, raising the banner of Jewish settlement in Judea, Samaria, and Gaza and in the Galilee, the Project Renewal neighborhood-rehabilitation program, and the bombing of the nuclear reactor in Iraq in Operation Opera. In addition, with the Begin government's support the Jerusalem Law was enacted, anchoring in law the status of united Jerusalem as the capital of the State of Israel. In 1981, against the backdrop of an especially stormy campaign for the tenth Knesset, Likud won 48 seats to the Alignment's 47 and formed a narrow government headed by Begin. That government is known for continuing the previous government's undertakings, for enacting the Golan Heights Law, and for its decision to launch Operation Peace for Galilee. In 1983 Menachem Begin resigned as prime minister and as Likud leader, declaring "I cannot go on." Until his death on 9 March 1992 he secluded himself at home and did not return to public life.
Likud's second prime minister was Yitzhak Shamir, appointed after Begin's resignation in 1983, having defeated David Levy in the vote held in the Likud central committee on 2 September of that year. In his first months in office Shamir already faced numerous crises. The second Tyre disaster on 4 November 1983 demonstrated the vulnerability of the IDF, which was still deep inside Lebanon. The security situation inside the country also seemed to be deteriorating, after events such as the bus bombing in Jerusalem on 6 December 1983 (considered exceptional at the time) and the hijacking of Bus 300, which also brought on a severe political-constitutional crisis when it emerged that Shin Bet personnel had killed two of the hijackers during interrogation, after the two had come out of the attack itself unharmed. Unusually, the government also had to contend with Jewish terrorism, after the discovery of the Jewish Underground, which had been responsible for a series of attacks against Palestinians in Judea and Samaria. On the economic front, the country suffered from the failure of the "economic upheaval" plan, the cost of the Lebanon War, the global recession, and the bank-stock crisis that erupted in full force on 2 October 1983, when the banks could no longer regulate their shares on the stock exchange. All of these battered Israel's economy, produced hyperinflation, and brought the economy to the brink of collapse. The polarization between the political camps, which had surfaced in the wake of the Lebanon War, kept worsening, and the murder of Peace Now activist Emil Grunzweig brought political violence in Israel to a new peak. Despite all this, Shamir managed to moderate the steep decline expected in Likud's strength, and in the 1984 elections to the 11th Knesset Likud won 41 seats against 44 for the Alignment under Shimon Peres. Owing to the relative political deadlock, Peres was forced to form a national unity government together with Likud. Under a rotation agreement, Peres headed it for two years, followed by Shamir from 1986 for another two years. This unity government was marked chiefly by its successful efforts to curb runaway inflation and stabilize the economy, by the continued gradual withdrawal from southern Lebanon, and by the consolidation of the security zone. Also remembered is the London Agreement, which Peres initiated for peace with Jordan and under which Jordan would govern the Palestinians in the territories, an agreement never implemented owing to Shamir's opposition. The period was notable for factionalism and internal struggles within Likud, with David Levy and Ariel Sharon prominent as the opposition to Shamir.
Despite their efforts, the two failed to pose a real threat to Shamir, notwithstanding bruising internal battles that peaked at the Herut central committee convention broken up in 1986 by activists from Levy's camp. Shamir said after the convention: "This is a movement committing suicide." On 25 August 1988 the institutions of Herut and the Liberals were finally merged, and the new movement was named Likud – National Liberal Movement. After the elections to the twelfth Knesset, Yitzhak Shamir once again formed a unity government with the Alignment, this time under his own leadership. In this government too, Shamir was challenged from the right by the "Hishukim": Ariel Sharon, David Levy, and Yitzhak Modai. At a Likud central committee meeting in February 1990, dubbed the "night of the microphones," Shamir asked the members for a vote of confidence in him and his policy. As the members' hands were being raised, Sharon began calling out from another microphone: "I ask the central committee members: who is for eliminating terrorism? Raise your hand! Who is against including deportees? Raise your hand! Who is against including East Jerusalem Arabs? Raise your hand!" In 1990 the unity government collapsed over the "dirty trick," in which finance minister Shimon Peres tried to secretly broker an agreement with the Haredi parties to form a government under his own leadership. After Peres failed to assemble a coalition, and after Shamir's famous call "Avrasha, come home!" to MK Avraham Sharir, who had earlier left Likud, a narrow right-wing and religious government was formed. It is remembered mainly for Shamir's agreement to attend the Madrid Conference and to negotiate with the parties in the Middle East apart from the PLO, for Israel's restraint in the Gulf War, and for the waves of immigration from the former Soviet Union and the aliyah of Ethiopian Jewry. In the 1992 elections to the thirteenth Knesset, Likud suffered a defeat, winning only 32 seats against 44 for Labor. Shamir consequently stepped down as party chairman. After his resignation, a leadership election was held in which Benjamin Netanyahu ran for party chairman against David Levy, Moshe Katsav, and Benny Begin. The campaign was accompanied by harsh mutual accusations and smears between Netanyahu and Levy, peaking in the "hot videotape" affair, in which Netanyahu claimed on live television that "a senior Likud figure surrounded by a gang of criminals" was trying to blackmail him with a tape showing him cheating on his wife. Netanyahu was elected to the Likud leadership by a large majority. He led Likud in opposition to the Rabin government and against the Oslo Accords, consolidating his position at its head. After Rabin's assassination, Netanyahu opposed moving up the elections, believing this would be seen as improper exploitation of the murder.
During that Knesset term, relations between Netanyahu and Levy, which had seemed unable to get any worse, reached a new low when the latter left Likud and founded the Gesher party. On the eve of the elections to the 14th Knesset, Netanyahu managed to bring Levy and his party, as well as Rafael Eitan's Tzomet, into a joint list under his leadership. This was done less to enlarge Likud than to prevent the two from running against Netanyahu in the direct elections for prime minister, held then for the first time; evidence of this is the guaranteed representation, disproportionate to their parties' real strength, that each received: seven representatives apiece in slots considered realistic. In the elections Netanyahu won the premiership by a narrow margin over Shimon Peres, but the joint list won only 32 seats, 22 of them Likud's. Netanyahu failed to maintain good relations with his ministers, leading several to resign from the government and later from Likud. David Levy's Gesher movement left the partnership with Likud and, ahead of the 1999 elections to the fifteenth Knesset and for prime minister, joined Labor in the One Israel alliance. Netanyahu tried simultaneously to continue the Oslo process, to which the government was bound by international agreements, and to take a harder line against the Palestinians in keeping with Likud's traditional hawkish stance, but he could not bring his ministers to back his course. Benny Begin resigned from the government over the Hebron Agreement and, after the signing of the Wye River Memorandum led to early elections, founded Herut on 23 February 1999. Meanwhile, a group of MKs positioned to his left, led by Yitzhak Mordechai, joined together to found the Center Party. Tzomet also decided not to run with Likud in the elections. In the 1999 elections Netanyahu lost the direct election for prime minister to Barak, and Likud under his leadership won only 19 seats, a record low unseen since its founding. Following the defeat, Netanyahu resigned from the Likud leadership and from the Knesset. After Netanyahu's departure, Ariel Sharon was appointed to serve as interim chairman in his place. On 2 September 1999 Sharon defeated Ehud Olmert, Meir Sheetrit, and Prof. Vladimir Herzberg in the primaries and was officially elected party leader. Sharon led Likud during Ehud Barak's term as prime minister. As Likud chairman, Sharon initiated recovery and efficiency programs in the movement's apparatus, reduced the party's budget deficit, and began his campaign for the premiership. In July 2000 Likud led the election of its MK, Moshe Katsav, as President of the State of Israel.
In 2001, against the backdrop of the failure of the Camp David summit and the outbreak of the Second Intifada, a new election for prime minister was held. Benjamin Netanyahu refused to run without new Knesset elections, leaving the stage to Ariel Sharon, who won by a landslide and was appointed prime minister after forming a unity government with the Labor Party. A few months after his election, Sharon declared before a teachers' conference at Latrun, contrary to the Likud platform, that he was prepared to allow the Palestinians to establish an independent state. In response to these intentions, a group of Likud central committee members called the "Forum for Preserving Likud Values," headed by MK Eli Cohen, initiated a resolution stating that Likud opposes the establishment of a Palestinian state between the Jordan and the sea. The resolution was adopted by the Likud central committee, the movement's supreme legislative body, by an overwhelming majority. In November 2002 the Labor Party left the government, pushing the political system to new elections. In the party primary held ahead of the general elections, after Benjamin Netanyahu had accepted Sharon's offer to serve as foreign minister, Netanyahu ran against Sharon for the Likud leadership but lost, and Sharon was chosen to head Likud. After the internal contest Netanyahu joined forces with Sharon, and in the 2003 elections to the sixteenth Knesset Sharon won again and became prime minister, with Likud taking 38 seats, double its strength in the previous Knesset. Shortly afterwards the Yisrael BaAliyah party merged into Likud, bringing its total to 40 seats. The new government's economic policy was led by Netanyahu, who was appointed finance minister even though he had initially wanted to continue as foreign minister. Under him the government carried out many liberal economic reforms, including sharp cuts in welfare allowances, tax reductions (alongside the cancellation of exemptions and a broadening of the tax base), and the privatization of government entities. The improvement in the economy, which had been in crisis since the outbreak of the Intifada, and the high growth of the following years are attributed by many to this policy, although some argue that these harsh measures hit the weaker strata too hard and even eroded Likud's base of support. In this period, public criticism grew of the enormous power of the Likud central committee, which stemmed from its control over the selection of the Knesset list and from the growth in the political strength of Likud as a whole.
The media carried allegations of tenders arranged for central committee members and of proliferating political appointments in government ministries controlled by the movement, as well as of more esoteric phenomena such as the conspicuous attendance of the movement's MKs at family events of central committee members, out of a desire to stay on good terms with them. The distancing of professional bodies from decision-making centers, with decisions increasingly made at the "Ranch Forum," and the feeling among parts of the public that Sharon's surprising diplomatic concessions stemmed from a desire to avoid indictment over the criminal suspicions against him, drew accusations of the corruption of public norms. In 2004 Sharon began promoting a plan of his own initiative, the disengagement plan, whose essence was the evacuation of Israeli settlements from the Gaza Strip and the army's withdrawal from it, along with the evacuation of four settlements in northern Samaria. In the face of further fierce opposition from elements within the movement, Sharon decided to put the plan to a vote of Likud members. The result was the rejection of the disengagement plan by Likud's registered members, by a majority of 59.5%. Despite the plan's rejection, Sharon declared that he was absolutely determined to carry it out, and he brought it to a second referendum before the Likud central committee, promising that the result would bind him, even though such a referendum has no binding legal force. After the plan was rejected in the central committee vote as well, Sharon said "I have failed; I will think of a new plan," and brought the plan to the government for approval, while threatening that any minister who did not support it would be dismissed. The plan was approved by the government and later brought to a Knesset vote, with Sharon again threatening that sanctions would be taken against any Likud MK who did not vote for it. The plan was approved by the Knesset in a vote dubbed "the night of the rabbits," and its implementation began on 15 August 2005 and was completed a few days later. Some said that the decision to implement the plan produced a de facto split in Likud, as thirteen of the faction's MKs, dubbed "the rebels" (by Sharon loyalists), voted against the Sharon government over his violation of the members' referendum. About a week before the plan was carried out, finance minister Benjamin Netanyahu resigned from the government. In a vote held at the Likud conference on moving up the party leadership election, the central committee rejected the proposal by a majority of about 52%. The result was considered a victory for Sharon and a heavy defeat for his political rival, Benjamin Netanyahu. Sharon also did not heed the Likud conference's resolution to hold a national referendum on the disengagement plan, even though a large majority of the faction, 27 MKs, supported holding such a referendum.
In the Knesset vote on the issue the faction split, and 13 ministers and MKs voted against a referendum, contrary to the faction's position; as a result, the proposal to hold a referendum on the disengagement plan fell. A few days later, Sharon brought to the Knesset for approval his proposal to appoint Roni Bar-On and Ze'ev Boim as new ministers in his government, and announced to the Knesset the appointment of deputy ministers who, like him, had opposed the referendum. The new appointments drew broad condemnation from all the Knesset factions, as they appeared to be personal rewards to those MKs for the way they had voted. Sharon withdrew his proposal to appoint the ministers after a group of MKs in his party announced they would not support it and the required majority was not assured. After the failed appointments Sharon said, "All in all I wanted to do a favor for friends; what happened today will have major consequences." A few days after the ministerial-appointments affair, in November 2005, Sharon left Likud against the backdrop of the fierce opposition within the party to his recent moves and of the failed appointments. Together with thirteen other MKs who left Likud, he founded the Kadima party. Shortly afterwards, defectors from the Labor Party also joined the new party. With Sharon's departure the Likud leadership race began, with Benjamin Netanyahu, Silvan Shalom, Shaul Mofaz, Moshe Feiglin, Uzi Landau, and Israel Katz as the contenders. The primary was held on 19 December. In early December, the candidate Uzi Landau announced his withdrawal from the race and his support for Benjamin Netanyahu on the basis of a shared diplomatic platform. About a week later, defense minister Shaul Mofaz unexpectedly announced that he was joining Kadima, contrary to his previously declared position. Benjamin Netanyahu won the election by a large margin, with 44 percent support, and returned to the Likud leadership. In the 2006 elections, the Likud faction under Benjamin Netanyahu suffered the heaviest blow in its history. After a campaign in which Ehud Olmert's Kadima swept up a sizable share of Likud's potential voters, Likud crashed from a ruling party with 38 seats to just 12, becoming the third-largest faction in the Knesset, equal in size to Shas. Nevertheless, since Likud was the largest faction in the opposition (Shas being in the coalition), its chairman, Benjamin Netanyahu, served as leader of the opposition until the elections to the 18th Knesset.
Despite the heavy defeat the party suffered under his leadership, Netanyahu regained a measure of public favor after events such as the Hamas takeover of the Gaza Strip and the intensification of Qassam rocket fire, along with the outbreak of the Second Lebanon War, appeared to vindicate his earlier warnings about the security dangers facing Israel, warnings that had been derided during the campaign as a "scare campaign" and mocked by his opponents. The extensive public-diplomacy work he carried out abroad during the war in Lebanon also won appreciation, and his capitalist economic policy, which had led to his defeat at the polls, came to be regarded as one of the causes of the renewed economic prosperity (which lasted until the outbreak of the subprime crisis). In July 2007, after the Labor Party leadership primary won by Ehud Barak, the Likud's leaders decided that they needed to prepare for the next Knesset elections and to move up the elections for the movement's leadership. Netanyahu, whose public support was high in the polls at the time, argued that the leadership election should be held as soon as possible, while Silvan Shalom argued it should be held only at the end of 2007. After the Likud Central Committee decided to hold the primary as early as 14 August, Shalom withdrew from the race, claiming he had not been given time to prepare, and called the decision to move up the election "a show in the style of the Syrian Ba'ath regime". The candidates for the Likud leadership were Netanyahu, Moshe Feiglin, and Danny Danon, chairman of World Likud. Netanyahu won with 73% of the vote, Feiglin received 23.4%, and Danon only 3.5%. The internal elections for the Likud's list for the 18th Knesset were held on 8 December 2008, with all registered members voting for 12 representatives on a national list and two representatives in slots reserved for immigrants; in addition, each member chose a representative of his own district. The first place on the list was reserved for Netanyahu. Initial results were published on the night after the vote, but somewhat different results were later published based on a different interpretation of the Likud constitution. Various appeals against the list were filed with the District Court, but only the appeal of Michael Ratzon, against the demotion of national-list representatives in favor of district representatives, was accepted, on the grounds that the change had been made in order to push Moshe Feiglin down the list, an extraneous consideration. The Supreme Court, however, overturned the ruling, holding that intervention in internal party proceedings should be limited to extreme cases, and that it had not been proven that the Likud's own high court, as opposed to its election committee, had acted on extraneous considerations.
The justices also asked the Likud to take measures to ensure that cases like those raised in the appeals would not recur. Shortly before submitting its list to the Central Elections Committee, the Likud signed a merger agreement with the Ahi party, which had split from the Jewish Home, reserving places 39 (Shalom Lerner) and 45 (Edmond Hasin) on the Likud-Ahi list for the 18th Knesset. In the elections for the 18th Knesset the Likud won 27 seats. Although Kadima won 28 seats, Benjamin Netanyahu managed to assemble a coalition majority with Yisrael Beiteinu, Shas, and the Labor Party and to form a government. The government was approved by 69 votes, with five Labor members choosing to be present in the plenum but not to take part in the vote. On 31 March 2009 the 32nd government of Israel was sworn in under Benjamin Netanyahu, with 30 ministers and eight deputy ministers, making it one of the largest governments in the history of the State of Israel. In early December 2011 Netanyahu announced that the primary for the Likud leadership would be moved up. A few weeks later the announcement won the support and approval of the party's Central Committee, and the contest was set for 31 January 2012. Silvan Shalom, who had intended to run against Netanyahu, declared the process illegal but ultimately withdrew his candidacy. The only challenger who stood against Netanyahu was Moshe Feiglin. Netanyahu won the election with 74% of the vote, while Feiglin held on to his result from the previous contest with 24%. In October 2012 Benjamin Netanyahu (Likud chairman) and Avigdor Lieberman (chairman of Yisrael Beiteinu) announced their intention to submit a joint candidate list for the two parties, called "Likud - Yisrael Beiteinu", in the elections for the 19th Knesset. The joint faction won 31 seats, 20 of them the Likud's, and Netanyahu was tasked with forming the third government under his leadership. On 9 July 2014 the joint faction was dissolved into its components and the Likud faction resumed operating independently. On 2 December 2014 Prime Minister Netanyahu dismissed ministers Yair Lapid and Tzipi Livni from his government, accusing them of subversion against him. Later that evening the remaining Yesh Atid ministers resigned, and Netanyahu announced his intention to support the early-election bills then on the Knesset's table. In the elections for the 20th Knesset, held on 17 March 2015, the Likud won and increased its strength to 30 seats.
This victory, achieved contrary to early forecasts that had shown an advantage for the Zionist Union under Isaac Herzog, was presented by Netanyahu and the Likud as having been won "against all odds". The innovative campaign made extensive use of new media and included humorous videos featuring Netanyahu that were distributed on social networks. Likud supporters claimed that what they described as the negative and aggressive campaign waged against the party by "leftist circles and the media" backfired, driving voters to the polls in numbers that restored the Likud as the ruling party. On the day of the elections for the 20th Knesset, in an effort to mobilize additional voters for his party, Netanyahu declared that "Arab voters are moving in droves to the polling stations". The remark drew wide-ranging criticism of Netanyahu, in Israel and abroad; the criticism, which began on election day itself, was still being voiced months later. Netanyahu's statement reopened the debate over the definition of Israel as a Jewish and democratic state, the national identity of Israel's Arab citizens, and the treatment of their right to vote and to be elected. In September 2018 the party joined the International Democrat Union, an umbrella organization of conservative parties. Ahead of the elections for the 21st Knesset, on 20 February 2019, a joint statement by the Likud and the Jewish Home announced an agreement under which the Otzma Yehudit list would run as a technical bloc with the Jewish Home, and in return the Likud committed to reserve the 28th place on its Knesset list for an MK from the Jewish Home, who would be able to move back to the Jewish Home immediately after being elected. The reserved MK was Deputy Defense Minister Eli Ben-Dahan, and to forestall petitions against the move, Efi Eitam's Ahi party was transferred to Ben-Dahan and a party merger with the Likud was carried out. The Likud won about 35 seats, some 26.46% of the vote. President Reuven Rivlin tasked the Likud chairman, Benjamin Netanyahu, with forming the 35th government of Israel, but he failed at the task and pushed for a second round of elections in 2019, the first such occurrence in the state's history. Ahead of the elections for the 22nd Knesset, the leaders of the Likud and Kulanu announced a joint run, with Kulanu's representatives placed in slots 5, 15, 29, and 35. The party received 32 seats, so only the first three of these entered; of them, Yifat Shasha-Biton was seated as part of her own party, while the rest of the list joined the Likud.
Ahead of the elections for the 23rd Knesset the primaries for the Likud list were canceled and the Knesset candidates kept their positions, with the reserved slots for the Kulanu candidates under Yifat Shasha-Biton. On 26 December 2019, in a leadership primary, Gideon Sa'ar challenged Netanyahu for the Likud leadership; Netanyahu was re-elected with 72.5% of the vote. In the elections for the 23rd Knesset the party received 36 seats (a gain of 4 from the previous elections), against 33 for Blue and White. The 35th government of Israel was formed as a national unity government, with a commitment to a rotation between Netanyahu and the leader of Blue and White, Benny Gantz. Ahead of the campaign for the 24th Knesset, Gideon Sa'ar announced that he was leaving the Likud to found the New Hope party; Knesset members Ze'ev Elkin, Sharren Haskel, Michal Shir, and Yifat Shasha-Biton joined him. After the elections and the formation of the 36th government, the Likud went into opposition after 12 consecutive years in power. Ahead of the campaign for the 25th Knesset it was decided, under the Likud constitution, to hold primaries for both the party leadership and the list. After MK Yuli Edelstein withdrew from the leadership race, however, the Likud chairman, Benjamin Netanyahu, remained the sole candidate and was elected leader of the movement automatically. The 2022 Likud primaries, in which the party's Knesset list was determined, were held on 10 August. The Likud's internal court ruled that, since the Central Committee members had not been re-elected in over a decade, the party's registered members would choose the district representatives on the list. In 2020, after the elections for the 23rd Knesset, the Likud owed the Knesset more than 73 million shekels, amounting to 85% of the debt the party is permitted to take from the Knesset; by 2022 the debt stood at 65 million shekels. In December 2022, after the formation of the 37th government of Israel, the Likud returned to power and Netanyahu was appointed to head the government. In March 2025 the Likud faction and the Mamlachti Right faction signed a political agreement providing for a future merger of the factions. Later that year elections were declared for the Likud chairmanship and its candidacy for prime minister; Netanyahu was the sole candidate and was declared party chairman on 6 November. In February 2026 the party published a doctored photograph showing Yair Lapid and Naftali Bennett raising their hands together with the heads of the Arab parties. The chairman of the Central Elections Committee, Justice Noam Sohlberg, ordered the photograph removed and the payment of 8,500 shekels in costs.
Character and ideology
Politically, the Herut movement, which was the main component of the Likud, is the ideological successor of the Revisionist movement, which claimed ownership over the whole of the Land of Israel, including eastern Transjordan. The Likud supported the settlement enterprise from its beginning and expanded it considerably, from the start of Menachem Begin's term as prime minister until the change of government after the elections for the 13th Knesset in 1992. A prominent supporter of settlement in Judea, Samaria, and Gaza under the Likud's leadership was Ariel Sharon, who throughout the 1980s oversaw the establishment of most of the settlements, in his roles as Minister of Industry and Trade and later as Minister of Construction and Housing. On the question of Israeli–Syrian relations, the Likud expressed support in principle for developing settlement in the Golan Heights and opposition to withdrawing from it, and it even led the extension of Israeli law to the Golan; in practice, the number of communities and residents there did not grow significantly during its years in power. Despite its declared hard line on land-for-peace deals, it was the Likud under Menachem Begin that in 1979 delivered the peace agreement with Egypt, and in 2005 Ariel Sharon led the Disengagement Plan; under these, Israel committed to a full withdrawal from the Sinai Peninsula, the communities of the Yamit district, Gaza, and northern Samaria. Over the years, the possibility of territorial concessions caused sharper disputes within the movement than any other issue; when such a concession was carried out without a peace agreement, in the Disengagement Plan, it even led to the split in the Likud and the founding of the Kadima party at the end of 2005. On 28 January 2024 ministers Haim Katz, May Golan, Idit Silman, Amichai Chikli, and Shlomo Karhi, together with Knesset members Hanoch Milwidsky, Nissim Vaturi, Amit Halevi, Tally Gotliv, Eti Atiya, Moshe Passal, Ariel Kallner, and Dan Illouz, took part in the "Israel's Victory" conference on conquering and resettling the Gaza Strip; minister Miki Zohar appeared in a video promoting the conference. The Likud's security policy calls for firm measures against Palestinian terrorism and its perpetrators, in Israel and abroad. As part of this policy the Likud government initiated Operation Peace for Galilee with the aim of striking the PLO presence in Lebanon. Although that mission ultimately succeeded with the forced departure of Yasser Arafat and his men from Beirut, the war contributed to the growth of hostile intra-Lebanese organizations such as Hezbollah and Amal, and entailed an Israeli presence in Lebanon (limited, from June 1985, to the security zone) that lasted about 18 years.
The Likud, together with Defense Minister Yitzhak Rabin of the Labor Party, also backed a firm hand in suppressing the First Intifada. In purely military terms this policy succeeded and the violent Palestinian uprising largely died down, but Israel failed to suppress the national aspirations of the Palestinians, and its international image was damaged. During the years of the Oslo process the Likud led the right in opposing the accords, pointing to the security dangers they entailed. After the wave of attacks in February–March 1996, Netanyahu's election campaign was led by the slogan "Making a Secure Peace", meant to underline the Likud's willingness to continue the diplomatic process while firmly demanding that the Palestinian Authority meet its security commitments. The Likud government formed in 1996 continued with diplomatic agreements with the Palestinians, such as the Hebron Protocol and the Wye River Memorandum, but insisted on the principle of reciprocity, demanding that the Palestinians take firm action against terrorist organizations such as Hamas and Palestinian Islamic Jihad in exchange for political concessions. Likud governments also did not hesitate to deploy the IDF far beyond the borders of the State of Israel against what they perceived as threats to its safety: Iraq's nuclear program was severely damaged (and in effect destroyed) in Operation Opera, and steps were taken against the PLO even during its stay in Tunisia after its expulsion from Lebanon, such as the bombing of its headquarters and the assassination of Abu Jihad. Economically, too, the Likud, with its roots in the Liberal Party, sits on the right: from its beginning it opposed the socialist economic policy of the Alignment governments and supported the encouragement of private enterprise, the reduction of government involvement in the economy, and the privatization of state-owned bodies. At the same time, a certain tension can be identified between the Likud's support for economic liberalism and its branding as a social party seeking to advance the welfare of the weaker strata. An example can be seen in Begin's term, in which, alongside the economic upheaval program that included many liberal measures such as lifting restrictions on the foreign-currency market and abolishing government subsidies, the Likud pursued a policy of considerable government investment in welfare budgets, expressed among other things in the Project Renewal neighborhood-rehabilitation program and in a significant increase in National Insurance allowances. After the failure of the Likud's economic program, partly because of the global energy crisis, and the economy's descent into a severe crisis from which it was extracted only by the economic stabilization plan of 1985, the Likud adopted a more moderate approach and did not initiate comprehensive reforms in the economy for the remainder of Shamir's tenure.
A moderate return to the Likud's economic platform came with its returns to power in 1996 and 2001, which brought certain economic reforms, but a further significant attempt to realize the Likud's economic ideology came only with the start of Netanyahu's term as finance minister in 2003, which brought with it a forceful capitalist policy that included many liberal reforms aimed at reducing government spending and the tax burden and at increasing competition in the economy. Nevertheless, owing to a drop in tax revenues, as well as extensive commitments to coalition partners, Netanyahu's second government began its term with a wave of tax increases, peaking with the raising of VAT from 15.5% to 16.5%, steps that contradicted what Netanyahu had said on the eve of the elections. The Likud constitution defines the movement's goal in the economic sphere as striving for a free economy with reduced government intervention, alongside "state responsibility for a reasonable level of personal security, education, health, employment, and environmental quality". Historically, one of the Likud's main banners was the struggle against the Histadrut labor federation, which was controlled by Mapai and later by the Alignment; this issue lost its importance as the Histadrut's influence declined and its ties to the Labor Party weakened. Over the years, heads of powerful workers' committees even came to occupy positions of power in the Likud, such as Haim Katz of the Israel Aerospace Industries workers' committee and Pinchas Idan of the Airports Authority workers' committee. On questions of religion and state and the struggle over the status quo, the Likud is considered close to the traditionalist side of the debate. The Likud supported various laws justified as preserving the Jewish character of the state, such as the Festival of Matzot Law (Hametz Prohibitions), which bans the public display of leavened products for sale on Passover, and the Foundations of Law Act, which provides that "the principles of Israel's heritage" are to be consulted in the case of a lacuna in the law. In the battles over the opening of entertainment and commercial venues on the Sabbath, which erupted in full force during the 1990s, the Likud likewise generally sided with the religious parties, in contrast to the positions of Jabotinsky, who held a secular outlook. The National Religious Party was a member of every Likud government until the Disengagement Plan, and state religious education, as well as the hesder yeshivas and the pre-military academies, enjoyed generous support in this period. As the vision of Gush Emunim became established at the heart of the religious-Zionist consensus, this pairing came to be taken for granted, and the days of the Mizrachi movement as a permanent ally of Mapai and Alignment governments were gone for good. Likud governments, with the exception of the 30th and 33rd governments of Israel, formed alliances with the Haredi parties.
Begin's government removed the cap on the number of deferments granted under the Torato Umanuto arrangement (which then stood at 800 a year), a step that over the years led to a situation in which, as of 2007, 11% of each draft cohort does not enlist for this reason. Likud governments also considerably enlarged the budgets of the yeshivas, of the Chinuch Atzmai independent school system, and of Shas's Maayan Hachinuch Hatorani network, as well as support for Haredi associations, and Begin also decided to extend income-support allowances to kollel students. Although Alignment governments also showed willingness to make far-reaching concessions to the Haredim in the coalitions they formed over the years, it appears that the strong traditional component in the Likud's base of support, with the Likud vote concentrated mainly among the population defining itself as traditional or religious, and conversely the distinctly secular character of Israel's left-wing movements and their identification in the Haredi sector with an elite to which the Haredim never felt they belonged, produced an emotional identification with the Likud among many in the Haredi sector, and at the same time deep feelings of resentment toward the Alignment and Labor. These feelings found expression, among other things, in Rabbi Shach's "rabbits and pigs" speech. A major rupture in the Likud's relations with the religious Zionists occurred at the time of the Disengagement Plan in 2005, during which a Likud-led government carried out the evacuation of all the communities of the Gaza Strip and northern Samaria. The fact that the sudden move was identified with Sharon more than with the Likud, and that most of its supporters left for Kadima after the movement split, did help mend the rift to some degree, but the possibility of further territorial concessions remains a potential minefield in the Likud's relations with the religious-Zionist public. In contrast to that alliance, which for all its difficulties rests on a shared political vision and belief in Zionist ideology, as well as on a strong traditional and even religious presence within the Likud, the connection to the Haredim appears to have been more political than ideological; the ideal of Torah study as a way of life never won much support in the party, and the Haredim's demonstrative separation from Israeli society also prevented any meaningful bond between them and the Likud. Thus in 2003 the Likud agreed without much hesitation to the Shinui party's demand to form a coalition without the Haredim after the elections for the 16th Knesset, and even cut allowances deeply as part of the economic policy of the government it then headed. Likewise, when the disputes over conversion and the heter mechira in sabbatical years flared up, Likud figures generally opposed the strict halakhic approach of the Haredim.
Supporters
In the years 1969–1992 Likud voters were on average younger than Alignment voters, apparently because of generational turnover rather than individual change over voters' lifetimes. By contrast, no connection was found between gender and voting patterns. Likud voters came on average from a lower class: while surveys found no link between voting patterns and income level, Likud voters traditionally come from the peripheral regions and are characterized by higher-density housing. Differences in education between Likud and Alignment voters were not found in 1969–1977, but began to appear in the 1981 elections and strengthened in the elections of 1984–1992. In 1969 and 1973 Likud voters were not more religious than Alignment voters, although they were more supportive of a Jewish character for the state; in 1977–1981 Likud voters were more religious than Alignment voters, and in the elections of 1984–1992 they were far more religious than voters for the Alignment and the Labor Party. The 2015 election results showed significant, strong support for the Likud in Israel's peripheral towns; in those elections the Likud's victory rested on the support of traditional and religious voters from the development towns and from the settlements in Judea and Samaria[citation needed]. According to surveys based on voting in the elections for the 24th Knesset, 56% of Likud voters are traditional (compared with 33% of the adult Jewish population), 31% are secular (45% of the adult Jewish population), 11% are national-religious (12% of the adult Jewish population), and 2% are Haredi (10% of the adult Jewish population). The supporters of the Liberal Party were mainly Ashkenazim, so that despite the support of veteran Mizrahim for the Herut movement, Gahal was not necessarily identified with Mizrahi supporters. In the transition from Gahal in 1969 to the Likud in 1973 Mizrahi support for the Likud grew, and in the 1977 elections it grew further still, so that the Likud beat the Alignment in those elections in almost every neighborhood, locality, and city whose population was mainly Mizrahi. In that year about 52% of Likud voters were Mizrahim (Jews who immigrated from Asia or Africa, or children of Jews who immigrated from Asia or Africa), 38% were Ashkenazim (Jews who immigrated from Europe or America, or their children), and 10% were native-born Israelis. In 1977, 53% of Mizrahim and 44% of native-born Israelis voted for the Likud, but only 20% of Ashkenazim. Mizrahi support for the Likud, and for Gahal from which it grew, intensified over the years and varied in scale with distance from Israel's main cities.
The veteran Mizrahim living in the old neighborhoods of the large cities, such as Musrara, Nachlaot, Kerem HaTeimanim, the Hatikva quarter, Kfar Shalem, Neve Shalom, and Wadi Salib, were traditional supporters of the Herut movement, and already in 1965 Gahal won an average of 40.7% support in these neighborhoods. After the founding of the Likud in 1973 the support rate rose to 46.7%, and in 1977 it reached 55.7%. In the immigrant housing estates of the large cities, such as Romema, Kiryat HaYovel, Ir Ganim, Pardes Katz, Tel Giborim and Jessy Cohen in Holon, Amishav, Tel Kabir, Mahane David in Haifa, and Ramat Herzl and Dora in Netanya, Gahal won an average of 25.6% in the 1965 elections for the 6th Knesset. Support rose over the years: in 1973 the Likud won more votes than the Alignment in most of the immigrant estates, and in 1977 support for the Likud reached an average of 47.4%. In immigrant towns adjacent to the veteran cities, such as Or Yehuda, Rosh HaAyin, Or Akiva, Kiryat Ata, Yehud, Lod, Kiryat Yam, and Ramla, the support rate was similar to that in the urban immigrant estates, rising from 23.7% in 1965 to 45.8% in 1977. In the development towns Gahal's support rate in 1965 was relatively low, averaging 17.5%; Amiram Gonen conjectures that the reason for this low rate was the dependence of the peripheral residents on the central government. The support rate in the development towns rose in 1977 to 44.1%, and in that year the Likud received more votes than the Alignment in almost all the development towns. The identification of the Likud with the Mizrahim peaked in the 1981 elections for the 10th Knesset, in which the ethnic rift was a central part of the campaign. At an Alignment election rally Dudu Topaz, in what became known as the "chakhchakhim speech", said: "A pleasure to see this crowd, and a pleasure to see that there are no chakhchakhim here wrecking election rallies... The chakhchakhim are at Metzudat Ze'ev. They are barely rear-echelon soldiers, if they go to the army at all. Here are the soldiers and the commanders of the combat units." Menachem Begin responded the next day, telling his voters to telephone their acquaintances: "Just tell them what Dudu Topaz said here; the whole nation must know it. It is one sentence in all: the chakhchakhim are all at Metzudat Ze'ev. How fortunate we are that they are at Metzudat Ze'ev." The identification of the Likud with the Mizrahim remained very strong in the years that followed as well. In the 1992 elections, 41% of Mizrahim and 24% of native-born Israelis voted for the Likud, but only 16% of Ashkenazim; as a result, 68% of Likud voters were Mizrahim, 21% Ashkenazim, and 11% native-born Israelis.
The decline of the large parties, the rise of the Shas party, and the rising rate of inter-ethnic marriage have blunted the ethnic affiliation of Israel's large parties, so that although the Likud still enjoys greater support among Mizrahim, the association is no longer as pronounced.
Party structure
The broadest body in the Likud party is the movement's registered members, who purchase their membership by paying annual dues. This body elects the party chairman (who is also its candidate for prime minister) and the members of the party's Central Committee, and its members may stand for election to the party's institutions. Following a change to the Likud constitution made before the elections for the 17th Knesset in 2006, registered members also gained the right to elect the movement's Knesset members in personal primaries. As of 2022 the Likud has about 137,000 members. Alongside the official bodies, several internal organizations and member groups operate within the Likud with official recognition, though they are not part of the party machinery. Among the active groups are the Liberals in the Likud, which promotes economic and social policy based on the principles of classical liberalism; the National Headquarters in the Likud, which promotes the settlement cause; and the New Likudniks, a group promoting a liberal-democratic agenda.
Letters
The letters of the party's logo were originally created in 1977 by the graphic designer Gideon Sagi, who also drew the flourish of the letter lamed so that it would resemble a flag waving in the wind. In 1981 the advertising executive Reuven Adler reworked the logo, basing it on a slanted form of the Hebrew typeface "Haim" and adding the color blue. Many campaigns of Likud candidates whose names contain a lamed incorporate the curled lamed identified with the logo.
The Likud in local government
In the 2018 elections for Israel's local authorities the party seated 114 representatives in 52 city councils, among them 16 mayors. Yaakov Edri was elected mayor of Or Akiva on the Likud's behalf, but on 20 March 2023 an indictment was filed against him charging bribery, aggravated fraud, money laundering, obstruction of justice, theft by a public servant, fraud, breach of trust, and physical assault, and as a result he resigned his post. The Likud submitted its own lists in 60 local authorities in the 2024 elections and was a partner in the lists submitted in nine additional authorities. In those elections about 106 Likud representatives were elected in about 49 local authorities, among them 20 mayors; beyond that, the party claimed that its support had secured the election of additional council heads.
Party leaders (with lifespans), prime ministers from the Likud (2022–present), the Likud's Knesset representation by election from 1973 through 2022 (with notes, including Ahi's Eli Ben-Dahan in April 2019, Kulanu's Yifat Shasha-Biton in September 2019 and 2020, and Atid Echad's Ofir Sofer in 2021), and election results, including runs as part of Likud-Gesher-Tzomet and Likud - Yisrael Beiteinu, were presented here in tables that are not reproduced in this text.
International cooperation and organizations
At the end of 2016 the Likud party joined, as a member, the Alliance of Conservatives and Reformists in Europe, which operates the European Conservatives and Reformists group in the European Parliament. The party has also been in talks to join the International Democrat Union (IDU), the international organization of conservative parties. The Likud has representations in the United States and in Europe, as well as an international organization named World Likud, and it is represented in the World Zionist Organization and its institutions. In May 2017 the party joined the "Conservative International", an umbrella organization for conservative right-wing parties modeled on the Socialist International, which was founded that year; the head of the party's foreign-relations department, Eli Hazan, took part in the founding conference held in Miami, Florida. In September 2018 the party was admitted to a further conservative umbrella organization, the International Democrat Union, at a conference held in Los Angeles, where it was received by the union's chairman, the former prime minister of Canada Stephen Harper. The Likud has also cooperated with far-right parties, including parties boycotted by Israel's Foreign Ministry over antisemitic positions and, in some cases, even neo-Nazi activity. MK Yehuda Glick of the Likud called on the prime minister of Israel to change the current policy of boycotting movements with antisemitic characteristics. Likud Knesset members met, among others, with representatives of the following parties: See also Further reading External links Explanatory notes Footnotes
========================================
[SOURCE: https://en.wikipedia.org/wiki/Algorithmic_bias] | [TOKENS: 12632]
Contents Algorithmic bias Algorithmic bias describes a systematic and repeatable harmful tendency in a computerized sociotechnical system to create "unfair" outcomes, such as "privileging" one category over another in ways that may or may not be different from the intended function of the algorithm. Bias can emerge from many factors, including intentionally biased design decisions, unintended or unanticipated use, or decisions relating to the way data is coded, collected, selected, or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (enforced in 2018) and the Artificial Intelligence Act (proposed in 2021 and adopted in 2024). As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes, without last-mile thinking. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.
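The definition above, consistent decisions on relevant criteria versus systematically different outcomes for nearly identical cases, can be illustrated with a toy scoring sketch. The scoring function, the `postal_area` feature, the weights, and the threshold are all hypothetical, invented here for illustration; they are not taken from any real credit-scoring system:

```python
# Toy illustration (hypothetical): two applicants with identical finances
# differ only in an irrelevant, group-correlated attribute (postal area).

def credit_score(income, debt, postal_area):
    # A fair score would weigh only the financial criteria.
    score = 0.01 * income - 0.02 * debt
    # Biased extra term: penalizes one postal area regardless of finances.
    if postal_area == "A":
        score -= 20
    return score

applicant_1 = dict(income=4000, debt=500, postal_area="A")
applicant_2 = dict(income=4000, debt=500, postal_area="B")

THRESHOLD = 25
for applicant in (applicant_1, applicant_2):
    score = credit_score(**applicant)
    decision = "approved" if score >= THRESHOLD else "denied"
    print(applicant["postal_area"], score, decision)
```

Both applicants are financially identical; the group-correlated `postal_area` term alone flips the outcome, which is precisely the repeatable pattern of unequal treatment of nearly identical users that the definition describes.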
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service. A 2021 survey identified multiple forms of algorithmic bias, including historical, representation, and measurement biases, each of which can contribute to unfair outcomes. Definitions Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate a usable output. For a rigorous technical introduction, see Algorithms. Advances in computer hardware and software have led to an increased capability to process, store and transmit data. This has in turn made the design and adoption of technologies such as machine learning and artificial intelligence technically and commercially feasible. By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more.
Contemporary social scientists are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality. The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a credit score algorithm may deny a loan without being unfair, if it is consistently weighing relevant financial criteria. If the algorithm recommends loans to one group of users, but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, an algorithm can be described as biased. This bias may be intentional or unintentional (for example, it can come from biased data produced by a worker who previously did the job the algorithm will now do). Methods Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a database according to human-designed cataloging criteria. Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users. Beyond assembling and processing data, bias can emerge as a result of design.
For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an uncertainty bias, offering more confident assessments when larger data sets are available. This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations. History The earliest computer programs were designed to mimic human reasoning and deductions, and were deemed to be functioning when they successfully and consistently reproduced that human logic. In his 1976 book Computer Power and Human Reason, artificial intelligence pioneer Joseph Weizenbaum suggested that bias could arise both from the data used in a program and from the way a program is coded. Weizenbaum wrote that programs are a sequence of rules created by humans for a computer to follow. By following those rules consistently, such programs "embody law", that is, enforce a specific way to solve problems. The rules a computer follows are based on the assumptions of a computer programmer for how these problems might be solved.
That means the code could incorporate the programmer's imagination of how the world works, including their biases and expectations. While a computer program can incorporate bias in this way, Weizenbaum also noted that any data fed to a machine additionally reflects "human decision making processes" as data is being selected. Finally, he noted that machines might also transfer good information with unintended consequences if users are unclear about how to interpret the results. Weizenbaum warned against trusting decisions made by computer programs that a user doesn't understand, comparing such faith to a tourist who can find his way to a hotel room exclusively by turning left or right on a coin toss. Crucially, the tourist has no basis for understanding how or why he arrived at his destination, and a successful arrival does not mean the process is accurate or reliable. An early example of algorithmic bias resulted in as many as 60 women and ethnic-minority applicants being denied entry to St George's Hospital Medical School each year from 1982 to 1986, based on the implementation of a new computer-guidance assessment system that denied entry to women and to men with "foreign-sounding names" on the basis of historical trends in admissions. While many schools at the time employed similar biases in their selection process, St George's was notable for automating that bias through the use of an algorithm, thus gaining attention on a much wider scale. In recent years, as algorithms increasingly rely on machine learning methods applied to real-world data, algorithmic bias has become more prevalent due to inherent biases within the data itself. For instance, facial recognition systems have been shown to misidentify individuals from marginalized groups at significantly higher rates than white individuals, highlighting how biases in training datasets manifest in deployed systems.
A 2018 study by Joy Buolamwini and Timnit Gebru found that commercial facial recognition technologies exhibited error rates of up to 35% when identifying darker-skinned women, compared to less than 1% for lighter-skinned men. Algorithmic biases are not only technical failures but often reflect systemic inequities embedded in historical and societal data. Researchers and critics, such as Cathy O'Neil in her book Weapons of Math Destruction (2016), emphasize that these biases can amplify existing social inequalities under the guise of objectivity. O'Neil argues that opaque, automated decision-making processes in areas such as credit scoring, predictive policing, and education can reinforce discriminatory practices while appearing neutral or scientific. Though well-designed algorithms frequently produce outcomes that are as equitable as, or more equitable than, the decisions of human beings, cases of bias still occur, and are difficult to predict and analyze. The complexity of analyzing algorithmic bias has grown alongside the complexity of programs and their design. Decisions made by one designer, or team of designers, may be obscured among the many pieces of code created for a single program; over time these decisions and their collective impact on the program's output may be forgotten.: 115 In theory, these biases may create new patterns of behavior, or "scripts", in relation to specific technologies as the code interacts with other elements of society. Biases may also impact how society shapes itself around the data points that algorithms require. For example, if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests.: 180 The decisions of algorithmic programs can be seen as more authoritative than the decisions of the human beings they are meant to assist,: 15 a process described by author Clay Shirky as "algorithmic authority".
Shirky uses the term to describe "the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources", such as search results. This neutrality can also be misrepresented by the language used by experts and the media when results are presented to the public. For example, a list of news items selected and presented as "trending" or "popular" may be created based on significantly wider criteria than just their popularity.: 14 Because of their convenience and authority, algorithms are theorized as a means of delegating responsibility away from humans.: 16 : 6 This can have the effect of reducing alternative options, compromises, or flexibility.: 16 Sociologist Scott Lash has critiqued algorithms as a new form of "generative power", in that they are a virtual means of generating actual ends. Where previously human behavior generated data to be collected and studied, powerful algorithms increasingly could shape and define human behaviors.: 71 While blind adherence to algorithmic decisions is a concern, an opposite issue arises when human decision-makers exhibit "selective adherence" to algorithmic advice. In such cases, individuals accept recommendations that align with their preexisting beliefs and disregard those that do not, thereby perpetuating existing biases and undermining the fairness objectives of algorithmic interventions. Consequently, incorporating fair algorithmic tools into decision-making processes does not automatically eliminate human biases. 
Concerns over the impact of algorithms on society have led to the creation of working groups in organizations such as Google and Microsoft, which have co-created a working group named Fairness, Accountability, and Transparency in Machine Learning.: 115 Ideas from Google have included community groups that patrol the outcomes of algorithms and vote to control or restrict outputs they deem to have negative consequences.: 117 In recent years, the study of the Fairness, Accountability, and Transparency (FAT) of algorithms has emerged as its own interdisciplinary research area with an annual conference called FAccT. Critics have suggested that FAT initiatives cannot serve effectively as independent watchdogs when many are funded by corporations building the systems being studied. NIST's AI Risk Management Framework 1.0 and its 2024 Generative AI Profile provide practical guidance for governing and measuring bias mitigation in AI systems.

Types

Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies. Such ideas may influence or create personal biases within individual designers or programmers.
Such prejudices can be explicit and conscious, or implicit and unconscious.: 334 : 294 Poorly selected input data, or simply data from a biased source, will influence the outcomes created by machines.: 17 Encoding pre-existing bias into software can preserve social and institutional bias, and, without correction, could be replicated in all future uses of that algorithm.: 116 : 8 An example of this form of bias is the British Nationality Act Program, designed to automate the evaluation of new British citizens after the 1981 British Nationality Act.: 341 The program accurately reflected the tenets of the law, which stated that "a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not.": 341 : 375 In its attempt to transfer a particular logic into an algorithmic process, the BNAP inscribed the logic of the British Nationality Act into its algorithm, which would perpetuate it even if the act were eventually repealed.: 342 Another source of bias, which has been called "label choice bias", arises when proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used those predictions to allocate resources to patients with complex health needs. This introduced bias because Black patients incur lower costs, even when they are just as unhealthy as White patients. Solutions to the "label choice bias" aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict); for the prior example, instead of predicting cost, researchers would focus on the more meaningful variable of healthcare need. Adjusting the target led to almost double the number of Black patients being selected for the program.
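The label-choice effect described above can be illustrated with a small, entirely synthetic sketch (the population, the 30% cost gap, and the selection size below are invented for illustration, not taken from the study): two groups have identically distributed health need, but one group's recorded costs run lower for the same need, so ranking patients by the cost proxy under-selects that group, while ranking by need does not.

```python
import random

random.seed(0)

# Synthetic population: equal distribution of true health need in both
# groups, but group B's recorded costs run ~30% lower for the same need
# (a purely illustrative stand-in for unequal access to care).
def make_patient(group):
    need = random.uniform(0, 10)             # true health need
    access = 1.0 if group == "A" else 0.7    # cost recorded per unit of need
    return {"group": group, "need": need, "cost": need * access}

patients = ([make_patient("A") for _ in range(500)]
            + [make_patient("B") for _ in range(500)])

def group_b_selected(key, k=200):
    """Count group-B patients in the top-k when ranking by the given target."""
    top = sorted(patients, key=lambda p: p[key], reverse=True)[:k]
    return sum(1 for p in top if p["group"] == "B")

b_by_cost = group_b_selected("cost")   # actual target: predicted cost
b_by_need = group_b_selected("need")   # ideal target: actual need

print(f"Group B selected via cost proxy: {b_by_cost} of 200")
print(f"Group B selected via need:       {b_by_need} of 200")
```

Because need is distributed identically in both groups, ranking by need selects roughly equal numbers from each, while ranking by the cost proxy sharply under-selects group B even though its members are just as unhealthy.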
Machine learning bias refers to systematic and unfair disparities in the output of machine learning algorithms. These biases can manifest in various ways and are often a reflection of the data used to train these algorithms. Here are some key aspects: Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al.'s work shows that current large language models, as they are predominantly trained on English-language data, often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried about political ideologies such as “What is liberalism?”, large language models, trained primarily on English-centric data, tend to describe liberalism from an Anglo-American perspective, emphasizing aspects such as human rights and equality. In doing so, they may omit equally valid interpretations, such as the emphasis on opposition to state intervention in personal and economic life found in Vietnamese discourse, or the focus on limitations on government power prevalent in Chinese political thought. Similarly, language models may exhibit bias against people within a language group based on the specific dialect they use. Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias—that is, the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly.
This phenomenon undermines the reliability of large language models in multiple-choice settings. Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men. Empirical audits of deployed AI systems also show intersectional gender bias; for example, Google Cloud Vision AI underidentifies women as scientists, with the strongest underrepresentation for women of color. Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature, groups of people, sometimes in harmful or derogatory ways. A recent focus in research has been on the complex interplay between the grammatical properties of a language and real-world biases that can become embedded in AI systems, potentially perpetuating harmful stereotypes and assumptions. A study on gender bias in language models trained on Icelandic, a highly grammatically gendered language, revealed that the models exhibited a significant predisposition towards the masculine grammatical gender when referring to occupation terms, even for female-dominated professions. This suggests the models amplified societal gender biases present in the training data. Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases.
Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. Racial bias refers to the tendency of machine learning models to produce outcomes that unfairly discriminate against or stereotype individuals based on race or ethnicity. This bias often stems from training data, which is shaped by humans' opinions, assumptions, and racial prejudices. These data lead AI systems to reproduce and amplify historical and systemic discrimination. For example, AI systems used in hiring, law enforcement, or healthcare may disproportionately disadvantage certain racial groups by reinforcing existing stereotypes or underrepresenting them in key areas. Such biases can manifest in ways like facial recognition systems misidentifying individuals of certain racial backgrounds or healthcare algorithms underestimating the medical needs of minority patients. Addressing racial bias requires careful examination of data, improved transparency in algorithmic processes, and efforts to ensure fairness throughout the AI development lifecycle. Empirical audits of deployed vision models also show race-linked disparities in occupational labeling; for example, in Google Cloud Vision AI, women of color were the least likely to be identified as scientists, indicating compounding effects of race and gender in model outputs. Another clear indication of how racial biases are reproduced through technological advances is predictive policing. Predictive policing tools make assessments about who will commit future crimes, when those crimes will be committed, and where they may occur, based on location and personal data. This means areas where there has been an uptick in crime usually see more predictions of future crime.
For instance, Afghan nationals were largely restricted from purchasing ammonium-based fertilizers after it was discovered that most improvised explosive devices used against United States soldiers contained nitrates, a chief ingredient of such fertilizers. This ban, subsequently enforced by U.S. forces with the use of artificial intelligence, effectively denied a major agricultural input even to Afghan nationals whose sole means of livelihood was agriculture, because the AI used for enforcement relied on a blanket profile of bearded Muslim or Afghan men. In China, most notably in the Muslim-minority Xinjiang region, the use of AI to restrict the Uyghur minority goes far beyond banning specific materials. There, a system of automatic denial is widely used. Unlike the Afghan fertilizer ban, Chinese systems use AI to define "suspicious behavior" and then automatically prevent Uyghurs from purchasing household commodities such as kitchen knives; when a purchase is permitted at all, it requires passing a stringent set of protocols, including having a barcode of trustworthiness etched on the knife, containing the personal data and identification of the purchasing Uyghur. By training artificial intelligence models to predict behavior, or even to engage in racial profiling, the system is unequivocally made to be racially biased. Technical bias emerges through limitations of a program, computational power, its design, or other constraint on the system.: 332 Such bias can also be a restraint of design, for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display.: 336 Another case is software that relies on randomness for fair distributions of results.
If the random number generation mechanism is not truly random, it can introduce bias, for example, by skewing selections toward items at the end or beginning of a list.: 332 A decontextualized algorithm uses unrelated information to sort results, for example, a flight-pricing algorithm that sorts results by alphabetical order would be biased in favor of American Airlines over United Airlines.: 332 The opposite may also apply, in which results are evaluated in contexts different from those in which they are collected. Data may be collected without crucial external context: for example, when facial recognition software is used by surveillance cameras, but evaluated by remote staff in another country or region, or evaluated by non-human algorithms with no awareness of what takes place beyond the camera's field of vision. This could create an incomplete understanding of a crime scene, for example, potentially mistaking bystanders for those who committed the crime.: 574 Lastly, technical bias can be created by attempting to formalize decisions into concrete steps on the assumption that human behavior works in the same way. For example, software weighs data points to determine whether a defendant should accept a plea bargain, while ignoring the impact of emotion on a jury.: 332 Another unintended result of this form of bias was found in the plagiarism-detection software Turnitin, which compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to identify non-native speakers of English than native speakers, as the latter group might be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms.
Because it is easier for native speakers to evade detection as a result of the technical constraints of the software, this creates a scenario where Turnitin flags non-native speakers of English for plagiarism while allowing more native speakers to evade detection.: 21–22 Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts.: 334 Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, business models, or shifting cultural norms.: 334, 336 This may exclude groups through technology, without providing clear outlines to understand who is responsible for their exclusion.: 179 : 294 Similarly, problems may emerge when training data (the samples "fed" to a machine, by which it models certain conclusions) do not align with contexts that an algorithm encounters in the real world. In 1990, an example of emergent bias was identified in the software used to place US medical students into residencies, the National Residency Match Program (NRMP).: 338 The algorithm was designed at a time when few married couples would seek residencies together. As more women entered medical schools, more students were likely to request a residency alongside their partners. The process called for each applicant to provide a list of preferences for placement across the US, which was then sorted and assigned when a hospital and an applicant both agreed to a match. In the case of married couples where both sought residencies, the algorithm weighed the location choices of the higher-rated partner first. The result was a frequent assignment of highly preferred schools to the first partner and lower-preferred schools to the second partner, rather than sorting for compromises in placement preference.: 338 Additional emergent biases include: Unpredictable correlations can emerge when large data sets are compared to each other.
For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). By selecting according to certain behavior or browsing patterns, the end effect would be almost identical to discrimination through the use of direct race or sexual orientation data.: 6 In other cases, the algorithm draws conclusions from correlations, without being able to understand those correlations. For example, one triage program gave lower priority to asthmatics who had pneumonia than to asthmatics who did not have pneumonia. The algorithm did this because it simply compared survival rates, and asthmatics with pneumonia had high survival rates in the historical data. Those survival rates were high precisely because asthmatics with pneumonia are at the highest risk, so hospitals have typically given them the best and most immediate care. Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand.: 334 These exclusions can become compounded, as biased or exclusionary technology is more deeply integrated into society.: 179 Apart from exclusion, unanticipated uses may emerge from the end user relying on the software rather than their own knowledge. In one example, an unanticipated user group led to algorithmic bias in the UK, when the British Nationality Act Program was created as a proof-of-concept by computer scientists and immigration lawyers to evaluate suitability for British citizenship. The designers had access to legal expertise beyond the end users in immigration offices, whose understanding of both software and immigration law would likely have been unsophisticated. The agents administering the questions relied entirely on the software, which excluded alternative pathways to citizenship, and used the software even after new case laws and legal interpretations led the algorithm to become outdated.
As a result of designing an algorithm for users assumed to be legally savvy on immigration law, the software's algorithm indirectly led to bias in favor of applicants who fit a very narrow set of legal criteria set by the algorithm, rather than the broader criteria of British immigration law.: 342 Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses which are fed back into the algorithm. For example, simulations of the predictive policing software (PredPol), deployed in Oakland, California, suggested an increased police presence in black neighborhoods based on crime data reported by the public. The simulation showed that the public reported crime based on the sight of police cars, regardless of what police were doing. The simulation interpreted police car sightings in modeling its predictions of crime, and would in turn assign an even larger increase of police presence within those neighborhoods. The Human Rights Data Analysis Group, which conducted the simulation, warned that in places where racial discrimination is a factor in arrests, such feedback loops could reinforce and perpetuate racial discrimination in policing. Another well-known example of such behavior is COMPAS, a software program that estimates an individual's likelihood of becoming a criminal offender. The software is often criticized for labeling Black individuals as likely offenders at much higher rates than others, and it then feeds the data back into itself when individuals become registered offenders, further reinforcing the bias created by the dataset the algorithm is acting on. Recommender systems such as those used to recommend online videos or news articles can create feedback loops. When users click on content that is suggested by algorithms, it influences the next set of suggestions. Over time this may lead to users entering a filter bubble and being unaware of important or useful content.
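The feedback loop described above can be sketched with a deliberately stylized toy simulation (the incident rate, detection factor, and reallocation rule are all invented for illustration, and the model is far simpler than the PredPol simulation itself): two areas have identical true incident rates, recorded incidents scale with patrol presence because more patrols observe more events, and patrols are repeatedly shifted toward whichever area recorded more. A small initial imbalance then snowballs.

```python
# Two areas with identical true incident rates; recorded incidents
# scale with patrol presence, and patrols are repeatedly shifted
# toward the area that recorded more -- a self-reinforcing loop.
TRUE_RATE = 100               # actual incidents per period in each area
DETECTION_PER_PATROL = 0.01   # fraction of incidents observed per patrol unit

patrols = {"north": 55.0, "south": 45.0}  # small initial imbalance

for period in range(20):
    recorded = {area: TRUE_RATE * DETECTION_PER_PATROL * patrols[area]
                for area in patrols}
    hot = max(recorded, key=recorded.get)   # area with more recorded incidents
    cold = min(recorded, key=recorded.get)
    shift = min(5.0, patrols[cold])         # move up to 5 patrol units per period
    patrols[hot] += shift
    patrols[cold] -= shift

print(patrols)  # the initial 55/45 split drifts to an extreme allocation
```

Even though both areas generate the same number of incidents, the area that started with slightly more patrols ends up with all of them, because the system treats its own observation artifact (recorded incidents) as evidence about the underlying world.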
Impact

Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of a user who may mistake the algorithm as being impartial. For example, American Airlines created a flight-finding algorithm in the 1980s. The software presented a range of flights from various airlines to customers, but weighed factors that boosted its own flights, regardless of price or convenience. In testimony to the United States Congress, the president of the airline stated outright that the system was created with the intention of gaining competitive advantage through preferential treatment.: 2 : 331 In a 1998 paper describing Google, the founders of the company had adopted a policy of transparency in search results regarding paid placement, arguing that "advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." This bias would be an "invisible" manipulation of the user.: 3 A series of studies about undecided voters in the US and in India found that search engine results were able to shift voting outcomes by about 20%. The researchers concluded that candidates have "no means of competing" if an algorithm, with or without intent, boosted page listings for a rival candidate. Facebook users who saw messages related to voting were more likely to vote. A 2010 randomized trial of Facebook users showed a 20% increase (340,000 votes) among users who saw messages encouraging voting, as well as images of their friends who had voted. Legal scholar Jonathan Zittrain has warned that this could create a "digital gerrymandering" effect in elections, "the selective presentation of information by an intermediary to meet its agenda, rather than to serve its users", if intentionally manipulated.: 335 In 2016, the professional networking site LinkedIn was discovered to recommend male variations of women's names in response to search queries.
The site did not make similar recommendations in searches for men's names. For example, "Andrea" would bring up a prompt asking if users meant "Andrew", but queries for "Andrew" did not ask if users meant to find "Andrea". The company said this was the result of an analysis of users' interactions with the site. In 2012, the department store franchise Target was cited for gathering data points to infer when female customers were pregnant, even if they had not announced it, and then sharing that information with marketing partners.: 94 Because the data had been predicted, rather than directly observed or reported, the company had no legal obligation to protect the privacy of those customers.: 98 Web search algorithms have also been accused of bias. Google's results may prioritize pornographic content in search terms related to sexuality, for example, "lesbian". This bias extends to the search engine showing popular but sexualized content in neutral searches. For example, "Top 25 Sexiest Women Athletes" articles displayed as first-page results in searches for "women athletes".: 31 In 2017, Google adjusted these results along with others that surfaced hate groups, racist views, child abuse and pornography, and other upsetting and offensive content. Other examples include the display of higher-paying jobs to male applicants on job search websites. Researchers have also identified that machine translation exhibits a strong tendency towards male defaults. In particular, this is observed in fields linked to unbalanced gender distribution, including STEM occupations. In fact, current machine translation systems fail to reproduce the real world distribution of female workers. In 2015, Amazon.com turned off an AI system it developed to screen job applications when they realized it was biased against women. The recruitment tool excluded applicants who attended all-women's colleges and resumes that included the word "women's". 
A similar problem emerged with music streaming services: in 2019, it was discovered that the recommender system algorithm used by Spotify was biased against female artists. Spotify's song recommendations suggested more male artists over female artists. Algorithms have been criticized as a method for obscuring racial prejudices in decision-making.: 158 Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, black people are likely to receive longer sentences than white people who committed the same crime. This could potentially mean that a system amplifies the original biases in the data. In 2015, Google apologized when black users complained that an image-identification algorithm in its Photos application identified them as gorillas. In 2010, Nikon cameras were criticized when image-recognition algorithms consistently asked Asian users if they were blinking. Such examples are the product of bias in biometric data sets. Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points.: 154 Speech recognition technology can have different accuracies depending on the user's accent. This may be caused by a lack of training data for speakers of that accent. Biometric data about race may also be inferred, rather than observed. For example, a 2012 study showed that names commonly associated with blacks were more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual's name. A 2015 study also found that Black and Asian people are assumed to have lesser-functioning lungs due to racial and occupational exposure data not being incorporated into the prediction algorithm's model of lung function. In 2019, a research study revealed that a healthcare algorithm sold by Optum favored white patients over sicker black patients.
The algorithm predicts how much patients would cost the health-care system in the future. However, cost is not race-neutral, as black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions, which led to the algorithm scoring white patients as equally at risk of future health problems as black patients who suffered from significantly more diseases. A study conducted by researchers at UC Berkeley in November 2019 revealed that mortgage algorithms discriminated against Latino and African American borrowers on the basis of "creditworthiness", a measure rooted in U.S. fair-lending law, which allows lenders to use measures of identification to determine whether an individual is worthy of receiving loans. These particular algorithms were present in FinTech companies and were shown to discriminate against minorities.[non-primary source needed] Another study, published in August 2024, on large language models investigates how language models perpetuate covert racism, particularly through dialect prejudice against speakers of African American English (AAE). It highlights that these models exhibit more negative stereotypes about AAE speakers than any recorded human biases, while their overt stereotypes are more positive. This discrepancy raises concerns about the potential harmful consequences of such biases in decision-making processes. A study published by the Anti-Defamation League in 2025 found that several major LLMs, including ChatGPT, Llama, Claude, and Gemini, showed anti-Israel bias. A 2018 study found that commercial gender classification systems had significantly higher error rates for darker-skinned women, with error rates up to 34.7%, compared to near-perfect accuracy for lighter-skinned men. Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S.
courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants, and that black defendants are twice as likely as white defendants to be erroneously assigned the label "high-risk". One example is the use of risk assessments in criminal sentencing and parole hearings in the United States, where judges were presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. From 1920 to 1970, the nationality of a criminal's father was a consideration in those risk assessment scores.: 4 Today, these scores are shared with judges in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin. An independent investigation by ProPublica found that the scores were inaccurate 80% of the time, and disproportionately skewed to suggest blacks to be at risk of recidivism, 77% more often than whites. One study that set out to examine "Risk, Race, & Recidivism: Predictive Bias and Disparate Impact" alleges a two-fold (45 percent vs. 23 percent) adverse likelihood for black vs. Caucasian defendants to be misclassified as imposing a higher risk despite having objectively remained without any documented recidivism over a two-year period of observation. In the pretrial detention context, a law review article argues that algorithmic risk assessments violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be facially discriminatory, to result in disparate treatment, and to not be narrowly tailored. In 2017 a Facebook algorithm designed to remove online hate speech was found to advantage white men over black children when assessing objectionable content, according to internal Facebook documents.
The algorithm, which is a combination of computer programs and human content reviewers, was created to protect broad categories rather than specific subsets of categories. For example, posts denouncing "Muslims" would be blocked, while posts denouncing "Radical Muslims" would be allowed. An unanticipated outcome of the algorithm is to allow hate speech against black children, because such posts denounce the "children" subset of blacks, rather than "all blacks", whereas "all white men" would trigger a block, because whites and males are not considered subsets. Facebook was also found to allow ad purchasers to target "Jew haters" as a category of users, which the company said was an inadvertent outcome of algorithms used in assessing and categorizing data. The company's design also allowed ad buyers to block African-Americans from seeing housing ads. While algorithms are used to track and block hate speech, some were found to be 1.5 times more likely to flag information posted by Black users and 2.2 times as likely to flag information as hate speech if written in African American English. Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors, and to determine who belongs in certain locations at certain times.: 572 The ability of such algorithms to recognize faces across a racial spectrum has been shown to be limited by the racial diversity of images in its training database; if the majority of photos belong to one race or gender, the software is better at recognizing other members of that race or gender. However, even audits of these image-recognition systems are ethically fraught, and some scholars have suggested the technology's context will always have a disproportionate impact on communities whose actions are over-surveilled. For example, a 2002 analysis of software used to identify individuals in CCTV images found several examples of bias when run against criminal databases.
The software was assessed as identifying men more frequently than women, older people more frequently than the young, and identified Asians, African-Americans and other races more often than whites.: 190 A 2018 study found that facial recognition software most accurately identified light-skinned (typically European) males, with slightly lower accuracy rates for light-skinned females. Dark-skinned males and females were significantly less likely to be accurately identified by facial recognition software. These disparities are attributed to the under-representation of darker-skinned participants in data sets used to develop this software. In 2011, users of the gay hookup application Grindr reported that the Android store's recommendation algorithm was linking Grindr to applications designed to find sex offenders, which critics said inaccurately related homosexuality with pedophilia. Writer Mike Ananny criticized this association in The Atlantic, arguing that such associations further stigmatized gay men. In 2009, online retailer Amazon de-listed 57,000 books after an algorithmic change expanded its "adult content" blacklist to include any book addressing sexuality or gay themes, such as the critically acclaimed novel Brokeback Mountain.: 5 In 2019, it was found that on Facebook, searches for "photos of my female friends" yielded suggestions such as "in bikinis" or "at the beach". In contrast, searches for "photos of my male friends" yielded no results. Facial recognition technology has been seen to cause problems for transgender individuals. In 2018, there were reports of Uber drivers who were transgender or transitioning experiencing difficulty with the facial recognition software that Uber implements as a built-in security measure.
As a result, some trans Uber drivers had their accounts suspended, costing them fares and potentially their jobs, all because the facial recognition software had difficulty recognizing the face of a driver who was transitioning. Although the solution to this issue would appear to be including trans individuals in the training sets for machine learning models, an instance of trans YouTube videos that were collected for use as training data did not receive consent from the trans individuals included in the videos, raising an issue of privacy violation. A 2017 Stanford University study tested algorithms in a machine learning system that was said to be able to detect an individual's sexual orientation based on facial images. The model in the study correctly distinguished between gay and straight men 81% of the time, and between gay and straight women 74% of the time. The study drew a backlash from the LGBTQIA community, who feared the possible negative repercussions this AI system could have by putting members of the community at risk of being "outed" against their will. While the modalities of algorithmic fairness have been judged on the basis of different aspects of bias – such as gender, race, and socioeconomic status – disability is often left out of the list. The marginalization that people with disabilities currently face in society is being translated into AI systems and algorithms, creating even more exclusion. The shifting nature of disabilities and their subjective characterization make them more difficult to address computationally. The lack of historical depth in defining disabilities, collecting their incidence and prevalence in questionnaires, and establishing recognition adds to the controversy and ambiguity in their quantification and calculation.
The definition of disability has long been debated, shifting most recently from a medical model to a social model of disability, which holds that disability is a result of the mismatch between people's interactions and barriers in their environment, rather than of impairments and health conditions. Disabilities can also be situational or temporary, and can be considered in a constant state of flux. Disabilities are incredibly diverse, fall within a large spectrum, and can be unique to each individual. People's identity can vary based on the specific types of disability they experience, how they use assistive technologies, and whom they support. The high level of variability across people's experiences greatly personalizes how a disability can manifest. Overlapping identities and intersectional experiences are excluded from statistics and datasets, and are hence underrepresented or nonexistent in training data. Therefore, machine learning models are trained inequitably and artificial intelligence systems perpetuate more algorithmic bias. For example, if people with speech impairments are not included in training voice control features and smart AI assistants, they are unable to use the feature, or the responses they receive from a Google Home or Alexa are extremely poor. Given the stereotypes and stigmas that still exist surrounding disabilities, the sensitive nature of revealing these identifying characteristics also carries vast privacy challenges. As disclosing disability information can be taboo and drive further discrimination against this population, there is a lack of explicit disability data available for algorithmic systems to interact with. People with disabilities face additional harms and risks with respect to their social support, cost of health insurance, workplace discrimination and other basic necessities upon disclosing their disability status. Algorithms further exacerbate this gap by recreating the biases that already exist in societal systems and structures.
While search results are "completed" automatically for users, Google has failed to remove sexist and racist autocompletion text. For example, in Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble notes an example of the search for "black girls", which was reported to result in pornographic images. Google claimed it was unable to erase those pages unless they were considered unlawful. Obstacles to research Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding.: 5 Literature on algorithmic bias has focused on the remedy of fairness, but definitions of fairness are often incompatible with each other and with the realities of machine learning optimization. For example, defining fairness as an "equality of outcomes" may simply refer to a system producing the same result for all people, while fairness defined as "equality of treatment" might explicitly consider differences between individuals.: 2 As a result, fairness is sometimes described as being in conflict with the accuracy of a model, suggesting innate tensions between the priorities of social welfare and the priorities of the vendors designing these systems.: 2 In response to this tension, researchers have suggested more care in the design and use of systems that draw on potentially biased algorithms, with "fairness" defined for specific applications and contexts. Algorithmic processes are complex, often exceeding the understanding of the people who use them.: 2 : 7 Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code's input or output.: 183 Social scientist Bruno Latour has identified this process as blackboxing, a process in which "scientific and technical work is made invisible by its own success.
When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become." Others have critiqued the black box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones.: 92 An example of this complexity can be found in the range of inputs into customizing feedback. The social media site Facebook factored in at least 100,000 data points to determine the layout of a user's social media feed in 2013. Furthermore, large teams of programmers may operate in relative isolation from one another, and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms.: 118 Not all code is original, and may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.: 22 Additional complexity occurs through machine learning and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confuse general attempts to understand algorithms.: 367 : 7 One unidentified streaming radio service reported that it used five unique music-selection algorithms, selected for each user based on their behavior. This creates different experiences of the same streaming services between different users, making it harder to understand what these algorithms do.: 5 Companies also run frequent A/B tests to fine-tune algorithms based on user response.
For example, the search engine Bing can run up to ten million subtle variations of its service per day, creating different experiences of the service between each use and/or user.: 5 Commercial algorithms are proprietary, and may be treated as trade secrets.: 2 : 7 : 183 Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings.: 366 This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function.: 20 Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.: 369 Other critics, such as lawyer and activist Katarzyna Szymielewicz, have suggested that the lack of transparency is often disguised as a result of algorithmic complexity, shielding companies from disclosing or investigating their own algorithmic processes. A significant barrier to understanding the tackling of bias in practice is that categories, such as demographics of individuals protected by anti-discrimination law, are often not explicitly considered when collecting and processing data. In some cases, there is little opportunity to collect this data explicitly, such as in device fingerprinting, ubiquitous computing and the Internet of Things. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's General Data Protection Regulation, such data falls under the 'special category' provisions (Article 9), and therefore comes with more restrictions on potential collection and processing.
Some practitioners have tried to estimate and impute these missing sensitive categorizations in order to allow bias mitigation, for example building systems to infer ethnicity from names; however, this can introduce other forms of bias if not undertaken with care. Machine learning researchers have drawn upon cryptographic privacy-enhancing technologies such as secure multi-party computation to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in cleartext. Algorithmic bias does not only include protected categories, but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult. Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories, for example, insurance rates based on historical data of car accidents which may overlap, strictly by coincidence, with residential clusters of ethnic minorities. Solutions A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were a common point of concern, and were addressed through a blend of technical solutions, transparency and monitoring, right to remedy and increased oversight, and diversity and inclusion efforts. There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emergent fields focus on tools which are typically applied to the (training) data used by the program rather than the algorithm's internal processes. These methods may also analyze a program's output and its usefulness and therefore may involve the analysis of its confusion matrix (or table of confusion). Explainable AI has also been suggested as a way to detect the existence of bias in an algorithm or learning model.
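The confusion-matrix analysis mentioned above is often done per demographic group, comparing error rates across groups. A minimal sketch of such an audit, in the style of an equalized-odds check on false positive rates; the labels, predictions, and group names below are entirely synthetic and purely illustrative:

```python
from collections import Counter

def group_confusion(y_true, y_pred, groups):
    """Count per-group confusion-matrix cells: (group, outcome) -> count."""
    counts = Counter()
    for t, p, g in zip(y_true, y_pred, groups):
        if p == 1 and t == 1:
            outcome = "TP"
        elif p == 1 and t == 0:
            outcome = "FP"
        elif p == 0 and t == 1:
            outcome = "FN"
        else:
            outcome = "TN"
        counts[(g, outcome)] += 1
    return counts

def false_positive_rate(counts, group):
    """FP / (FP + TN) for one group; 0.0 if the group has no negatives."""
    fp = counts[(group, "FP")]
    tn = counts[(group, "TN")]
    return fp / (fp + tn) if fp + tn else 0.0

# Toy audit: the classifier errs far more often against group "b".
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1]
groups = ["a"] * 6 + ["b"] * 6

c = group_confusion(y_true, y_pred, groups)
print(false_positive_rate(c, "a"), false_positive_rate(c, "b"))
# 0.25 vs 0.75: unequal false positive rates signal a disparity of the
# kind ProPublica reported for COMPAS risk scores.
```

Real audit tools compute many such group-wise metrics at once (false negative rates, selection rates, and so on); the core operation is exactly this stratification of the confusion matrix by a protected attribute.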
Using machine learning to detect bias is called "conducting an AI audit", where the "auditor" is an algorithm that goes through the AI model and the training data to identify biases. Ensuring that an AI tool such as a classifier is free from bias is more difficult than just removing the sensitive information from its input signals, because such information is typically implicit in other signals. For example, the hobbies, sports and schools attended by a job candidate might reveal their gender to the software, even when this information is removed from the analysis. Solutions to this problem involve ensuring that the intelligent agent does not have any information that could be used to reconstruct the protected and sensitive information about the subject, as first demonstrated in work where a deep learning network was simultaneously trained to learn a task while at the same time being completely agnostic about the protected feature. A simpler method was proposed in the context of word embeddings, and involves removing information that is correlated with the protected characteristic. An IEEE standard has also been drafted that aims to specify methodologies which help creators of algorithms address issues of bias and articulate transparency (i.e. to authorities or end users) about the function and possible effects of their algorithms. The project was approved in February 2017 and is sponsored by the Software & Systems Engineering Standards Committee, a committee chartered by the IEEE Computer Society. A draft of the standard was expected to be submitted for balloting in June 2019; the standard was published in January 2025.
Ethics guidelines on AI point to the need for accountability, recommending that steps be taken to improve the interpretability of results. Such solutions include the consideration of the "right to understanding" in machine learning algorithms, and resisting deployment of machine learning in situations where the decisions could not be explained or reviewed. Toward this end, a movement for "Explainable AI" is already underway within organizations such as DARPA, for reasons that go beyond the remedy of bias. PricewaterhouseCoopers, for example, also suggests that monitoring output means designing systems in such a way as to ensure that solitary components of the system can be isolated and shut down if they skew results. An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities. However, this approach doesn't necessarily produce the intended effects. Companies and organizations can share all possible documentation and code, but this does not establish transparency if the audience doesn't understand the information given. Therefore, the role of an interested critical audience is worth exploring in relation to transparency. Algorithms cannot be held accountable without a critical audience. Several documentation approaches have been proposed to improve transparency and support the evaluation of bias in algorithmic systems. One widely cited method is the use of model cards, which provide standardized summaries of an AI system's intended uses, performance metrics, evaluation datasets, and known limitations.
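A model card is, at its simplest, structured documentation attached to a model. The sketch below shows the kinds of fields such a card might record; every name and number is invented for illustration and does not follow any official schema:

```python
# A hypothetical model card for a resume-screening classifier.
# All field names and values here are made up for illustration.
model_card = {
    "model_details": {"name": "resume-screen-v2", "type": "binary classifier"},
    "intended_use": "Ranking applications for human review, not automated rejection",
    "evaluation_data": "Held-out applications from 2022, stratified by group",
    "metrics": {
        "overall_accuracy": 0.91,
        "false_positive_rate": {"group_a": 0.08, "group_b": 0.19},
    },
    "limitations": [
        "Performance unverified for applicants outside the original region",
        "FPR disparity between groups indicates residual bias",
    ],
}

def flag_limitations(card):
    """Surface the card's known limitations for a deployment review."""
    return list(card.get("limitations", []))

for note in flag_limitations(model_card):
    print("LIMITATION:", note)
```

The value of the format lies less in the data structure than in the convention: disaggregated metrics and explicit limitations travel with the model, so auditors and impacted groups can see where it should not be trusted.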
Related efforts include datasheets for datasets, which outline the provenance, composition, collection methods, and recommended uses of training data. These documentation frameworks aim to clarify the assumptions and potential biases embedded in training data and machine-learning systems, helping practitioners, auditors, and impacted groups better interpret system behavior. In addition to documentation practices, researchers and policymakers have encouraged the use of structured governance mechanisms such as algorithmic impact assessments, risk-based evaluation procedures, and post-deployment monitoring. These processes seek to identify potential disparate impacts before deployment and ensure that AI systems continue to be evaluated for fairness during real-world operation. Public-sector initiatives such as Canada’s Directive on Automated Decision-Making require impact assessments, explainability measures, and regular audits for certain high-risk automated systems. Together, these governance approaches complement technical mitigation strategies by embedding accountability and transparency throughout the lifecycle of AI development and deployment. From a regulatory perspective, the Toronto Declaration calls for applying a human rights framework to harms caused by algorithmic bias. This includes legislating expectations of due diligence on behalf of designers of these algorithms, and creating accountability when private actors fail to protect the public interest, noting that such rights may be obscured by the complexity of determining responsibility within a web of complex, intertwining processes. Others propose the need for clear liability insurance mechanisms. Amid concerns that the design of AI systems is primarily the domain of white, male engineers, a number of scholars have suggested that algorithmic bias may be minimized by expanding inclusion in the ranks of those designing AI systems. 
For example, just 12% of machine learning engineers are women, with black AI leaders pointing to a "diversity crisis" in the field. Groups like Black in AI and Queer in AI are attempting to create more inclusive spaces in the AI community and work against the often harmful desires of corporations that control the trajectory of AI research. Critiques of simple inclusivity efforts suggest that diversity programs cannot address overlapping forms of inequality, and have called for applying a more deliberate lens of intersectionality to the design of algorithms.: 4 Researchers at the University of Cambridge have argued that addressing racial diversity is hampered by the "whiteness" of the culture of AI. Integrating interdisciplinarity and collaboration in the development of AI systems can play a critical role in tackling algorithmic bias. Integrating insights, expertise, and perspectives from disciplines outside of computer science can foster a better understanding of the impact data-driven solutions have on society. An example of this in AI research is PACT, or Participatory Approach to enable Capabilities in communiTies, a proposed framework for facilitating collaboration when developing AI-driven solutions concerned with social impact. This framework identifies guiding principles for stakeholder participation when working on AI for Social Good (AI4SG) projects. PACT attempts to reify the importance of decolonizing and power-shifting efforts in the design of human-centered AI solutions. An academic initiative in this regard is Stanford University's Institute for Human-Centered Artificial Intelligence, which aims to foster multidisciplinary collaboration. The mission of the institute is to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition. Collaboration with outside experts and various stakeholders facilitates ethical, inclusive, and accountable development of intelligent systems.
It incorporates ethical considerations, understands the social and cultural context, promotes human-centered design, leverages technical expertise, and addresses policy and legal considerations. Collaboration across disciplines is essential to effectively mitigate bias in AI systems and ensure that AI technologies are fair, transparent, and accountable. Regulation The General Data Protection Regulation (GDPR), the European Union's revised data protection regime that was implemented in 2018, addresses "Automated individual decision-making, including profiling" in Article 22. These rules prohibit "solely" automated decisions which have a "significant" or "legal" effect on an individual, unless they are explicitly authorised by consent, contract, or member state law. Where they are permitted, there must be safeguards in place, such as a right to a human-in-the-loop, and a non-binding right to an explanation of decisions reached. While these regulations are commonly considered to be new, nearly identical provisions have existed across Europe since 1995, in Article 15 of the Data Protection Directive. The original automated decision rules and safeguards have been present in French law since the late 1970s. The GDPR addresses algorithmic bias in profiling systems, as well as the statistical approaches possible to clean it, directly in recital 71, noting that the controller should "use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate ... that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect". As with the non-binding right to an explanation in recital 71, the problem is the non-binding nature of recitals.
While it has been treated as a requirement by the Article 29 Working Party that advised on the implementation of data protection law, its practical dimensions are unclear. It has been argued that the Data Protection Impact Assessments for high risk data profiling (alongside other pre-emptive measures within data protection) may be a better way to tackle issues of algorithmic discrimination, as they restrict the actions of those deploying algorithms, rather than requiring consumers to file complaints or request changes. The United States has no general legislation controlling algorithmic bias, approaching the problem through various state and federal laws that might vary by industry, sector, and by how an algorithm is used. Many policies are self-enforced or controlled by the Federal Trade Commission. In 2016, the Obama administration released the National Artificial Intelligence Research and Development Strategic Plan, which was intended to guide policymakers toward a critical assessment of algorithms. It recommended that researchers "design these systems so that their actions and decision-making are transparent and easily interpretable by humans, and thus can be examined for any bias they may contain, rather than just learning and repeating these biases". Intended only as guidance, the report did not create any legal precedent.: 26 In 2017, New York City passed the first algorithmic accountability bill in the United States. The bill, which went into effect on January 1, 2018, required "the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems." In 2023, New York City implemented a law requiring employers using automated hiring tools to conduct independent "bias audits" and publish the results.
This law marked one of the first legally mandated transparency measures for AI systems used in employment decisions in the United States. The task force created under the 2017 bill was required to present findings and recommendations for further regulatory action in 2019. On February 11, 2019, under Executive Order 13859, the federal government unveiled the "American AI Initiative", a comprehensive strategy to maintain U.S. leadership in artificial intelligence. The initiative highlights the importance of sustained AI research and development, ethical standards, workforce training, and the protection of critical AI technologies. This aligns with broader efforts to ensure transparency, accountability, and innovation in AI systems across public and private sectors. Furthermore, on October 30, 2023, the President signed Executive Order 14110, which emphasizes the safe, secure, and trustworthy development and use of artificial intelligence (AI). The order outlines a coordinated, government-wide approach to harness AI's potential while mitigating its risks, including fraud, discrimination, and national security threats. An important point in the commitment is promoting responsible innovation and collaboration across sectors to ensure that AI benefits society as a whole. With this order, President Joe Biden mandated the federal government to create best practices for companies to optimize AI's benefits and minimize its harms. In India, a draft of the Personal Data Bill was presented on July 31, 2018. The draft proposes standards for the storage, processing and transmission of data. While it does not use the term algorithm, it makes provisions for "harm resulting from any processing or any kind of processing undertaken by the fiduciary". It defines "any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal" or "any discriminatory treatment" as a source of harm that could arise from improper use of data.
It also makes special provisions for people of "Intersex status".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Old_Frisian] | [TOKENS: 11767]
Old Frisian Old Frisian was a West Germanic language spoken between the late 13th century and the end of the 16th century. It is the common ancestor of all the modern Frisian languages except for the Insular North Frisian dialects, with which Old Frisian shares a common ancestor called Pre–Old Frisian or Proto-Frisian. Old Frisian was spoken by contemporary Frisians, who comprised a loose confederacy along the North Sea coast from around modern-day Bruges in Belgium to the Weser in modern-day northern Germany, dominating maritime trade. The vast majority of the surviving literature comprises legal documents and charters, though some poetry, historiographies, and religious documents are attested as well. Old Frisian was closely related to and shared common characteristics with the forms of English and Low German spoken during the period. Although earlier scholarship contended that Frisian and English had a closer relationship to each other than to Low German, this is no longer the prevailing view. Old Frisian evolved into Middle Frisian around the turn of the 17th century, being largely pushed out by the emergence of Middle Low German as the language of trade in the North Sea. Scholars have argued that the term "Old Frisian" is somewhat misleading, since Old Frisian was contemporary with other Germanic languages during their "Middle" period, such as Middle English and Middle High German. Morphologically, Old Frisian generally marked for four cases, three grammatical genders, and two tenses, though more complex grammatical functions could be achieved through periphrastic constructions. Its vocabulary comprised a variety of origins, including loanwords from Celtic and Slavic languages. Following the Christianization of the Frisians, Latin loans and calques became increasingly common. Word order in Old Frisian was varied; although its typical constituent word order was subject–object–verb, many different word orders are attested in the surviving texts.
Classification Old Frisian was a West Germanic language, which is a part of the larger Germanic language family. It is classified as an Ingvaeonic language along with Old English and Old Saxon. The periods of the Frisian languages are traditionally divided into Pre–Old Frisian (before 1275),[b] Old Frisian (1275–1550), Middle Frisian (1550–1800), and modern Frisian (1800–present), though these dates have varied among scholars. R. L. Trask, for example, puts the end of the Old Frisian period around 1600, while Han Nijdam suggests it ends about a hundred years earlier. Some scholars, such as Germen de Haan, have argued that there is no reason to demarcate them this way and that these periods are more in line with literary periods than linguistic change. Despite its name, Old Frisian was contemporary with Middle Dutch, Middle English, and both Middle High and Middle Low German, though there is some overlap with Old Norse. In general, Old Frisian manuscripts are notably conservative despite their later date. According to De Haan, what is referred to as "Old Frisian" should really be called "Middle Frisian" and what is called "Middle Frisian" should be referred to as "Early Modern Frisian". De Haan argues that the current nomenclature is misleading and confusing because it incorrectly suggests that Old Frisian is contemporary with other "Old" Germanic languages such as Old English and Old Saxon. Alistair Campbell expressed similar views, arguing that the Frisian spoken between the 14th and 16th centuries is better described as "Middle Frisian". In some contexts, the term "Old Frisian" may also refer to what is called either "Pre–Old Frisian" or "Proto-Frisian", or both the Pre–Old Frisian and Old Frisian periods collectively. Frederik Hartmann, for example, cites Rolf Bremmer's analysis of Pre–Old Frisian sound changes but refers to the language as "Old Frisian".
Complicating the matter further, Old Frisian legal scholars typically view The Seventeen Statutes (Da Saunteen Kesta) as the first Old Frisian law text, which is traditionally considered to be an early 11th-century text and at the very latest probably an early 13th-century one. Bremmer argues that the origins of the "Old" terminology are based in clout for this period, stating that those attempting to give it the "Old" appellation hope "its antiquity will add to its prestige", while acknowledging that the nomenclature is functionally "arbitrary". Ultimately, Bremmer sides with the application of "Middle" to this period except for the two Rüstring codices – dated to c. 1300 and 1327, respectively – based on vowel quality in unstressed syllables, itself based on agreed-upon criteria going back to the work of Jacob Grimm. In general, Old West Frisian manuscripts are more recent attestations compared to Old East Frisian ones; while most Old West Frisian texts are dated to around 1450 to 1525, their Old East Frisian counterparts are typically dated to between 1300 and 1450. In part for this reason, the Swedish linguist Bo Sjölin argued the Old Frisian period should be further divided into "Classical Frisian" and "Post-Classical Frisian" demarcating Old West and Old East Frisian, respectively, arguing that the characteristics differentiating them are based more on timing than on location. Later scholars have found some support for that characterization.
However, these overlaps may have been the result of several reference documents being shared across the Frisian territories by different scribes, and there is some question as to whether the legal documents of these codices were necessarily discovered in the area in which they had jurisdiction. Bremmer describes the Classical Old Frisian period as occurring between the 12th and 14th centuries and as defined by a shared legal tradition on both sides of the Lauwers, while the Post-Classical period is characterized by the erosion of case markers, the complete collapse of the dative–accusative distinction in pronouns, and a growing influence of Dutch on Old West Frisian and of Low German on Old East Frisian. Old Frisian was composed of several dialects. The main division was between Old West Frisian and Old East Frisian, based on their position relative to the Lauwers river. This divide predated the Old Frisian period, as there is evidence of a split on this basis as early as the 8th century. According to Rolf Bremmer, the linguistic phylogeny – that is, the relation of these varieties to each other through linguistic descent – can be described thus:

- Proto–Old South Frisian
  - Old West Frisian
  - Old Ems Frisian
    - Mainland North Frisian
  - Old Weser Frisian
- Insular North Frisian

This division was not solely linguistic; the divide was also jurisdictional and ecclesiastical. The diocesan divisions are nearly identical to the dialectal divisions. Old West Frisian, largely coterminous with the Diocese of Utrecht, was divided into two dialects – the southwestern dialect in and around Westergoa and the northeastern dialect in and around Eastergoa – which formed a growing dialect continuum after the sea arm which divided them began to be reclaimed around 1100. Old East Frisian was likewise divided in two: Old Weser Frisian in the Diocese of Bremen and Old Ems Frisian in the Diocese of Münster.
During the period of Old Frisian, the dialect which later became North Frisian is not attested. Stiles states that both varieties of North Frisian – Insular and Mainland – are ultimately descended from an Eastern Frisian ancestor. The descendants of Old Weser Frisian – also known as Riustring Old East Frisian – are Wangerooge, Wursten, and Harlingerland Frisian, all of which are now extinct. Old Weser Frisian is attested in two full manuscripts, known as the Riustring Codices, and two fragments. Whether the Old Weser Frisian attested in these documents is the direct ancestor of the Wangerooge or Wursten variants or rather an extremely close relative is the matter of some debate; Stiles argues that the document's language is closely related to the two but distinct from them, while Bremmer categorizes them as direct descendants. Old Ems Frisian is the ancestor of the now-extinct Emsingo, Brokmerland, and Ommelanden dialects, as well as the still-extant Saterland Frisian, its only living descendant. Old West Frisian later developed into the modern West Frisian language.
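The descent relationships described above can be summarized as a small mapping. This is a reading aid that restates the text (following Bremmer's grouping of the Weser dialects as direct descendants), not a scholarly resource:

```python
# Descendants of the main Old Frisian dialect groups, as described above.
# Status labels follow the text: Saterland Frisian and modern West Frisian
# are the only living descendants named.
DESCENDANTS = {
    "Old Weser Frisian": {
        "Wangerooge Frisian": "extinct",
        "Wursten Frisian": "extinct",
        "Harlingerland Frisian": "extinct",
    },
    "Old Ems Frisian": {
        "Emsingo Frisian": "extinct",
        "Brokmerland Frisian": "extinct",
        "Ommelanden Frisian": "extinct",
        "Saterland Frisian": "living",
    },
    "Old West Frisian": {
        "West Frisian": "living",
    },
}

living = [name for branch in DESCENDANTS.values()
          for name, status in branch.items() if status == "living"]
print(sorted(living))  # ['Saterland Frisian', 'West Frisian']
```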
Unlike their Mainland counterparts, the Insular North Frisian languages are not descended from Old Frisian. Instead, they share a common ancestor in Pre–Old Frisian, diverging around the late 7th or early 8th centuries. Sometime between the 11th and 13th centuries, improvements in dyking technology in Denmark and the Duchy of Schleswig led to the reclaiming of the areas they now occupy. These improvements were probably brought by the Mainland Frisians upon invitation by the king of Denmark due to their renowned water engineering skill. Traditionally, English and the Frisian languages were widely regarded as closer to each other than to any other Germanic language. The German linguist Theodor Siebs is commonly associated with popularizing this affinity and is credited with coining the term "Anglo-Frisian languages" in his 1889 dissertation entitled Zur Geschichte der Englisch-friesischen Sprache ('On the History of the Anglo-Frisian Languages'), though the English philologist Henry Sweet is considered the "father of the Anglo-Frisian hypothesis", articulating the concept as early as 1876. Observations about the close relationship are much older than the 19th century, however; it is likely that Anglo-Saxon missionaries during the 7th and 8th centuries saw the two languages as closely related.
Datings proposed for a common ancestor of the Anglo-Frisian languages estimate that it was probably fully formed by the 4th or 5th century, diverging shortly thereafter. This phylogenetic view of English and Frisian is no longer widely accepted. Linguists such as Arjen Versloot and Patrick Stiles have argued that – while English, Frisian, and Low German are correctly believed to have a common Ingvaeonic ancestor – there is no reason to believe that English and Frisian shared a uniquely close genetic relationship thereafter. Some shared linguistic changes do overlap in ways unique to these languages, often at similar times, but they do not match in relative chronology; in other words, these common changes do not appear to have occurred at the same time or in the same order. Examples include Old Frisian's vowel breaking and vowel backing processes, which closely resemble Old English's and Old Norse's but developed independently from them. Instead, some linguists argue that the Ingvaeonic precursor was likely a broad dialect continuum in which the dialects that later became English and Frisian developed similarly but not as one language. This continuum was spoken across the continental coast of the North Sea prior to the Migration Period, evolving into distinct languages around the turn of the 5th century. The continuum model is sometimes broadened to include Old Low Franconian as well. Under this model, the two language groups did experience a series of changes particular to the area along the North Sea between about 450 and 650, which influenced both languages as well as Dutch, Flemish, and probably northern varieties of Low German. The English and the Frisians were long associated with each other.
Frisians are traditionally believed to have comprised a fairly significant portion of the Germanic invaders of Britain during the Anglo-Saxon period and, while no major district of England is named after the Frisians, there is toponymic evidence for significant Frisian settlement, including Friston and Frisby. Genetic evidence has suggested that, following the Roman-era exodus of the Frisii, the people who later inhabited the area were genetically indistinguishable from the 5th-century Angles who colonized what is now England. Frisian and English domination of maritime trade in the North Sea also played a role in their relationship; London was a hub for Frisian slave-traders, and York had a special quarter for housing Frisian merchants. The Anglo-Saxons invaded and subjugated the Frisians during the 5th century, though this is not considered a cause of the linguistic similarities. Other scholars, however, have persisted in supporting the Anglo-Frisian language family as a legitimate phylogenetic category; they fall into two general outlooks on the relationship. The first is the traditional model, which contends that the two languages diverged from a common Proto-Anglo-Frisian ancestor and are thus sister languages. The other is the convergence hypothesis, which regards Ingvaeonic as the last common ancestor but holds that early forms of English and Frisian became increasingly intertwined and influenced by each other, producing their striking mutual resemblance. This approach is credited to Hans Kuhn, who published work on the topic in 1955. Stiles and Hans F. Nielsen both dismissed the convergence approach as unrealistic, pointing to the difficulty of dispersing those kinds of linguistic developments across the maritime divide.
Nielsen, for his part, placed Old Saxon and Old Frisian closer together on his phylogenetic tree, with Old English splitting off in the 5th century, not long after Old High German split from the rest of the West Germanic languages.

History

The earliest references to the Frisians are found in the works of Roman and Greek authors such as Tacitus, in his Germania, and Ptolemy, in his Geography, who call these tribes the Frisii and the Frisiavones. Both describe the Frisians as living from north of the estuary of the Rhine to around the Ems river. Although they were not part of the Roman Empire, the areas comprising Frisia were akin to a tributary state, and some Frisians served as mercenaries in the Roman army. It is unlikely that the Frisians described by the Romans were Germanic-speaking peoples; evidence from proper names suggests they spoke an Indo-European language that was neither Germanic nor Celtic. How the original Frisii and Frisiavones were supplanted is somewhat unclear; Roman sources on the Frisians precede the earliest medieval ones by over three hundred years. One theory is that the area was effectively abandoned by its original inhabitants and a Germanic group moved in, taking over the local name. The German linguist Elmar Seebold suggests, for example, that the Jutes integrated into the group relatively peacefully and the new Jutish–Frisian entity became a Jutish-speaking group, but ultimately assumed the Frisian name. Another theory suggests that the Frankish elite named the region using the works of classical scholars and the name was eventually adopted locally; the Franks sometimes referred to areas on the periphery of their empire by Roman-era names, including Traiectum for modern-day Utrecht and Toxandria for the pagus of Brabant. Following the retreat of the Romans from the Low Countries in the 5th century, the Frisians spread considerably over the following two hundred years, dominating the North Sea region.
This period is marked by the rule of warlord-like kings and a maritime economy augmented by considerable cattle-breeding skill. Frisian domination of North Sea trade during this era led some contemporary non-Frisian documents to refer to the North Sea as the Frisian Sea (Latin: Mare Frisicum), and the term "Frisian" was used in Dorestad to mean any merchant, not necessarily an ethnic Frisian. By the early 7th century, the Frisians had expanded from the Sincfal near modern-day Bruges to the Weser estuary. During the latter half of the century, the first wave of Frisians began colonizing the islands off the southwestern coast of modern-day Denmark, occupying the uninhabited islands of Amrum, Föhr, Sylt, and Heligoland; the linguistic descendants of this migration are the Insular North Frisian speakers, who speak the Öömrang, Fering, Söl'ring, and Heligoland Frisian varieties, respectively. By the end of the century, the Frisians also controlled the coastal regions from the Scheldt to the Rhine. During the following period, Christianity was introduced to the region by Willibrord, and Frisia was subjugated by the Franks under the leadership of Charles Martel and later dominated by Charlemagne. The subjugation by Charles Martel is probably what led to the departure of the Insular North Frisians from Frisia to the uninhabited islands off the coast of southern Denmark. The situation of the Frisians during the latter part of the 13th century differed on either side of the Lauwers river. Those to the west of it were partially conquered by the County of Holland during its long-standing campaigns of conquest, but they were ultimately able to repel Holland's forces, killing its count at the Battle of Warns in 1345. The political situation east of the river is largely obscure during this period, but it appears that the Frisians there were under regular assault from Saxon forces, though they were able to keep them at bay.
This period is also marked by a loose confederation between the Frisian territories, the Upstalsboom League, which united the Seven Sealands of Frisia and produced legal documents from around 1300, though translations of its original Latin texts only appear in Old West Frisian. However, an internal rift among the Frisian confederacy caused increased tension and ultimately led to the end of the Frisian freedom period in 1498, with the ascension of Albert III of Saxony as gubernator. The following centuries were marked by civil wars, including the Guelders Wars, which saw more Frisian casualties than any war thereafter. Outside of the two dozen surviving Pre–Old Frisian runic inscriptions – all of which are dated to around the 5th through 10th centuries – and some individual words captured in the marginalia of Latin texts, the earliest Frisian-language text to survive to the modern period is an interlinear gloss of the Psalms found in 2015, which has been dated to around 1100. The first full manuscripts are the First Brokmer Codex, written sometime between 1276 and 1300, and the First Riustring Codex, written around 1300. These documents are known to be copies, and it is uncertain when, where, or by whom the original texts were written, though it is likely that they were originally composed shortly after 1225. Most of the existing Old Frisian corpus was compiled into seventeen legal codices, one being an incunable, which contain several legal texts. Many of the codices are not fully in Old Frisian; the Codex Parisiensis ('Parisian Codex'), for example, contains Middle Low German and Latin supplements, and the Jus Municipale Frisonum ('Municipal Law of the Frisians') ends with a Middle Dutch legal text. Taken together, the body of surviving Old Frisian documents is remarkably uniform across time and space, and formed an early standard language for Frisians.
The vast majority of these works, however, have not survived to the modern day, and much of what had been preserved was destroyed during the Reformation, especially as it dissolved monastic orders around the Netherlands. Legal texts dominate the surviving corpus of Old Frisian documents; all but one of the Frisian-language documents from east of the Lauwers are legal documents. To the west, textual diversity is somewhat wider: western documents include over a thousand charters and administrative documents, alongside which poetry, historiographies, and several religious works have also survived. Old Frisian is also attested in chronicles and apocrypha, most of which are politically charged and considered inaccurate; the political situation of the Frisians during this period led to ideological influences on the Old Frisian body of literature. For example, the aforementioned incunable, though containing some legal content such as the Statutes of Upstalsboom, began with an ideologically driven introduction and included several documents which likely served to promote the self-governing, non-feudal Frisian order of the period. Later works emphasize the Frisian legal tradition, especially its sources and purported unbroken line from generations past. As Latin declined as the language of choice for legal texts such as charters, Frisian also began its linguistic decline, as Low German was either of higher prestige or more widely understood. However, Old Frisian documents were still widely translated into Low German from the late 15th century until the turn of the 17th century, and modern Low German demonstrates traces of Old Frisian influence, including in placenames, personal names, vocabulary, and syntax.
Between the Lauwers and the Ems, no original Frisian texts occur in the record after around 1450, and the last known public document composed in Old Frisian dates to 1547, following the introduction of Dutch as the language of administration by the representatives of the Duchy of Saxony during the 16th century. By 1550, the language is considered to have either developed into Middle Frisian, mostly as a vernacular in rural areas, or been superseded by Stêdsk, a Frisian–Dutch mixed language used primarily in cities by Frisians who could not speak Dutch. Documents produced after 1550 show marked differences in orthographic and grammatical usage, suggesting that during the first half of the 16th century the Old Frisian documents being produced were already quite archaic. During the emerging Modern Frisian period in the 19th century, Old Frisian documents were once again consulted as inspiration for orthographic standards. Around the middle of the century, Harmen Sytstra developed an orthography based largely on the conventions of Old Frisian documents. A variant of his system vied to become the written standard for West Frisian, but lost out to an orthography developed by the Brothers Halbertsma based largely on the work of the Middle Frisian poet Gysbert Japicx.

Vocabulary

Although the vast majority of Old Frisian vocabulary can be traced directly from Proto-Germanic, many terms were created through compounding or affixation, or were borrowed from other languages. Only a few adverb-forming suffixes are attested; adverbs could otherwise be formed using either the genitive or dative cases. Nouns were regularly combined without any use of genitive forms, such as in fiskdam ('fishing weir'), though it became increasingly common in later forms of the language to mark the first element with a linking genitive form like -s, such as in sumeresnacht ('summer night'). Adjectives were also compounded with nouns to form other adjectives, such as ūdertam ('easy to milk', lit.
'udder-tame'). Although relatively rare, kennings – a kind of Germanic compound with a metaphorical meaning – are attested in some Old Frisian documents. For example, criminal regulations regarding the protection of children and pregnant women use the term bēnenaburch ('fortress of the bones') to reference the womb. Among expressive vocabulary, more words for anger are attested than for any other emotion. Loanwords in Old Frisian comprised borrowings inherited from earlier stages of the language – such as rīke ('kingdom, realm'), borrowed from a Celtic language during either the Proto-Germanic or Proto–West Germanic period – and borrowings made during the Old Frisian period itself. Old Frisian borrowed a number of Latin terms in both periods, and it is often difficult to pinpoint precisely when a given Latin loan entered the language. After the Christianization of the Frisians, the language experienced an influx of Latin loans, along with Greek loans transmitted through Latin, such as diōvel ('devil'; from Latin diabolus), skrīva ('to write'; from Latin scrībere, displacing the native term wrīta), and seininge ('blessing'; from Latin signum 'sign of the cross'). Old and Middle Low German contributed a significant number of loanwords, and Low German began to dominate the language of trade in the North Sea by the end of the 15th century, displacing the Old Frisian dialects spoken east of the Lauwers. Terms borrowed include reth ('wheel'; from Old Saxon rath) and swāger ('brother-in-law'). Old Frisian also appears to have borrowed terms from the Slavic languages through Low German, including the term cona ('fur'), which was used as money in Rüstringen (compare the Serbo-Croatian term kuna). Terms from Old French were also borrowed, probably through one or more intermediaries; examples include payement ('payment') and amīe ('female lover, concubine'). Old Frisian also borrowed a number of abstract suffixes from French.
Because the Anglo-Saxons converted the Frisians to Christianity, it is probable that Old English terms began to enter the language around this time, though the close relationship between the two languages makes distinguishing native words from Old English borrowings extremely difficult. Possible borrowings include trachtia ('to yearn'; from Old English treahtian, 'to comment on') and diligia ('to delete'; from Old English dīlegian, 'to blot out, to erase'), though these terms may have passed from Old English into the missionary centers of German-speaking areas and from there into Old Frisian. Similarly, Old and Middle Low German served as an intermediary for Old and Middle High German borrowings; these include terms like keisere ('emperor'; from Old High German kaisar) and iunkfrouwe ('young woman, virgin'). Calques were common in Old Frisian, especially for Latin terms adopted during the Christianization of the Frisians, such as godeshūs ('church', lit. 'God's house'; Latin domus Dei) and elemechtich ('almighty'; Latin omnipotens). Other loan translations include the days of the week and some terms associated with military or leadership roles, such as hāvedmon ('leader, chieftain'; Latin capitaneus) and herestrēte ('highroad, military road'; Latin via militaris). Alternations between native Old Frisian words and Latin loans bled into legal texts as well, often as glosses or definitions; in the Freeska Landriucht, for example, the terms scelta and aesgha are both glossed as Latin iudex ('judge, magistrate') even though the two offices in Frisian society were not equivalent.

Phonology

Old Frisian phonology has been reconstructed by analyzing the existing corpora and the language's modern descendants. In general, Old Frisian scribes used a largely phonemic orthography, in which each letter signifies a distinct phoneme. With limited exceptions, stress fell on the stem in Old Frisian.
No distinction was made orthographically in early Old Frisian to indicate vowel length, though in later forms of the language an ⟨e⟩ or ⟨i⟩ was placed after the vowel to indicate a long vowel, as in baem ([baːm], 'tree'). Gemination was possible for most consonants in word-medial position, though semivowels, the voiced allophones of voiceless fricatives typically found between vowels, and the alveolar affricates are exceptions. Dirk Boutkan argues that /f/ is an exception as well, but Bremmer includes it. In earlier orthographies, geminate consonants were consistently written with duplicated consonants unless they were found in word-final position; later, the duplication only signified that the previous vowel was short. The phoneme /x/ was typically pronounced at the beginning of a syllable as [h] if the syllable was stressed and the phoneme occurred before a vowel, /l/, /r/, or /w/. The sequences with /l/, /r/, or /w/ may have been realized with the voiceless allophones [l̥], [r̥], or [ʍ], respectively, given that the orthography sometimes swaps them; for example, hl- and lh- are both attested for the same sequence. If true, Old Frisian is unique in its preservation of this velar quality. In all other cases, [x] persists. The phoneme /g/ was devoiced and spirantized in word-final position. Dental fricatives were written as ⟨th⟩, irrespective of voicing, and the phoneme /t/ was sometimes written as ⟨th⟩ as well, though no pronunciation change is thought to have occurred. Similarly, the /sk/ cluster is sometimes written as ⟨sch⟩, but it was still likely pronounced [sk]. The digraph ⟨kh⟩ was often pronounced [k]. In principle, ⟨ch⟩ could represent either /x/ or /k/, though the /k/ reading is vastly less common outside loans from Medieval Latin, where ⟨ch⟩ could nonetheless also represent /x/, especially word-initially or before ⟨t⟩; otherwise, ⟨ch⟩ almost always represents /x/ or its geminate equivalent.
The insertion of ⟨h⟩ was probably imported from orthographic conventions common among Middle Low German scribes, though it is relatively rare and found primarily in Latin loans. The digraph ⟨gh⟩ was often used to represent [ɣ], the fricative allophone of /g/ or voiced allophone of /x/, but could represent /g/ as well. The semivowel /j/ could be variously represented as ⟨i⟩, ⟨y⟩, ⟨j⟩, or ⟨g⟩, though the latter only occurred before high vowels. Both ⟨g⟩ and ⟨h⟩ could be used unetymologically to mark syllable boundaries: ⟨g⟩ was typically used for [j], as in wīges (genitive singular of 'way') instead of the also-attested wīes, while ⟨h⟩ was silent, as in israhelisk ('Israeli'). The digraph ⟨qu⟩ was sometimes used for the sequence [kw]; both quic and kuic ('animal'), each pronounced [kwik], are attested in the corpora. The sound [v] could be represented with either ⟨v⟩ or ⟨u⟩. In the Codex Unia, a document from west of the Lauwers, the language shows signs of voicing in word-initial fricatives like that found in Middle and Modern Dutch, though this does not persist in Modern West Frisian. Old Frisian phonology was not uniform. For example, around the year 1200, the Proto–West Germanic phoneme *þ became /d/ in word-medial and word-final positions in several Old Frisian dialects. This change did not affect Old Weser Frisian or North Frisian, and forms like lathia existed beside ladia in different dialects during the same period. Short vowels in unstressed final syllables in Old Weser Frisian were in complementary distribution, a pattern called "vowel balance". When the preceding vowel was short and the unstressed short vowel would stand in an open syllable, i or u appear, such as in Godi ('to God') or skipu ('ships'). If the preceding vowel was long or a diphthong, or if the stem vowel was separated by another syllable, the word ended in e or o, such as in liōde ('people').
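The vowel-balance distribution just described is regular enough to state as a rule. The sketch below encodes it using the article's own examples; the pairing of i with e and of u with o as front/back counterparts is an assumption made for this illustration, and the function is a toy, not a linguistic tool.

```python
# Illustrative sketch of Old Weser Frisian "vowel balance": a short vowel
# in an unstressed final open syllable surfaces as i/u after a short stem
# vowel, but as e/o after a long stem vowel or diphthong (or when another
# syllable intervenes). The i~e / u~o pairing is an assumption here.

def final_vowel(stem_is_short: bool, front: bool) -> str:
    """Expected unstressed final vowel under vowel balance."""
    if stem_is_short:
        return "i" if front else "u"   # e.g. Godi 'to God', skipu 'ships'
    return "e" if front else "o"       # e.g. liōde 'people'

print(final_vowel(stem_is_short=True, front=True))   # 'i', as in Godi
print(final_vowel(stem_is_short=True, front=False))  # 'u', as in skipu
print(final_vowel(stem_is_short=False, front=True))  # 'e', as in liōde
```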
This regular distribution of word-final vowels has allowed linguists to differentiate between long and short vowels in Old Frisian documents where vowel length is not marked. For example, the word hōve (dative singular of 'hoof'), with a long first vowel, could be distinguished from its short-vowel counterpart hovi (dative singular of 'court, courtyard'). The consequences of vowel balance are reflected in two of the descendant dialects, Wangerooge and Wursten. Old Weser Frisian also raised e to i before r (irthe, 'earth') and raised a and u to i through i-mutation (kining, 'king'). However, i was lowered to e and u to o in open syllables if the following syllable contained a, a process known as the Rüstring a-mutation. Following fronting and the palatalization of *-ag- and *-eg-, which typically became ei, Old Weser Frisian exhibits ī, such as in dī ('day') instead of dei and brīn ('brain') instead of brein. Proto-Germanic *ē₂ also became ī. Old Ems Frisian diphthongized ē to ei before a voiced alveolar consonant, including resonants, as in breid ('bride', also 'broad'). In unstressed syllables, the suffix -en inserted r between the vowel and the final consonant, such as in wēpern ('weapon') instead of wēpen. In later forms of the dialect, a was lengthened after some consonant clusters; ā then had a tendency to become rounded to ō ([ɔː]), whether or not it had been lengthened by a consonant cluster. This gave rise to forms such as ōlle ('all') instead of alle, though forms like ōlsa ('so') – beside the non-Ems form alsā – show rounding, but not in both environments. Orthographic conventions used in Old West Frisian make the phonological structure much clearer than those of the Old East Frisian dialects. Vowel length is frequently marked, either with the addition of an ⟨e⟩ after the long vowel, as in boek ('book'), or with the duplication of the long vowel, as in huus ('house') or wiif ('woman').
Orthographic duplication of long u was sometimes written ⟨uu⟩ and sometimes ⟨w⟩, as in hws. Similarly, a long i may sometimes be represented as ⟨ij⟩, as in sijn ('his'), or ⟨y⟩, as in lyf ('wergeld'). In some instances, ⟨y⟩ or ⟨i⟩ may be used as a length marker as well, as in teyken ('sign') or kuith ('known, public'). Old West Frisian demonstrates rounding of *a before nasal consonants; this was later constrained to the northeastern dialect before -mn or -nn, as the southwestern dialect restored it to a. When v occurred between vowels, it became w, as in howe instead of hove for the dative singular of 'court'; this also sometimes led to the collapse of the two-vowel structure, producing a diphthong, as in hāud ('head'; from earlier hāwed, inherited from hāved). This sound change is also found in later forms of the Old East Frisian dialects. Old West Frisian also exhibits several vowel breaking processes. One is a process called "Jorwert breaking", in which long front vowels followed by w were converted into rising diphthongs: [iːw], [eːw], and [ɛːw] became [juːw], [joːw], and [jɔːw], respectively. The j was sometimes deleted when it followed an r. Before consonant clusters beginning with a liquid consonant, e is typically raised to i. In another process, called "late Old West Frisian breaking", in consonant clusters where l preceded d, k, n, or r, the preceding e was lengthened, diphthongized, and the stress shifted to the second element. This process can be seen in examples such as feld lengthening to fēld before breaking into fiēld; stress originally fell on the first element, then shifted to the final one. Before the cluster nd, e diphthongizes to ei. In the sequence -we-, both elements merged into -o-. The diphthong iā raised to iē, pronounced [jɛː]. The voiceless dental fricative th became t word-initially, and the voiced dental fricative, also represented as th, became d word-initially and word-medially.
Between vowels, d – including instances from earlier dental fricatives – was elided, as in snīa ('to cut'; from earlier snītha). Word-final d was devoiced, and u was raised to o before nasal consonants.

Morphology

Old Frisian distinguished between three grammatical genders: masculine, feminine, and neuter. Case appears to have been somewhat variable; while the nominative, accusative, genitive, and dative cases are abundant, the instrumental case was preserved only in some fossilized phrases, and a locative case has been documented in a few attestations. Only two grammatical numbers are attested in Old Frisian (singular and plural), though a dual number is attested in both Insular and Mainland forms of North Frisian, becoming obsolete during the early 20th century. Old Frisian likely had a dual number as well, but the legal context in which most attestations occur did not give cause for its use. Old Frisian did not have reflexive pronouns for most of its history; although the inherited reflexive sīn is attested, it displaced the expected neuter genitive singular pronoun *his, and the language instead used the accusative case to express the reflexive grammatical function. Pronouns in Old Frisian are attested in only four cases: nominative, accusative, genitive, and dative. As in other Ingvaeonic languages such as Old English and Old Saxon, there is no distinction between the accusative and dative, in contrast with other West Germanic languages like Old High German. Old West Frisian innovated the second-person plural form iemman, sometimes rendered as iemma, a combined form composed of jī and man (literally 'you men'). This form did not decline for case, and jī remained the polite form of address. Old Frisian had cliticized pronouns which were attached to the ends of words; their use has made translation more difficult, since they are not marked as distinct from other homonymic suffixes.
Possessive pronouns declined like strong adjectives, and interrogative pronouns did not decline for grammatical gender. The interrogative pronoun hwet ('what') is sometimes marked for number, but only in the accusative and dative forms. The interrogative pronoun hwa ('who') was typically pronounced with a short vowel, but pronounced long utterance-finally. Pronominal forms were sometimes used to recapitulate nouns and other pronouns in order to establish clarity, as in the following:

Thi blata thi is lethost allera nata.
that {poor man} that is {most miserable} all-GEN-PL companion-GEN-PL
'The poor man, he is the most miserable of all companions.'

Old Frisian nouns are classified into three archetypes. Type I are weak/consonant-stemmed nouns, type II are strong/vowel-stemmed nouns, and type III is a catch-all category which mainly comprises other kinds of consonant-stemmed nouns whose Indo-European reflexes marked case directly on the root word. Masculine words ending in -a and feminine or neuter words ending in -e are classed in type I, though there are only two neuter words in this type: āre ('ear') and āge ('eye'). Type II comprises a wide variety of strong masculine nouns and predominately abstract feminine nouns. The neuter suffix -skipi or -skipe also governs the type II paradigm, though this is attested as a feminine suffix as well. Below is an example of an n-stem declension, a kind of type I declension pattern: Heavy syllables in the stem – that is, stems with either a long vowel or a word-final consonant cluster – have an influence on the pattern of type II declensions. Traditionally ending in -u, heavy a-stems lose the pluralizing suffix, making the nominative and accusative forms of the plural identical to the singular.
Below are examples of a-stem declensions within the type II paradigm: Certain words have irregular plurals due to phonological processes, such as dei ('day') and degar ('days'), which developed based on vowel fronting and velar palatalization in the former but not in the latter. These irregularities do not affect a word's paradigm classification. All nouns in the ō-stem declension were feminine. The nominative singular -e in these terms comes from an originally accusative form. Below is an example of the ō-stem paradigm: Verbs in Old Frisian comprised four types: strong, weak, preterite-present, and anomalous. In general and with few exceptions, the only productive verb declension was the weak paradigm. Some paradigm leveling to weak declensions occurred among strong verbs in later forms of the language. The anomalous class of verbs is a composite class comprising suppletive verbs, verbs without clear preterite forms, and verbs with defective or missing declension forms. In general, verbs tended to end in either -a or -ia, with later forms reduced to -e or -ie, respectively. Noteworthy exceptions include gān and stān in Old West Frisian; this word-final -n became more widespread in monosyllabic verbs in later forms of that dialect, such as in dwān ('to do') and siān ('to see'). Infinitive forms used the lengthened suffix -ane after the word tō – used to express purpose – such as in the phrase tō farane ('to travel'). In Old Weser Frisian and Old Ems Frisian, present participles and gerunds had identical forms. Like modern English, the conjunction thet ('that') was sometimes omitted after verbs of expression in some contexts (Tha spreken se hia ne kuden. 'Then they said [that] they were unable to.'). A strong verb is identified by four constituent parts – the infinitive, the first- and third-person singular preterite, the plural preterite, and the past participle – which are distinguished by vowel gradation: changes in vowel quality or length that signal a change in meaning.
As with nominal declensions, irregularities with phonological explanations are present and similarly do not change classification. There were six classes of strong verbs in Old Frisian, with a seventh catch-all category. Classes IV and V became functionally identical after i-mutation, a morphophonological change which obscured the distinction between the historical *i and *u in some contexts, and are distinguished only by historical provenance. Examples of verbal paradigms can be seen below: Old Frisian had only two weak verb classes; Gothic had twice as many, while Old Norse, Old English, Old Saxon, and Old High German each had three. Class I weak verbs comprised verbs which originally had a suffix, *-jan, which created causative verbs from strong verb stems and factitive verbs from nouns and adjectives, such as dēma ('to judge') from dōm ('judgement'). Morphophonologically, the *j affected consonants through assibilation and the vowels through mutation. Class I weak verbs have the past tense suffix -de, or -te after voiceless consonants. Geminated consonants become simple in the preterite and past participle. By contrast, class II weak verbs are typically those which end in -ia. These verbs have their past tense marked by the deletion of the i and the addition of the suffix -ade; the past participle is formed with the same deletion and a simple -ad suffix. Later forms of the suffixes are -ede and -ed, respectively. In late Old West Frisian, these past tense suffixes were deleted. Class II has remained productive into the modern period; Frisian is the only branch of the West Germanic languages to have preserved this class of verbs. Germanic languages also have a verb class – the preterite-presents – in which a form resembling a past-tense strong verb supplies the present-tense meaning while the past-tense form is re-formed with a weak verbal suffix; infinitive forms are also formed through innovation.
These verbs exhibit the vowel alternations expected of strong verbs in some forms, while other forms are in line with expected weak verb declensions. Each such verb is categorized into whichever of the six strong verb classes its strong form derives from. Syntax Case usage in Old Frisian did not vary much from that of other contemporary Germanic languages. The nominative case was used for subjects and subject complements, though it was also used in vocative contexts. While the main use of the accusative was to mark the direct object of a verb, it was also used in temporal and spatial expressions, such as mentioning spaces of time (niugen monath 'nine months') or distances (Hi gunge tha niugen heta skera. 'He should walk the nine hot plowshares.'). Genitive usage was complex and multifaceted; it marked possession and relationships, but was also used to mark adverbs and had both partitive and numerical functions, including measures (tha wi sigun hundred folkes santon 'when we sent seven hundred [armed] men') and counting (thritich fethma 'thirty fathoms'). The dative case was also complex. Although it marked the indirect object of a ditransitive verb, it was sometimes used for the direct objects of transitive verbs, such as helpa ('to help'). The dative shared some overlap in function with the genitive, including its use in adverbial phrases and measurements. Dative constructions are also used to mark the benefactive, such as in the sentence God him reste ('God rested [for himself]'). A number of adjectives govern the dative as well, typically marking either physical or emotional closeness. As the case system began to break down in Old Frisian, authors – especially those of legal documents – came to rely heavily on word order and changed the use of prepositions. By late Old Frisian, case marking was optional. Old Frisian marked for two tenses in the verbal root: simple present and simple past, also called the simple preterite.
All other tenses, called compound tenses, were expressed through periphrasis using auxiliary verbs. While these tenses were not common in earlier forms of the language, they became more popular over time. Compound tenses used the auxiliaries meaning 'to have' (hebba in Old East Frisian, habba in Old West Frisian) and 'to be' (wesa). The combination of hebba/habba and the past participle was used to express the past perfective and, less commonly, the pluperfect. These usages were largely constrained to dependent clauses. The use of wesa is less clear, but it appears to have functioned as something of a present progressive when combined with a present participle. It is often difficult to differentiate between a progressive semantic meaning and a copular relationship. Particularly with verbs of motion, wesa was also used in some intransitive contexts to express changes of state in the perfect or pluperfect. The perfect of wesa was formed with hebba/habba, though this was uncommon in earlier forms of the language. The passive voice was typically constructed with the verb wertha ('to become') and the past participle, though wesa and the past participle could be used to form a perfective passive. The combination of wesa and the present participle was used for the durative aspect, while the future tense used the combination of the auxiliary skela and the infinitive. Non-auxiliary verbs, such as biginna ('to begin') and gunga ('to go'), were used with the infinitive to express an inchoative aspect. Similarly, verbs like dwā ('to do') and lēta ('to let') were used to form the causative. The language also marked for three moods in the root: indicative for statements of fact or observations, subjunctive for subjective thoughts including guesswork and conjecture, and imperative for commands. The indicative and subjunctive moods may be used next to each other in different clauses of the same sentence.
The infinitive was used in several ways, but the inflected infinitive – an infinitive preceded by tō – operated as a gerund. This inflected form was used to express purpose, and sentences containing it would often drop the subject and the associated finite verb. A unique construction using the uninflected infinitive, called the accusative-plus-infinitive construction, was sometimes used as a complement, as in tha segen hia anne thretundista sitta ('then they saw sitting a thirteenth [man]'). Word order in Old Frisian varied widely depending on context and function. The language's constituent word order is generally described as subject–object–verb. Dependent clauses strongly tend towards this word order as well, though some departures from this trend are attested. However, analysis of the existing corpora involving charter documents shows that about 60% of dependent sentences with direct objects have a subject–verb–object construction. Object–verb–subject constructions were commonly employed as a method of topicalization, and both conditional and interrogative clauses were typically verb–subject–object. Dependent conditional clauses use object–subject–verb constructions as well when interrogative pronouns are in grammatical cases other than the nominative. In oblique contexts, pronouns may be moved to between the verb and the subject when the subject is in a later position than the verb, leading to a verb–object–subject word order. This word order is completely absent in modern Frisian. An example of this is the following:

tha het se thi koning alle heran
then called them the king all lords
'then the king called them all lords'

Like all other Germanic languages at some point in their history, Old Frisian exhibits properties of verb-second word order, though its application is inconsistent.
This means that the verb appears in the second position in independent clauses with a finite verb, but reverts to verb-final word order in subordinate clauses. Old Frisian sentences almost always required a subject, and the language often employed dummy subjects. This appears to have been a syntactic necessity even when the subject had no semantic function. Examples include verbs involving the weather and impersonal passives, respectively demonstrated below:

hwant hit wayt ende stormit alle daghen
for it blows and storms all days
'for it blows and storms all days'

hwersar fuchten is in tha godes huse
{when there} fought is in the god-GEN house
'when there has been fought in the house of God'

In Old Frisian, negative sentences could be derived from the simple addition of a negative element, such as naet ('not') or nimmen ('nobody'), or from double negative constructions. While there is a preference in the language for double negatives, all three stages of Jespersen's cycle are present in the existing corpora, though neither of the two Rüstringer codices – the two oldest codices – exhibits the last stage. The negative marker ne precedes the finite verb in both kinds of constructions. Examples include:

truch thet hia ne mughen cuma
through that they NEG may come
'through that, they may not come'

thet hi ter nauuet cuma ne machte
that he there not come NEG might
'that he might not come there'

The negative marker ne often cliticized to the following auxiliary, as in nabba ('to not have'; from ne + habba) and nis ('is not'; from ne + is). In sentences where the finite verb is elided, the negative marker is also elided, and no words or affixes can come between them.
For these reasons, the negative marker and the verb are seen as a unified syntactic unit, with ne serving the function of a syntactic clitic. This is not the case for other negative elements, such as naet, which can be separated from the verb by other syntactic elements. Contrastive examples of this are demonstrated below, both from the Skeltana Riucht:

dat hi dine kempa winna ne mey
that he the champion defeat NEG may
'that he may not defeat the champion'

ief hi dine kempa naet winna mey
if he the champion not defeat may
'if he may not defeat the champion'

In sentences where the only verb is a finite verb in a main clause, the use of naet is mostly restricted to the sentence-final position, but in subordinate clauses with double negatives, naet is promoted to before ne.
========================================
[SOURCE: https://en.wikipedia.org/wiki/GPT_(language_model)] | [TOKENS: 2754]
Contents Generative pre-trained transformer A generative pre-trained transformer (GPT) is a type of large language model (LLM) that is widely used in generative AI chatbots. GPTs are based on a deep learning architecture called the transformer. They are pre-trained on large datasets of unlabeled content, and able to generate novel content. OpenAI was the first to apply generative pre-training to the transformer architecture, introducing the GPT-1 model in 2018. The company has since released many bigger GPT models. The chatbot ChatGPT, released in late 2022 (using GPT-3.5), was followed by many competitor chatbots using their own generative pre-trained transformers to generate text, such as Gemini, DeepSeek and Claude. GPTs are primarily used to generate text, but can be trained to generate other kinds of data. For example, GPT-4o can process and generate text, images and audio. To improve performance on complex tasks, some GPTs, such as OpenAI o3, allocate more computation time analyzing the problem before generating an output, and are called reasoning models. In 2025, GPT-5 was released with a router that automatically selects whether to use a faster model or slower reasoning model based on the provided task. Background During the 2010s, improved machine learning algorithms, more powerful computers, and an increase in the amount of digitized material allowed for an AI boom. Separately, the concept of generative pre-training (GP) was a long-established technique in machine learning. GP is a form of self-supervised learning wherein a model is first trained on a large, unlabeled dataset (the "pre-training" step) to learn to generate data points. This pre-trained model is then adapted to a specific task using a labeled dataset (the "fine-tuning" step). The transformer architecture for deep learning is the core technology of a GPT. Developed by researchers at Google, it was introduced in the paper "Attention Is All You Need", which was released on June 12, 2017. 
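The self-supervised "pre-training" step described above can be sketched in a few lines: raw, unlabeled text is turned into (context, next-token) pairs, so the model has a prediction target without any human labels. The whitespace tokenizer below is a toy stand-in for the subword tokenizers real GPTs use.

```python
# Minimal sketch of the generative pre-training objective: derive
# supervised (context, next-token) examples from unlabeled text.
# Toy whitespace tokenization is an assumption for illustration only;
# real GPT models tokenize into subword units.

def make_pretraining_pairs(corpus: str, context_size: int = 3):
    tokens = corpus.split()  # toy tokenizer
    pairs = []
    for i in range(len(tokens) - context_size):
        context = tokens[i : i + context_size]
        target = tokens[i + context_size]  # token the model must predict
        pairs.append((context, target))
    return pairs

pairs = make_pretraining_pairs("the cat sat on the mat", context_size=2)
# pairs[0] is (["the", "cat"], "sat")
```

The later "fine-tuning" step uses the same training machinery but on a smaller labeled dataset for a specific task.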
The transformer architecture solved many of the performance issues that were associated with older recurrent neural network (RNN) designs for natural language processing (NLP). The architecture's use of an attention mechanism allows models to process entire sequences of text at once, enabling the training of much larger and more sophisticated models. Since 2017, available transformer-based NLP systems have been capable of processing, mining, organizing, connecting, contrasting, and summarizing texts as well as answering questions from textual input.[citation needed] History On June 11, 2018, OpenAI researchers and engineers published a paper called "Improving Language Understanding by Generative Pre-Training", which introduced GPT-1, the first GPT model. It was designed as a transformer-based large language model that used generative pre-training (GP) on BookCorpus, a diverse text corpus, followed by discriminative fine-tuning to focus on specific language tasks. This semi-supervised approach was seen as a breakthrough. Previously, the best-performing neural models in natural language processing (NLP) had commonly employed supervised learning from large amounts of manually labeled data – training a large language model with this approach would have been prohibitively expensive and time-consuming. On February 14, 2019, OpenAI introduced GPT-2, a larger model that could generate coherent text. Created as a direct scale-up of its predecessor, it had both its parameter count and dataset size increased by a factor of 10. GPT-2 has 1.5 billion parameters and was trained on WebText, a 40-gigabyte dataset of 8 million web pages. Citing risks of malicious use, OpenAI opted for a "staged release", initially publishing smaller versions of the model before releasing the full 1.5-billion-parameter model in November. 
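The attention mechanism mentioned above can be sketched as scaled dot-product attention: every position computes similarity scores against every other position in one matrix operation, which is what lets a transformer process an entire sequence at once rather than token by token as an RNN does. This is a minimal single-head sketch; shapes and names are illustrative.

```python
import numpy as np

# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# One matrix multiply scores all position pairs simultaneously.

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

seq_len, d_model = 4, 8
x = np.random.default_rng(0).normal(size=(seq_len, d_model))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
# out has the same shape as the input sequence
```

Real GPT models stack many such attention layers (with multiple heads, learned projections, and a causal mask so each position only attends to earlier ones), but the core operation is the one shown.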
On February 10, 2020, Microsoft introduced its Turing Natural Language Generation, which it claimed was the "largest language model ever published at 17 billion parameters." The model outperformed all previous language models at a variety of tasks, including summarizing texts and answering questions. On May 28, 2020, OpenAI introduced GPT-3, a model with 175 billion parameters that was trained on a larger dataset compared to GPT-2. It marked a significant advancement in few-shot and zero-shot learning abilities. With few examples, it could perform various tasks that it was not explicitly trained for. Following the release of GPT-3, OpenAI started using reinforcement learning from human feedback (RLHF) to align models' behavior more closely with human preferences. This led to the development of InstructGPT, a fine-tuned version of GPT-3. OpenAI further refined InstructGPT to create ChatGPT, the flagship chatbot product of OpenAI that was launched on November 30, 2022. ChatGPT was initially based on GPT-3.5, but it was later transitioned to the GPT-4 model, which was released on March 14, 2023. GPT-4 was also integrated into parts of several applications, including Microsoft Copilot, GitHub Copilot, Snapchat, Khan Academy, and Duolingo. The immense popularity of ChatGPT spurred widespread development of competing GPT-based systems from other organizations. EleutherAI released a series of open-weight models, including GPT-J in 2021. Other major technology companies later developed their own GPT models, such as Google's PaLM and Gemini as well as Meta AI's Llama. Many subsequent GPT models have been trained to be multimodal (able to process or to generate multiple types of data). For example, GPT-4o can both process and generate text, images, and audio. 
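The few-shot ability described above is exercised purely through the prompt: a handful of worked examples are placed in front of the query, with no weight updates. The sentiment-labeling task and formatting below are invented for illustration, not taken from any specific system.

```python
# Sketch of few-shot prompting: demonstrations are concatenated into the
# prompt and the model is asked to continue the pattern. The task here
# (sentiment labeling) is a hypothetical example.

def build_few_shot_prompt(examples, query):
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("Great film!", "positive"), ("Waste of time.", "negative")],
    "I loved every minute.",
)
# The model would be asked to complete the text after the final "Sentiment:".
```

Zero-shot prompting is the same construction with the examples list empty: only the query and the pattern to complete.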
Additionally, GPT models like o3 and DeepSeek R1 have been trained with reinforcement learning to generate multi-step chain-of-thought reasoning before producing a final answer, which helps to solve complex problems in domains such as mathematics. On August 7, 2025, OpenAI released GPT-5, which includes a router that automatically selects whether to use a faster model or a slower reasoning model based on the task. Foundation models A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks. The most recent model in OpenAI's GPT-n series is GPT-5. Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API, and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs). Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA. Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text). Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion and parallel decoding. Such models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images. Task-specific models A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering. An important example of this is fine-tuning models to follow instructions, which is a fairly broad task but more targeted than a foundation model.
In January 2022, OpenAI introduced "InstructGPT" – a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models. Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings. Other instruction-tuned models have been released by others, including a fully open version. Another (related) kind of task-specific models are chatbots, which engage in human-like conversation. In November 2022, OpenAI launched ChatGPT – an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT. They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft), and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM). Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, like developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user. This is known as an AI agent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well. GPT systems can be directed toward particular fields or domains. 
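The recursive agent pattern described above can be sketched as a simple loop in which each step's output is folded back into the next prompt. The `llm()` function below is a hypothetical stub standing in for a real GPT API call; no actual library interface is assumed.

```python
# Toy sketch of an Auto-GPT-style recursive agent loop: the model's
# previous result becomes part of its next self-instruction.
# llm() is a hypothetical stub, NOT a real API.

def llm(prompt: str) -> str:
    # A real system would call a language-model API here.
    return f"next action given: {prompt[-40:]}"

def run_agent(goal: str, max_steps: int = 3):
    history = []
    prompt = f"Goal: {goal}"
    for _ in range(max_steps):
        result = llm(prompt)            # the model instructs itself
        history.append(result)
        prompt = f"Goal: {goal}\nPrevious result: {result}"
    return history

steps = run_agent("summarize a long report")
```

Real agent frameworks add tool use, memory, and stopping criteria, but the feedback loop shown (result feeds the next prompt) is the defining feature.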
Some reported examples of such models and apps are as follows: Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface, and Google Workspace has available add-ons such as "GPT for Sheets and Docs" – which is reported to aid use of spreadsheet functionality in Google Sheets. Brand issues OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI. In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their AI services would no longer be able to include "GPT" in such names or branding. In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist). As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT", but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT", called GPTs, on the OpenAI site. OpenAI's terms of service say that its subscribers may use "GPT" in the names of these, although it is "discouraged". Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term "GPT" in the field of AI. OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023. In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic. As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S.
trademark does not preclude some level of common-law trademark rights in the U.S., as well as trademark rights in other countries. For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to their specific offerings, over and above its use as a broader technical term for the kind of technology. Some media reports suggested in 2023 that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT, for which OpenAI has separately sought protection (and which it has sought to enforce more strongly). Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted, as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers. In any event, to whatever extent exclusive rights in the term may occur in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion. If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use could still permit non-brand-related usage. In the European Union, the European Union Intellectual Property Office registered "GPT" as a trade mark of OpenAI in spring 2023. However, since spring 2024 the registration has been challenged and is pending cancellation. In Switzerland, the Swiss Federal Institute of Intellectual Property registered "GPT" as a trade mark of OpenAI in spring 2023.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#Clones] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Originally created by Markus "Notch" Persson using the Java programming language, Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. 
A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa. The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes.
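The voxel-grid world model described above can be sketched as a mapping from integer (x, y, z) cells to block types, with mining and placing as the two core operations. A sparse dictionary stands in here for Minecraft's real chunk-based storage, which this does not attempt to reproduce.

```python
# Minimal sketch of a voxel block world: each block occupies one integer
# grid cell; absent cells are air. Mining removes and returns the block,
# placing inserts one. This is an illustrative toy, not Minecraft's
# actual chunk format.

world = {}  # (x, y, z) -> block type

def place_block(pos, block_type):
    world[pos] = block_type

def mine_block(pos):
    # Returns the broken block (e.g. to an inventory), or None for air.
    return world.pop(pos, None)

place_block((0, 64, 0), "dirt")
place_block((1, 64, 0), "stone")
mined = mine_block((0, 64, 0))  # removes the dirt block
```

Because the mapping is keyed by absolute coordinates, a block placed in the air simply keeps its cell, matching the described behavior that most blocks are unaffected by gravity.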
The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. 
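Seed-driven procedural generation, as described above, means the same map seed always yields the same terrain, so a world never needs to be stored in full before it is explored. The hash-based height function below is an illustrative stand-in, not Minecraft's actual noise algorithm.

```python
import hashlib

# Sketch of deterministic, seed-based terrain generation: the height of
# any column (x, z) is a pure function of the seed, so regenerating a
# chunk always reproduces it. The sha256-based "noise" is a toy
# assumption; real Minecraft uses layered gradient noise.

def terrain_height(seed: int, x: int, z: int, base: int = 64) -> int:
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return base + digest[0] % 16  # deterministic height in [base, base + 15]

# The same seed reproduces the same column height on every run.
h1 = terrain_height(42, 100, -7)
h2 = terrain_height(42, 100, -7)
```

Since only the seed needs to be saved, distant terrain can be generated on demand as the player travels, which is how an "effectively infinite" world stays practical.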
The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. 
Killing the dragon opens access to an exit portal; entering it cues the game's ending credits and the End Poem, a roughly 1,500-word work by Irish novelist Julian Gough that takes about nine minutes to scroll past. The poem is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, and regenerates continuously on peaceful difficulty. Upon losing all health, players die. The items in the player's inventory are dropped unless the game is configured otherwise. Players then respawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "XP") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor, and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.
The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath: upon dying, the player must either delete the world or continue exploring it as a spectator. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take damage and are unaffected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers host a wide range of activities, with some servers having their own unique rules and customs.
The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to let players run multiplayer games easily and safely without having to set up their own server. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people, likewise with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Edition Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between Windows 10, iOS, and Android was announced for Realms starting in June 2016, with Xbox One and Nintendo Switch support to follow in 2017, along with planned support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users, and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs, and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms.
The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. 
Another pack, based on Fallout, was released on consoles that December, and for Windows and mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself" and would run only when the image containing the skin was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs, and add-ons from different creators can be bought with "Minecoins", a digital currency purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers.
One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. Among the features Persson explored in RubyDung was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's visual style and gameplay, including the return of the first-person mode, the "blocky" aesthetic, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal.
Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who have purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in its update strategy; rather than releasing large updates annually, it opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build, named Minecraft Classic, was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues.
On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen-space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009 and culminated on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh.
On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition.
On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates bringing it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions.
Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows. On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in autumn 2018; it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta of the Education Edition was made available to Chromebooks compatible with the Google Play Store, and the full game was released there on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition, the Windows 10 Edition, is exclusive to Microsoft's Windows 10 and Windows 11 operating systems.
The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features to this version of Minecraft such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other, though the two games would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for the Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints.
Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. In September 2024, the Minecraft team announced that it would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced on creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously.
The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the package from Ableton Live, along with several additional plug-ins. Speaking on them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games of the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not since been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception

Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable".
Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. 
The PlayStation 4 Edition was the best received port to date, being praised for having worlds 36 times larger than the PlayStation 3 Edition's and described as nearly identical to the Xbox One Edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time.
As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country.
Minecraft helped boost Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and the Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award.
In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items.
The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022.
Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters.
Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired. Cultural impact In September 2019, The Guardian ranked Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game went on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform. By 2018, it was still YouTube's biggest game globally.
Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch by foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation.
Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase.
The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (ranking as the country with the 30th smallest elevation span), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang.
MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", often due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans.
A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). The fears of fans ultimately proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA notice was later withdrawn.
Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates. In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#References] | [TOKENS: 12858]
Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; following its full release, Jens "Jeb" Bergensten was handed control of its development. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios holds the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends.
A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most instead maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or trade with villagers (NPCs), exchanging emeralds for different goods and vice versa. The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes.
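The 20-minute figure follows from the game's fixed simulation rate: Minecraft runs at 20 ticks per second, and one full day/night cycle spans 24,000 ticks. A minimal sketch of that arithmetic (the halfway split between day and night is an approximation; the exact dusk and dawn windows differ in-game):

```python
TICKS_PER_SECOND = 20   # Minecraft's fixed simulation rate
TICKS_PER_DAY = 24_000  # ticks in one full day/night cycle

def cycle_length_minutes() -> float:
    """Real-time length of one full in-game day, in minutes."""
    return TICKS_PER_DAY / TICKS_PER_SECOND / 60

def is_daytime(tick: int) -> bool:
    """Roughly: the first half of the cycle is day, the second night."""
    return (tick % TICKS_PER_DAY) < TICKS_PER_DAY // 2

print(cycle_length_minutes())  # 20.0
```

The same tick clock governs other timed mechanics, such as the five-minute despawn timer on dropped items (5 × 60 × 20 = 6,000 ticks).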
The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. 
The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. 
Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem, which takes about nine minutes to scroll past, is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor, and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.
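The procedurally generated worlds described earlier rest on one key property: the map seed plus a block's coordinates fully determine the terrain, so the same seed always reproduces the same world without storing any world data in advance. A toy sketch of that property (the real game layers octaves of gradient noise rather than hashing; `column_height` is a hypothetical helper for illustration, not Minecraft's actual algorithm):

```python
import hashlib

def column_height(seed: int, x: int, z: int, max_height: int = 64) -> int:
    """Derive a deterministic terrain height for the column at (x, z).

    Illustrative only: seed + coordinates fully determine the output,
    so any region of the world can be regenerated on demand as players
    explore, rather than being stored up front.
    """
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return digest[0] % max_height

# The same seed reproduces the same terrain on every run...
first = [column_height(404, x, 0) for x in range(8)]
again = [column_height(404, x, 0) for x in range(8)]
assert first == again
# ...which is why a manually entered seed recreates a known world.
```

Determinism is also what makes shared seeds meaningful in the community: two players entering the same seed explore the same landscape.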
The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing the player to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to unlimited quantities of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world, and their characters do not take damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. 
The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced that support for cross-platform play between Windows 10, iOS, and Android platforms would be added through Realms starting in June 2016, with Xbox One and Nintendo Switch support, as well as support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. 
The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. 
Another based on Fallout was released on consoles that December, and for Windows and mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. 
Development 
Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. 
One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, inviting any corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. 
Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received major annual updates—free to players who have purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. 
On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. 
On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang apparently taking ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned, due to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. 
On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. 
Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows. On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Chromebooks compatible with the Google Play Store. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. 
The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. 
Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. 
Music and sound design 
Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. 
The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the digital audio workstation Ableton Live, along with several additional plug-ins. Speaking about the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clock in at over three hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." 
Reception 
Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". 
Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. 
The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. 
As of 10 October 2014[update], the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. As of 4 April 2014[update], the Xbox 360 version has sold 12 million copies. In addition, Minecraft: Pocket Edition had sold 21 million copies. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, with the Vita version outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. 
Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. 
In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. 
The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. 
Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. 
Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. By 2018, it was still YouTube's biggest game globally. 
Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It grossed $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. 
Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. 
The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in countries where websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. 
MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. 
A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. 
Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Interactive_fiction] | [TOKENS: 5526]
Contents Interactive fiction Interactive fiction (IF) is software simulating environments in which players use text commands to control characters and influence the environment. Works in this form can be understood as literary narratives, either in the form of interactive narratives or interactive narrations. These works can also be understood as a form of video game, either in the form of an adventure game or role-playing game. In common usage, the term refers to text adventures, a type of adventure game where the entire interface can be "text-only"; however, graphical text adventure games, where the text is accompanied by graphics (still images, animations, or video), still fall under the text adventure category if the main way to interact with the game is by typing text. Some users of the term distinguish between "puzzle-free" interactive fiction, which focuses on narrative, and "text adventures", which focus on puzzles. Due to their text-only nature, these games sidestepped the problem of writing for widely divergent graphics architectures. This meant that interactive fiction games were easily ported across all the popular platforms at the time, including CP/M (not known for gaming or strong graphics capabilities). The number of interactive fiction works is increasing steadily as new ones are produced by an online community, using freely available development systems. The term can also be used to refer to literary works that are not read in a linear fashion, known as gamebooks, where the reader is instead given choices at different points in the text; these decisions determine the flow and outcome of the story. The most famous example of this form of printed fiction is the Choose Your Own Adventure book series, and the collaborative "addventure" format has also been described as a form of interactive fiction. 
The term "interactive fiction" is sometimes used also to refer to visual novels, a type of interactive narrative software popular in Japan. Medium Text adventures are one of the oldest types of computer games and form a subset of the adventure genre. The player uses text input to control the game, and the game state is relayed to the player via text output. Interactive fiction usually relies on reading from a screen and on typing input, although text-to-speech synthesizers allow blind and visually impaired users to play interactive fiction titles as audio games. Input is usually provided by the player in the form of simple sentences such as "get key" or "go east", which are interpreted by a text parser. Parsers may vary in sophistication; the first text adventure parsers could only handle two-word sentences in the form of verb-noun pairs. Later parsers, such as those built on ZIL (Zork Implementation Language), could understand complete sentences. Later parsers could handle increasing levels of complexity, parsing sentences such as "open the red box with the green key then go north". This level of complexity is the standard for works of interactive fiction today. Despite their lack of graphics, text adventures include a physical dimension where players move between rooms. Many text adventure games boasted their total number of rooms to indicate how much gameplay they offered. These games are unique in that they may create an illogical space, where going north from area A takes you to area B, but going south from area B does not take you back to area A. This can create mazes that do not behave as players expect, and thus players must maintain their own map. 
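The two-word verb-noun parsing and the non-reciprocal room layouts described above can be sketched in a few lines of Python. This is a hypothetical illustration in the spirit of early text adventures; the room names, descriptions, and responses are invented, not taken from any actual IF engine.

```python
# Exits form a directed graph: going north from the cave leads to the
# ledge, but the ledge has no south exit back -- the kind of
# non-reciprocal layout that forces players to draw their own maps.
ROOMS = {
    "cave":  {"description": "A damp limestone cave.", "exits": {"north": "ledge"}},
    "ledge": {"description": "A narrow ledge.", "exits": {"west": "cave"}},
}

def parse(command):
    """Split player input into a (verb, noun) pair, as the earliest
    parsers did; anything longer than two words is rejected."""
    words = command.lower().split()
    if len(words) == 1:
        return words[0], None
    if len(words) == 2:
        return words[0], words[1]
    return None, None

def step(location, command):
    """Apply one parsed command and return (new_location, response)."""
    verb, noun = parse(command)
    if verb == "go":
        if noun in ROOMS[location]["exits"]:
            location = ROOMS[location]["exits"][noun]
            return location, ROOMS[location]["description"]
        return location, "You can't go that way."
    if verb == "look":
        return location, ROOMS[location]["description"]
    return location, "I don't understand that."

loc = "cave"
loc, text = step(loc, "go north")   # moves to the ledge
loc, text = step(loc, "go south")   # no such exit: location unchanged
```

A later-generation parser like Infocom's would instead tokenize full sentences against a vocabulary and grammar tables, but the verb-noun dispatch above is essentially how two-word-era games worked.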
These illogical spaces are much rarer in today's era of 3D gaming, and the Interactive Fiction community in general decries the use of mazes entirely, claiming that mazes have become arbitrary 'puzzles for the sake of puzzles' and that they can, in the hands of inexperienced designers, become immensely frustrating for players to navigate. Interactive fiction has much in common with Multi-User Dungeons ('MUDs'). MUDs, which became popular in the mid-1980s, rely on a textual exchange and accept similar commands from players as do works of IF; however, since interactive fiction is single player, and MUDs, by definition, have multiple players, they differ enormously in gameplay styles. MUDs often focus gameplay on activities that involve communities of players, simulated political systems, in-game trading, and other gameplay mechanics that are not possible in a single player environment. Interactive fiction features two distinct modes of writing: the player input and the game output. As described above, player input is expected to be in simple command form (imperative sentences). A typical command may be:

> PULL Lever

The responses from the game are usually written from a second-person point of view, in present tense. This is because, unlike in most works of fiction, the main character is closely associated with the player, and the events are seen to be happening as the player plays. While older text adventures often identified the protagonist with the player directly, newer games tend to have specific, well-defined protagonists with separate identities from the player. The classic essay "Crimes Against Mimesis" discusses, among other IF issues, the nature of "You" in interactive fiction. A typical response might look something like this, the response to "look in tea chest" at the start of Curses:

"That was the first place you tried, hours and hours ago now, and there's nothing there but that boring old book. You pick it up anyway, bored as you are." 
Many text adventures, particularly those designed for humour (such as Zork, The Hitchhiker's Guide to the Galaxy, and Leather Goddesses of Phobos), address the player with an informal tone, sometimes including sarcastic remarks (see the transcript from Curses, above, for an example). The late Douglas Adams, in designing the IF version of his 'Hitchhiker's Guide to the Galaxy', created a unique solution to the final puzzle of the game: the game requires the one solitary item that the player didn't choose at the outset of play. Some IF works dispense with second-person narrative entirely, opting for a first-person perspective ('I') or even placing the player in the position of an observer, rather than a direct participant. In some 'experimental' IF, the concept of self-identification is eliminated, and the player instead takes the role of an inanimate object, a force of nature, or an abstract concept; experimental IF usually pushes the limits of the concept and challenges many assumptions about the medium. History Though neither program was developed as a narrative work, the software programs ELIZA (1964–1966) and SHRDLU (1968–1970) can formally be considered early examples of interactive fiction, as both programs used natural language processing to take input from their user and respond in a virtual and conversational manner. ELIZA simulated a psychotherapist that appeared to provide human-like responses to the user's input, while SHRDLU employed an artificial intelligence that could move virtual objects around an environment and respond to questions asked about the environment's shape. The development of effective natural language processing would become an essential part of interactive fiction development. 
Around 1975, Will Crowther, a programmer and an amateur caver, wrote the first text adventure game, Adventure (originally called ADVENT because a filename could only be six characters long in the operating system he was using, and later named Colossal Cave Adventure). Having just gone through a divorce, he was looking for a way to connect with his two young children. Over the course of a few weekends, he wrote a text-based cave exploration game that featured a sort of guide/narrator who spoke in full sentences and who understood simple two-word commands that came close to natural English. Adventure was programmed in Fortran for the PDP-10. Crowther's original version was an accurate simulation of part of the real-life Mammoth Cave, but also included fantasy elements (such as axe-wielding dwarves and a magic bridge). Stanford University graduate student Don Woods discovered Adventure while working at the Stanford Artificial Intelligence Laboratory, and in 1977 obtained and expanded Crowther's source code (with Crowther's permission). Woods's changes were reminiscent of the writings of J. R. R. Tolkien, and included a troll, elves, and a volcano, which some claim is based on Mount Doom, but Woods says it was not. In early 1977, Adventure spread across ARPAnet, and has survived on the Internet to this day. The game has since been ported to many other operating systems, and was included with the floppy-disk distribution of Microsoft's MS-DOS 1.0 OS. Adventure is a cornerstone of the online IF community; there currently exist dozens of different independently programmed versions, with additional elements, such as new rooms or puzzles, and various scoring systems. The popularity of Adventure led to the wide success of interactive fiction during the late 1970s, when home computers had little, if any, graphics capability. 
Many elements of the original game have survived into the present, such as the command 'xyzzy', which is now included as an Easter egg in modern games, such as Microsoft Minesweeper. Adventure was also directly responsible for the founding of Sierra Online (later Sierra Entertainment); Ken and Roberta Williams played the game and decided to design one of their own, but with graphics. Adventure International was founded by Scott Adams (not to be confused with the creator of Dilbert). In 1978, Adams wrote Adventureland, which was loosely patterned after the (original) Colossal Cave Adventure. He took out a small ad in a computer magazine in order to promote and sell Adventureland, thus creating the first commercial adventure game. In 1979 he founded Adventure International, the first commercial publisher of interactive fiction. That same year, Dog Star Adventure was published in source code form in SoftSide, spawning legions of similar games in BASIC. The largest company producing works of interactive fiction was Infocom, which created the Zork series and many other titles, among them Trinity, The Hitchhiker's Guide to the Galaxy and A Mind Forever Voyaging. In June 1977, Marc Blank, Bruce K. Daniels, Tim Anderson, and Dave Lebling began writing the mainframe version of Zork (also known as Dungeon), at the MIT Laboratory for Computer Science. The game was programmed in a computer language called MDL, a variant of LISP. The term 'Implementer' was the self-given name of the creators of the text adventure series Zork; for this reason, game designers and programmers can be referred to as implementers, often shortened to 'Imps', rather than writers. In early 1979, the game was completed. Ten members of the MIT Dynamics Modelling Group went on to join Infocom when it was incorporated later that year. 
In order to make its games as portable as possible, Infocom developed the Z-machine, a custom virtual machine that could be implemented on a large number of platforms, and took standardized "story files" as input. In a non-technical sense, Infocom was responsible for developing the interactive style that would be emulated by many later interpreters. The Infocom parser was widely regarded as the best of its era. It accepted complex, complete sentence commands like "put the blue book on the writing desk" at a time when most of its competitors' parsers were restricted to simple two-word verb-noun combinations such as "put book". The parser was actively upgraded with new features like undo and error correction, and later games would 'understand' multiple-sentence input: 'pick up the gem and put it in my bag. take the newspaper clipping out of my bag then burn it with the book of matches'. Several companies offered optional commercial feelies (physical props associated with a game). The tradition of 'feelies' (and the term itself) is believed to have originated with Deadline (1982), the third Infocom title after Zork I and II. When writing this game, it was not possible to include all of the information in the limited (80KB) disk space, so Infocom created the first feelies for this game; extra items that gave more information than could be included within the digital game itself. These included police interviews, the coroner's findings, letters, crime scene evidence and photos of the murder scene. These materials were very difficult for others to copy or otherwise reproduce, and many included information that was essential to completing the game. Seeing the potential benefits of both aiding game-play immersion and providing a measure of creative copy-protection, in addition to acting as a deterrent to software piracy, Infocom and later other companies began creating feelies for numerous titles. 
In 1987, Infocom released a special version of the first three Zork titles together with plot-specific coins and other trinkets. This concept was expanded as time went on, such that later game feelies would contain passwords, coded instructions, page numbers, or other information required to successfully complete the game. Interactive fiction became a standard product for many software companies. In 1982, Softline wrote that "the demands of the market are weighted heavily toward hi-res graphics" in games like Sierra's The Wizard and the Princess and its imitators. Such graphic adventures became the dominant form of the genre on computers with graphics, like the Apple II. By 1982 Adventure International had begun releasing versions of its games with graphics. The company went bankrupt in 1985. Synapse Software and Acornsoft also closed in 1985, leaving Infocom as the leading company producing text-only adventure games on the Apple II, with sophisticated parsers and writing, and still advertising its lack of graphics as a virtue. The company was bought by Activision in 1986 after the failure of Cornerstone, Infocom's database software program, and stopped producing text adventures a few years later. Soon after, Telarium (formerly Trillium) also closed. Probably the first commercial work of interactive fiction produced outside the U.S. was the dungeon crawl game Acheton, produced in Cambridge, England, and first commercially released by Acornsoft (later expanded and reissued by Topologika). Other leading companies in the UK were Magnetic Scrolls and Level 9 Computing. Also worthy of mention are Delta 4, Melbourne House, and the homebrew company Zenobi. In the early 1980s, Edu-Ware also produced interactive fiction for the Apple II, as designated by the "if" graphic displayed on startup. Their titles included the Prisoner and Empire series (Empire I: World Builders, Empire II: Interstellar Sharks, Empire III: Armageddon).
In 1981, CE Software published SwordThrust as a commercial successor to the Eamon gaming system for the Apple II. SwordThrust and Eamon were simple two-word parser games with many role-playing elements not available in other interactive fiction. While SwordThrust published seven different titles, it was vastly overshadowed by the non-commercial Eamon system, which allowed private authors to publish their own titles in the series. By March 1984, there were 48 titles published for the Eamon system (and over 270 titles in total as of March 2013). In Italy, interactive fiction games were mainly published and distributed on tapes included with various magazines. The largest number of games were published in the two magazines Viking and Explorer, with versions for the main 8-bit home computers (ZX Spectrum, Commodore 64, and MSX). The software house producing those games was Brainstorm Enterprise, and the most prolific IF author was Bonaventura Di Bello, who produced 70 games in the Italian language. The wave of interactive fiction in Italy lasted for a couple of years, thanks to the various magazines promoting the genre, then faded; it remains to this day a topic of interest for a small group of fans and lesser-known developers, celebrated on websites and in related newsgroups. In Spain, interactive fiction was considered a minority genre and was not very successful. The first Spanish interactive fiction commercially released was Yenght in 1983, by Dinamic Software, for the ZX Spectrum. Later, in 1987, the same company produced an interactive fiction about Don Quijote. After several other attempts, the company Aventuras AD, which emerged from Dinamic, became the main interactive fiction publisher in Spain, with titles including a Spanish adaptation of Colossal Cave Adventure, an adaptation of the Spanish comic El Jabato, and above all the Ci-U-Than trilogy, composed of La diosa de Cozumel (1990), Los templos sagrados (1991) and Chichen Itzá (1992).
During this period, the Club de Aventuras AD (CAAD), the main Spanish-speaking interactive fiction community in the world, was founded; after the end of Aventuras AD in 1992, the CAAD continued on its own, first with its own magazine and then, with the advent of the Internet, with the launch of an active online community that still produces non-commercial interactive fiction today. Legend Entertainment was founded by Bob Bates and Mike Verdu in 1989. It started out from the ashes of Infocom. The text adventures produced by Legend Entertainment used (high-resolution) graphics as well as sound. Some of their titles include Eric the Unready, the Spellcasting series and Gateway (based on Frederik Pohl's novels). The last text adventure created by Legend Entertainment was Gateway II (1992), while the last game ever created by Legend was Unreal II: The Awakening (2003), a well-known first-person shooter using the Unreal Engine for both impressive graphics and realistic physics. In 2004, Legend Entertainment was acquired by Atari, which published Unreal II and released it for both Microsoft Windows and Microsoft's Xbox. Many other companies, such as Level 9 Computing, Magnetic Scrolls, Delta 4 and Zenobi, had closed by 1992. In 1991 and 1992, Activision released The Lost Treasures of Infocom in two volumes, a collection containing most of Infocom's games, followed in 1996 by Classic Text Adventure Masterpieces of Infocom. After the decline of the commercial interactive fiction market in the 1990s, an online community eventually formed around the medium. In 1987, the Usenet newsgroup rec.arts.int-fiction was created, and was soon followed by rec.games.int-fiction. By custom, the topic of rec.arts.int-fiction is interactive fiction authorship and programming, while rec.games.int-fiction encompasses topics related to playing interactive fiction games, such as hint requests and game reviews.
As of late 2011, discussions between writers have mostly moved from rec.arts.int-fiction to the Interactive Fiction Community Forum. One of the most important early developments was the reverse-engineering of Infocom's Z-Code format and Z-machine virtual machine in 1987 by a group of enthusiasts called the InfoTaskForce, and the subsequent development of an interpreter for Z-Code story files. As a result, it became possible to play Infocom's work on modern computers. For years, amateurs within the IF community produced interactive fiction works of relatively limited scope using the Adventure Game Toolkit and similar tools. The breakthrough that allowed the interactive fiction community to truly prosper, however, was the creation and distribution of two sophisticated development systems. In 1987, Michael J. Roberts released TADS, a programming language designed to produce works of interactive fiction. In 1993, Graham Nelson released Inform, a programming language and set of libraries which compiled to a Z-Code story file. Each of these systems allowed anyone with sufficient time and dedication to create a game, and they spurred a boom in the online interactive fiction community. Despite the lack of commercial support, the availability of high-quality tools allowed enthusiasts of the genre to develop new high-quality games. Competitions such as the annual Interactive Fiction Competition for short works, the Spring Thing for longer works, and the XYZZY Awards further helped to improve the quality and complexity of the games. Modern games go much further than the original Adventure style, improving upon Infocom games, which relied extensively on puzzle solving and, to a lesser extent, on communication with non-player characters, to include experimentation with writing and story-telling techniques. While the majority of modern interactive fiction is developed and distributed for free, there are some commercial endeavors.
In 1998, Michael Berlyn, a former Implementer at Infocom, started a new game company, Cascade Mountain Publishing, whose goal was to publish interactive fiction. Despite the interactive fiction community providing social and financial backing, Cascade Mountain Publishing went out of business in 2000. Buster Hudson, developer of The Wizard Sniffer (2017), emphasized that parser-based puzzles can be used to control the pacing or develop a character. Other commercial endeavors include Peter Nepstad's 1893: A World's Fair Mystery, several games by Howard Sherman published as Malinche Entertainment, The General Coffee Company's Future Boy!, Cypher, a graphically enhanced cyberpunk game, and various titles by Textfyre. Emily Short was commissioned to develop the game City of Secrets, but the project fell through and she ended up releasing it herself. Notable works The games that won both the Interactive Fiction Competition and the XYZZY Awards are All Roads (2001), Slouching Towards Bedlam (2003), Vespers (2005), Lost Pig (2007), Violet (2008), Aotearoa (2010), Coloratura (2013), and The Wizard Sniffer (2017). Software The original interactive fiction, Colossal Cave Adventure, was programmed in Fortran, a language originally developed by IBM. Adventure's parser could only handle two-word sentences in the form of verb-noun pairs. Infocom's games of 1979–88, such as Zork, were written using a LISP-like programming language called ZIL (Zork Implementation Language or Zork Interactive Language; it was referred to as both) that compiled into a byte code able to run on a standardized virtual machine called the Z-machine. As the games were text-based and used variants of the same Z-machine interpreter, the interpreter only had to be ported to a computer once, rather than once per game. Each game file included a sophisticated parser which allowed the user to type complex instructions to the game.
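The port-once economics described above (each game ships as platform-neutral byte code, and only the interpreter must be rewritten for each machine) can be sketched with a toy virtual machine. The opcodes and story-file layout here are invented for illustration and are far simpler than real Z-machine byte code:

```python
# Toy sketch of the story-file / interpreter split. The opcodes are
# invented for illustration; real Z-machine byte code is far richer.
PUSH, PRINT, HALT = 0, 1, 2

def run(story):
    """The interpreter: the only piece that must be ported per platform."""
    strings, code = story["strings"], story["code"]
    stack, output, pc = [], [], 0
    while True:
        op = code[pc]
        pc += 1
        if op == PUSH:        # push a string from the story's string table
            stack.append(strings[code[pc]])
            pc += 1
        elif op == PRINT:     # pop the top of the stack and "print" it
            output.append(stack.pop())
        elif op == HALT:
            return output

# A "story file" is pure data, identical on every platform.
story = {
    "strings": ["West of House", "You are standing in an open field."],
    "code": [PUSH, 0, PRINT, PUSH, 1, PRINT, HALT],
}
print(run(story))  # ['West of House', 'You are standing in an open field.']
```

Porting this design to a new computer means reimplementing only `run`; every existing story file then works unchanged, which is why Infocom could ship each title on so many machines at once.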
Unlike earlier works of interactive fiction, which only understood commands of the form 'verb noun', Infocom's parser could understand a wider variety of sentences. For instance, one might type "open the large door, then go west", or "go to the hall". With the Z-machine, Infocom was able to release most of their games for most popular home computers of the time simultaneously, including the Apple II, Atari 8-bit computers, IBM PC compatibles, Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Plus/4, Commodore 128, Kaypro CP/M, TI-99/4A, Macintosh, Atari ST, Amiga, and TRS-80. During the 1990s, interactive fiction was mainly written with C-like languages, such as TADS 2 and Inform 6. A number of systems for writing interactive fiction now exist. The most popular remain Inform, TADS, and ADRIFT, but they diverged in their approaches to IF writing during the 2000s, giving today's IF writers a genuine choice. By the 2006 IFComp, most games were written for Inform, with a strong minority of games for TADS and ADRIFT, followed by a small number of games for other systems. While familiarity with a programming language leads many new authors to attempt to produce their own complete IF application, most established IF authors recommend using a specialised IF language, arguing that such systems allow authors to avoid the technicalities of producing a full-featured parser while enjoying broad community support. The choice of authoring system usually depends on the author's desired balance of ease of use versus power, and on the portability of the final product. Several other development systems also exist. Interpreters are the software used to play the works of interactive fiction created with a development system. Since they need to interact with the player, the "story files" created by development systems are programs in their own right.
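The contrast drawn above between two-word 'verb noun' parsers and full-sentence input can be illustrated with a short sketch. This is purely illustrative code, not the logic of any actual parser; the article-stripping and command-splitting rules are assumptions made for the example:

```python
# Illustrative sketch of two parsing styles; not any real parser's logic.
ARTICLES = {"a", "an", "the", "my"}

def parse_two_word(command):
    """Early style: reduce the input to a single verb-noun pair."""
    words = [w for w in command.lower().split() if w not in ARTICLES]
    if not words:
        return None
    noun = words[-1] if len(words) > 1 else None
    return (words[0], noun)

def split_commands(line):
    """Later style: break compound input into one command per step."""
    line = line.lower().replace(" then ", ". ").replace(" and ", ". ")
    return [part.strip() for part in line.split(".") if part.strip()]

# A two-word parser collapses a rich request to verb + last noun...
print(parse_two_word("put the blue book on the writing desk"))  # ('put', 'desk')

# ...while command splitting at least preserves each step:
print(split_commands("pick up the gem and put it in my bag"))
# ['pick up the gem', 'put it in my bag']
```

The first function shows why "put the blue book on the writing desk" is hopeless for a verb-noun parser: everything between the verb and the final noun is discarded.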
Rather than running directly on any one computer, they are run by interpreters, or virtual machines, designed specially for IF. These may be part of the development system, or can be compiled together with the work of fiction as a standalone executable file. The Z-machine was designed by the founders of Infocom in 1979. They were influenced by the then-new idea of a virtual Pascal computer, but replaced P with Z for Zork, the celebrated adventure game of 1977–79. The Z-machine evolved during the 1980s, but more than 30 years later it remains in use essentially unchanged. Glulx was designed by Andrew Plotkin in the late 1990s as a new-generation IF virtual machine. It overcomes the Z-machine's technical constraints by being 32-bit rather than 16-bit. Frotz is a modern Z-machine interpreter originally written in C by Stefan Jokisch in 1995 for MS-DOS. Over time it was ported to other platforms, such as Unix, RISC OS, Mac OS and, most recently, iOS. Modern Glulx interpreters are based on "Glulxe", by Andrew Plotkin, and "Git", by Iain Merrick. Other interpreters include Zoom for Mac OS X, Unix, and Linux, maintained by Andrew Hunter, and Spatterlight for Mac OS X, maintained by Tor Andersson. In addition to commercial distribution venues and individual websites, many works of free interactive fiction are distributed through community websites. These include the Interactive Fiction Database (IFDb), The Interactive Fiction Reviews Organization (IFRO), a game catalog and recommendation engine, and the Interactive Fiction Archive. Works may be distributed for play in a separate interpreter, in which case they are often made available in the Blorb package format that many interpreters support. A filename ending in .zblorb is a story file intended for a Z-machine in a Blorb wrapper, while a filename ending in .gblorb is a story file intended for Glulx in a Blorb wrapper.
Less commonly, IF files are also seen without a Blorb wrapper, though this usually means cover art, help files, and so forth are missing, like a book with its covers torn off. Z-machine story files usually have names ending in .z5 or .z8, the number being a version number, and Glulx story files usually end in .ulx. Alternatively, works may be distributed for playing in a web browser. For example, the 'Parchment' project provides a web browser-based IF interpreter for both Z-machine and Glulx files. Some software, such as Twine, publishes directly to HTML, the standard language used to create web pages, reducing the need for an interpreter or virtual machine. See also Notes Further reading External links
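The naming conventions above associate each story-file extension with the virtual machine that plays it, which can be summarised in a small lookup. The extension table follows the conventions named in the text; the function itself is a hypothetical helper, not any real tool's API:

```python
# Map story-file extensions (as described above) to their virtual machine.
# The function name and labels are illustrative, not any real tool's API.
EXTENSIONS = {
    ".zblorb": "Z-machine (Blorb-wrapped)",
    ".gblorb": "Glulx (Blorb-wrapped)",
    ".z5": "Z-machine",
    ".z8": "Z-machine",
    ".ulx": "Glulx",
}

def story_format(filename):
    """Return the interpreter family for a story file, by extension."""
    name = filename.lower()
    for ext, vm in EXTENSIONS.items():
        if name.endswith(ext):
            return vm
    return "unknown"

print(story_format("curses.z5"))       # Z-machine
print(story_format("advent.ulx"))      # Glulx
print(story_format("lostpig.gblorb"))  # Glulx (Blorb-wrapped)
```

An interpreter launcher would use a table like this to decide whether to hand a file to a Z-machine or a Glulx engine before opening it.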