[SOURCE: https://en.wikipedia.org/wiki/Italian_language]
Italian language Italian (italiano, pronounced [itaˈljaːno], or lingua italiana, pronounced [ˈliŋɡwa itaˈljaːna]) is a Romance language of the Indo-European language family. It evolved from the Vulgar Latin of the Roman Empire and, together with Sardinian, is the Romance language least differentiated from Latin. Current estimates indicate that between 68 and 85 million people speak Italian, including approximately 64 million native speakers as of 2024. Italian is an official language in Italy, San Marino, Switzerland (Ticino and part of the Grisons), and Vatican City, and it has official minority status in Croatia, Slovenia (Istria), Romania, Bosnia and Herzegovina, and in six municipalities of Brazil. It is also spoken in other European and non-EU countries, most notably in Malta (by 66% of the population), Albania (upwards of 70%), and Monaco, as well as by large immigrant and expatriate communities in the Americas, Australia, and on other continents. Italian is a major language in Europe, being one of the official languages of the Organization for Security and Co-operation in Europe and one of the working languages of the Council of Europe. It is the third-most-widely spoken native language in the European Union (13% of the EU population) and it is spoken as a second language by 13 million EU citizens (3%). Italian is the main working language of the Holy See, serving as the lingua franca in the Roman Catholic hierarchy and the official language of the Sovereign Military Order of Malta. Italian influence led to the development of derived languages and dialects worldwide. It is also widespread in various sectors and markets, with its loanwords used in arts, luxury goods, fashion, sports and cuisine; it is prominent in musical terminology and opera, and numerous Italian words referring to music have become international terms adopted into various languages worldwide, including English. Italian is considered a conservative Romance language in phonology, lexicon, and morphology. Almost all native Italian words end with vowels, and the language has a seven-vowel sound system ("e" and "o" have mid-low and mid-high sounds). Italian has a contrast between short and long consonants, realized as gemination (doubling) of consonants. History The Italian language has developed through a long and slow process, which began after the Western Roman Empire's fall and the onset of the Middle Ages in the 5th century. Latin, the predominant language of the western Roman Empire, remained the established written language in Europe during the Middle Ages, although most people were illiterate. Over centuries, the Vulgar Latin popularly spoken in various areas of Europe—including the Italian peninsula—evolved into local varieties, or dialects, unaffected by formal standards and teachings. These varieties are not in any sense "dialects" of standard Italian, which itself started off as one of these local tongues; rather, they are sister languages of Italian. The Latin-speaking class referred to the collective Romance vernaculars of Europe as Romanz, Romance, or, in Italy, Romanzo or Volgare. The linguistic and historical demarcations between late Vulgar Latin and early Romance varieties in Italy are imprecise. 
The earliest surviving texts that can definitely be called vernacular (as distinct from its predecessor Vulgar Latin) are legal formulae known as the Placiti Cassinesi from the province of Benevento that date from 960 to 963, although the Veronese Riddle, probably from the 8th or early 9th century, contains a late form of Vulgar Latin that can be seen as a very early sample of a vernacular dialect of Italy. The Commodilla catacomb inscription likewise probably dates to the early 9th century and appears to reflect a language somewhere between late Vulgar Latin and early vernacular. The language that came to be thought of as Italian developed in central Tuscany and was first formalized in the early 14th century through the works of Tuscan writer Dante Alighieri, written in his native Florentine. Dante's epic poems, known collectively as the Commedia, to which another Tuscan poet Giovanni Boccaccio later affixed the title Divina, were read throughout the Italian peninsula. His written vernacular became the touchstone for elaborating a "canonical standard" that all educated Italians could understand. The poetry of Petrarch was also widely admired and influential in the development of the literary language, and would be identified as a model for vernacular writing by Pietro Bembo in the 16th century. In addition to the widespread exposure gained through literature, Florentine also gained prestige due to the political and cultural significance of Florence at the time and the fact that it was linguistically a middle way between the northern and the southern Italian dialects. Italian was progressively made an official language of most of the Italian states predating unification, slowly replacing Latin, even when ruled by foreign powers (such as Spain in the Kingdom of Naples, or Austria in the Kingdom of Lombardy–Venetia), although the masses kept speaking primarily their local vernaculars. Italian was also one of the many recognised languages in the Austro-Hungarian Empire. Italy has always had a distinctive dialect for each city because the cities, until recently, were thought of as city-states. Those dialects now have considerable variety. As Tuscan-derived Italian came to be used throughout Italy, features of local speech were naturally adopted, producing various versions of Regional Italian. The most characteristic differences, for instance, between Roman Italian and Milanese Italian are syntactic gemination of initial consonants in some contexts and the pronunciation of stressed "e", and of "s" between vowels in many words: e.g. va bene 'all right' is pronounced [vabˈbɛːne] by a Roman (and by any standard Italian speaker), [vaˈbeːne] by a Milanese (and by any speaker whose native dialect lies to the north of the La Spezia–Rimini Line); a casa 'at home' is [akˈkaːsa] for Roman, [akˈkaːsa] or [akˈkaːza] for standard, [aˈkaːza] for Milanese and generally northern. In contrast to the Gallo-Italic linguistic panorama of northern Italy, the Italo-Dalmatian, Neapolitan and its related dialects were largely unaffected by the Franco-Occitan influences introduced to Italy mainly by bards from France during the Middle Ages, but after the Norman conquest of southern Italy, Sicily became the first Italian land to adopt Occitan lyric moods (and words) in poetry. Even in the case of northern Italian languages, however, scholars are careful not to overstate the effects of outsiders on the natural indigenous developments of the languages. 
The economic might and relatively advanced development of Tuscany at the time (Late Middle Ages) gave its language weight, although Venetian remained widespread in medieval Italian commercial life, and Ligurian (or Genoese) remained in use in maritime trade throughout the Mediterranean. The increasing political and cultural relevance of Florence during the periods of the rise of the Medici Bank, humanism, and the Renaissance made its dialect, or rather a refined version of it, a standard in the arts. The Renaissance era, known as il Rinascimento in Italian, was seen as a time of rebirth, which is the literal meaning of both renaissance (from French) and rinascimento (Italian). Among its many manifestations, the Renaissance saw a reinvigorated interest in both classical antiquity and vernacular literature. Advancements in technology played a crucial role in the diffusion of the Italian language. The printing press was invented in the 15th century, and its use spread rapidly. By the year 1500, there were 56 printing presses in Italy, more than anywhere else in Europe. The printing press enabled the production of literature and documents in higher volumes and at lower cost, further accelerating the spread of Italian. Italian became the language used in the courts of every state in the Italian peninsula, and the prestige variety used on the island of Corsica (but not in neighbouring Sardinia, which by contrast did not undergo Italianization until well into the late 18th century, under Savoyard sway: the island's linguistic composition, overlaid by the prestige of Spanish among the Sardinians, made for a rather slow process of assimilation to the Italian cultural sphere). The rediscovery of Dante's De vulgari eloquentia, and a renewed interest in linguistics in the 16th century, sparked a debate that raged throughout Italy concerning the criteria that should govern the establishment of a modern Italian literary and spoken language. This discussion, known as questione della lingua (i.e., the problem of the language), ran through Italian culture until the end of the 19th century, often linked to the political debate on achieving a united Italian state. Renaissance scholars divided into three main factions over this question, while a fourth faction claimed that the best Italian was the one that the papal court adopted, which was a mixture of the Tuscan and Roman dialects. Eventually, Bembo's ideas prevailed, and the foundation of the Accademia della Crusca in Florence (1582–1583), the official legislative body of the Italian language, led to the publication of Agnolo Monosini's Latin tome Floris italicae linguae libri novem in 1604, followed by the first Italian dictionary in 1612. An important event that helped the diffusion of Italian was the conquest and occupation of Italy by Napoleon (himself of Italian-Corsican descent) in the early 19th century. This conquest propelled the unification of Italy some decades later and pushed the Italian language into the status of a lingua franca, used not only among clerks, nobility, and functionaries in the Italian courts, but also by the bourgeoisie. Today Italy has reached linguistic unity and an overwhelming majority of its 56 million citizens speak Italian. Many dialects are still alive, spoken especially by the older generations. Today, Italian is one of the most studied foreign languages in the world. 
The publication of Italian literature's first modern novel, I promessi sposi (The Betrothed) by Alessandro Manzoni, both reflected and furthered the growing trend towards Italian as a national standard language. Manzoni, a Milanese, chose to write the book in the Florentine dialect, describing this choice, in the preface to his 1840 edition, as "rinsing" his Milanese "in the waters of the Arno" (Florence's river). The novel is commonly described as "the most widely read work in the Italian language". It became a model for subsequent Italian literary fiction, helping to galvanize national linguistic unity around the Florentine dialect. This growth was initially modest, and linguistic diversity persisted during the unification of Italy (1848–1871). The Italian linguist Tullio De Mauro estimated that only 2.5% of Italy's population could speak the standardized Italian language properly in 1861, while Arrigo Castellani put the figure at 10%. Classification Italian is a Romance language, a descendant of Vulgar Latin (colloquial spoken Latin). Standard Italian is based on Tuscan, especially its Florentine dialect, and is, therefore, an Italo-Dalmatian language, a classification that includes most other central and southern Italian languages and the extinct Dalmatian. As in most Romance languages, stress is distinctive in Italian. According to Ethnologue, lexical similarity is 89% with French, 87% with Catalan, 85% with Sardinian, 82% with Spanish, 82% with Portuguese, 78% with Ladin, 77% with Romanian. Estimates may differ according to sources. A 1949 study by the linguist Mario Pei concluded that out of seven Romance languages, Italian's stressed vowel phonology was the second-closest to that of Classical Latin (after Logudorese Sardinian). The study emphasized, however, that it represented only "a very elementary, incomplete and tentative demonstration" of how statistical methods could measure linguistic change, assigned "frankly arbitrary" point values to various types of change, and did not compare languages in the sample with respect to any characteristics or forms of divergence other than stressed vowels, among other caveats. Geographic distribution Italian is the official language of Italy and San Marino and is spoken fluently by the majority of the two countries' populations. Italian is the third most spoken language in Switzerland (after German and French; see Swiss Italian). It is official both at the national level and at the regional level in two cantons: Ticino and Grisons. Ticino, which includes Lugano, the largest Italian-speaking city outside Italy, is the only canton where Italian is predominant. Italian is also used in administration and official documents in Vatican City. Italian is also spoken by a minority in Monaco and France, especially in the southeastern part of France. Italian was the official language in Savoy and in Nice until 1860, when they were both annexed by France under the Treaty of Turin, a development that triggered the "Niçard exodus", or the emigration of a quarter of the Niçard Italians to Italy, and the Niçard Vespers. Giuseppe Garibaldi complained about the referendum that allowed France to annex Savoy and Nice, and a group of his followers (among the Italian Savoyards) took refuge in Italy in the following years. Corsica passed from the Republic of Genoa to France in 1769 after the Treaty of Versailles. Italian was the official language of Corsica until 1859. 
Giuseppe Garibaldi called for the inclusion of the "Corsican Italians" within Italy when Rome was annexed to the Kingdom of Italy, but King Victor Emmanuel II did not agree to it. Italian is generally understood in Corsica by the resident population, who speak Corsican, an Italo-Romance language similar to Tuscan. Francization occurred in the case of Nice and caused the near-disappearance of the Italian language there, as many of the Italian speakers in the area migrated to Italy. In Corsica, on the other hand, almost everyone still speaks the Corsican language, which, due to its linguistic proximity to the Italian standard language, appears linguistically as an Italian dialect and therefore as a carrier of Italian culture, despite the French government's decades-long efforts to cut Corsica off from the Italian motherland. Italian was the official language in Monaco until 1860, when it was replaced by French. This was due to the annexation of the surrounding County of Nice to France following the Treaty of Turin (1860). It formerly had official status in Montenegro (because of Venetian Albania), parts of Slovenia and Croatia (because of Venetian Istria and Venetian Dalmatia), and parts of Greece (because of Venetian rule in the Ionian Islands and rule by the Kingdom of Italy in the Dodecanese). Italian is widely spoken in Malta, where nearly two-thirds of the population can speak it fluently (see Maltese Italian). Italian served as Malta's official language until 1934, when it was abolished by the British colonial administration amid strong local opposition. The Italian language is an officially recognised minority language in Slovenia. The official census, carried out in 2002, reported 2,258 ethnic Italians (Istrian Italians) in Slovenia (0.11% of the total population). In Croatia, Italian is an official minority language, with many schools and public announcements published in both languages. The 2001 census in Croatia reported 19,636 ethnic Italians (Istrian Italians and Dalmatian Italians) in the country (some 0.42% of the total population). Their numbers dropped dramatically after World War II following the Istrian–Dalmatian exodus, which caused the emigration of between 230,000 and 350,000 Istrian Italians and Dalmatian Italians. Italian was the official language of the Republic of Ragusa from 1492 to 1807. It formerly had official status in Albania due to the annexation of the country to the Kingdom of Italy (1939–1943). Albania has a large population of non-native speakers, with over half of the population having some knowledge of the Italian language. The Albanian government has pushed to make Italian a compulsory second language in schools. The Italian language is well known and studied in Albania, due to the country's historical ties and geographical proximity to Italy and to the diffusion of Italian television there. Due to heavy Italian influence during the Italian colonial period, Italian is still understood by some in former colonies such as Libya. Although it had been the primary language in Libya since colonial rule, Italian greatly declined under the rule of Muammar Gaddafi, who expelled the Italian Libyan population and made Arabic the sole official language of the country. A few hundred Italian settlers returned to Libya in the 2000s. Italian was the official language of Eritrea during Italian colonisation. 
Italian is today used in commerce in Eritrea, and it is still spoken, especially among elders; besides that, Italian words are incorporated as loanwords in the main language spoken in the country (Tigrinya). The capital city of Eritrea, Asmara, still has several Italian schools, established during the colonial period. In the early 20th century, Eritrea was the country with the highest number of Italians abroad, and the Italian Eritrean population grew from 4,000 during World War I to nearly 100,000 at the beginning of World War II. In Asmara there are two Italian schools, the Istituto Italiano Statale Omnicomprensivo di Asmara (Italian primary school with a Montessori department) and the Liceo Sperimentale "G. Marconi" (Italian international senior high school). Italian was also introduced to Somalia through colonialism and was the sole official language of administration and education during the colonial period but fell out of use after government, educational and economic infrastructure were destroyed in the Somali Civil War. Italian is also spoken by large immigrant and expatriate communities in the Americas and Australia. Although over 17 million Americans are of Italian descent, only a little over one million people in the United States speak Italian at home. Nevertheless, an Italian-language media market does exist in the country. In Canada, Italian is the second most spoken non-official language when varieties of Chinese are not grouped together, with 375,645 people claiming Italian as their mother tongue in 2016. Italian immigrants to South America have also brought a presence of the language to that continent. According to some sources, Italian is the second most spoken language in Argentina after the official language of Spanish, although its number of speakers, mainly of the older generation, is decreasing. Italian bilingual speakers can be found scattered across the southeast and south of Brazil. In Venezuela, Italian is the most spoken language after Spanish and Portuguese, with around 200,000 speakers. In Uruguay, people who speak Italian as their home language make up 1.1% of the total population of the country. In Australia, Italian is the second most spoken foreign language after Chinese, with 1.4% of the population speaking it as their home language. The main Italian-language newspapers published outside Italy are L'Osservatore Romano (Vatican City), L'Informazione di San Marino (San Marino), the Corriere del Ticino and laRegione Ticino (Switzerland), La Voce del Popolo (Croatia), the Corriere d'Italia (Germany), L'italoeuropeo (United Kingdom), Passaparola (Luxembourg), America Oggi (United States), the Corriere Canadese and the Corriere Italiano (Canada), Il punto d'incontro (Mexico), L'Italia del Popolo (Argentina), Fanfulla (Brazil), Gente d'Italia (Uruguay), La Voce d'Italia (Venezuela), Il Globo (Australia), and La gazzetta del Sud Africa (South Africa). Italian is widely taught in many schools around the world. In the 21st century, technology also allows for the continual spread of the Italian language, as people have new ways to learn how to speak, read, and write languages at their own pace and at any given time. For example, the free website and application Duolingo has 4.94 million English speakers learning the Italian language. 
According to the Italian Ministry of Foreign Affairs, every year there are more than 200,000 foreign students who study the Italian language; they are distributed among the 90 Institutes of Italian Culture located around the world, the 179 Italian schools located abroad, and the 111 Italian lecturer sections belonging to foreign schools where Italian is taught as a language of culture. As of 2022, Australia had the highest number of students learning Italian in the world. This occurred because of support from the Italian community in Australia and the Italian Government, and also because of successful educational reform efforts led by local governments in Australia. From the late 19th to the mid-20th century, millions of Italians settled in Argentina, Uruguay, southern Brazil, and Venezuela, as well as in Canada and the United States, where they formed a physical and cultural presence. In some cases, colonies were established where variants of regional languages of Italy were used, and some of these communities continue to use the regional language. Examples are Rio Grande do Sul, Brazil, where Talian is used, and the town of Chipilo near Puebla, Mexico; each continues to use a derived form of Venetian dating back to the 19th century. Other examples are Cocoliche, an Italian–Spanish pidgin once spoken in Argentina and especially in Buenos Aires, and Lunfardo. The Rioplatense Spanish dialect of Argentina and Uruguay has thus been heavily influenced by both standard Italian and Italian regional languages. Starting in late medieval times, Latin was replaced as the primary commercial language in much of Europe and the Mediterranean by languages of Italy, especially Tuscan and Venetian. These varieties were consolidated during the Renaissance with the strength of Italy and the rise of humanism and the arts. Italy came to enjoy increasing artistic prestige within Europe. A mark of an educated gentleman was to make the Grand Tour, visiting Italy to see its great historical monuments and works of art. It was expected that the visitor would learn at least some Italian, understood as a language based on Florentine. In England, while the classical languages Latin and Greek were the first to be learned, Italian became the second most common modern language after French, a position it held until the late 18th century when it tended to be replaced by German. John Milton, for instance, wrote some of his early poetry in Italian. Within the Catholic Church, Italian is known by a large part of the ecclesiastical hierarchy and is used in substitution for Latin in some official documents. Italian loanwords continue to be used in most languages in matters of art and music (especially classical music including opera), in the design and fashion industries, in some sports such as football, and especially in culinary terms. Languages and dialects In Italy, almost all the other languages spoken as the vernacular—other than standard Italian and some languages spoken among immigrant communities—are often called "Italian dialects", a label that can be very misleading if it is understood to mean "dialects of Italian". The Romance dialects of Italy are local evolutions of spoken Latin that pre-date the establishment of Italian, and as such are sister languages to the Tuscan that was the historical source of Italian. They can be quite different from Italian and from each other, with some belonging to different linguistic branches of Romance. 
The only exceptions to this are twelve groups considered "historical language minorities", which are officially recognised as distinct minority languages by the law. On the other hand, Corsican (a language spoken on the French island of Corsica) is closely related to medieval Tuscan, from which standard Italian also derives. The differences in the evolution of Latin in the different regions of Italy can be attributed to the natural changes that all languages in regular use are subject to, and to some extent to the presence of three other types of languages: substrata, superstrata, and adstrata. The most prevalent were substrata (the language of the original inhabitants), as the Italian dialects were most probably simply Latin as spoken by native cultural groups. Superstrata and adstrata were both less important. Foreign conquerors of Italy that dominated different regions at different times left behind little to no influence on the dialects. Foreign cultures with which Italy engaged in peaceful relations, such as trade, had no significant influence either.: 19–20 Throughout Italy, regional varieties of standard Italian, called Regional Italian, are spoken. Regional differences can be recognised by various factors: the openness of vowels, the length of the consonants, and the influence of the local language (for example, in informal situations andà, annà and nare replace the standard Italian andare in the areas of Tuscany, Rome and Venice respectively for the infinitive 'to go'). There is no definitive date when the various Italian variants of Latin—including varieties that contributed to modern standard Italian—began to be distinct enough from Latin to be considered separate languages. One criterion for determining that two language variants are separate languages rather than variants of a single language is that they have evolved to the point of no longer being mutually intelligible. This diagnostic is effective when mutual intelligibility is minimal or absent (for example, between Romanian and Portuguese within Romance), but it fails in cases such as Spanish–Portuguese or Spanish–Italian, as educated native speakers of either pairing (particularly Spanish–Portuguese) can understand each other well if they choose to do so. The level of intelligibility is markedly lower between Italian and Spanish, and considerably higher between the Iberian sister languages Portuguese and Spanish, whose speakers can communicate with one another with remarkable ease, each speaking in their own native language without slang or jargon. Nevertheless, on the basis of accumulated differences in morphology, syntax, phonology, and to some extent lexicon, the first extant written evidence of Romance varieties of Italy that can no longer be considered Latin comes from the 9th and 10th centuries CE. These written sources demonstrate certain vernacular characteristics and sometimes explicitly mention the use of the vernacular in Italy. Full literary manifestations of the vernacular began to surface around the 13th century in the form of various religious texts and poetry.: 21 Although these are the first written records of Italian varieties separate from Latin, the spoken language had probably diverged long before the first written records appeared, since those who were literate generally wrote in Latin even if they spoke other Romance varieties in person. 
Throughout the 19th and 20th centuries, the use of standard Italian became increasingly widespread and was mirrored by a decline in the use of the dialects. An increase in literacy was one of the main driving factors (one can assume that only literates were capable of learning standard Italian, whereas those who were illiterate had access only to their native dialect). The percentage of literates rose from 25% in 1861 to 60% in 1911, and then on to 78.1% in 1951. Tullio De Mauro, an Italian linguist, has asserted that in 1861, only 2.5% of the population of Italy could speak standard Italian. He reports that in 1951, that percentage had risen to 87%. The ability to speak Italian did not necessarily mean that it was in everyday use, and most people (63.5%) still usually spoke their native dialects. In addition, other factors such as mass emigration, industrialization, and urbanization, and internal migrations after World War II, contributed to the proliferation of standard Italian. The Italians who emigrated during the Italian diaspora beginning in 1861 were often of the uneducated lower class, and thus the emigration had the effect of increasing the percentage of literates among those who remained back home in Italy, who often knew and understood the importance of standard Italian. A large percentage of those who had emigrated also eventually returned to Italy, often more educated than when they had left.: 35 Although use of the Italian dialects has declined in the modern era, as Italy unified under standard Italian and continues to do so, aided by mass media from newspapers to radio to television, diglossia is still frequently encountered in Italy and triglossia is not uncommon in emigrant communities among older speakers. Both situations normally involve some degree of code-switching and code-mixing. Phonology Italian has a seven-vowel system, consisting of /a, ɛ, e, i, ɔ, o, u/, and 23 consonants. Compared with most other Romance languages, Italian phonology is conservative, preserving many words nearly unchanged from Vulgar Latin. The conservative nature of Italian phonology is partly explained by its origin. Italian stems from a literary language that is derived from the 13th-century speech of the city of Florence in the region of Tuscany, and has changed little in the last 700 years or so. Furthermore, the Tuscan dialect is the most conservative of all Italian dialects, radically different from the Gallo-Italian languages less than 160 kilometres (100 mi) to the north (across the La Spezia–Rimini Line). The following are some of the conservative phonological features of Italian, as compared with the common Western Romance languages (French, Spanish, Portuguese, Galician, Catalan); some of these features are also present in Romanian. Compared with most other Romance languages, Italian has many inconsistent outcomes, where the same underlying sound produces different results in different words, e.g. laxāre > lasciare and lassare, captiāre > cacciare and cazzare, (ex)dēroteolāre > sdrucciolare, druzzolare and ruzzolare, rēgīna > regina and reina. Although in all these examples the second form has fallen out of usage, the dimorphism is thought to reflect the several-hundred-year period during which Italian developed as a literary language divorced from any native-speaking population, with an origin in 12th/13th-century Tuscan but with many words borrowed from languages farther to the north, with different sound outcomes. 
(The La Spezia–Rimini Line, the most important isogloss in the entire Romance-language area, passes only about 30 kilometres or 20 miles north of Florence.) Dual outcomes of Latin /p t k/ between vowels, such as lŏcvm > luogo but fŏcvm > fuoco, were once thought to be due to borrowing of northern voiced forms, but are now generally viewed as the result of early phonetic variation within Tuscany. Some other features distinguish Italian from the Western Romance languages, and standard Italian also differs in some respects from most nearby Italian languages. Italian phonotactics do not usually permit verbs and polysyllabic nouns to end with consonants, except in poetry and song, so foreign words may receive extra terminal vowel sounds. Writing system Italian has a shallow orthography, meaning very regular spelling with an almost one-to-one correspondence between letters and sounds. In linguistic terms, the writing system is close to being a phonemic orthography, with only a few important exceptions. The Italian alphabet is typically considered to consist of 21 letters. The letters j, k, w, x, y are traditionally excluded, although they appear in loanwords such as jeans, whisky, taxi, and xilofono. The letter ⟨x⟩ has become common in standard Italian with the prefix extra-, although (e)stra- is traditionally used; it is also common to use the Latin particle ex(-) to mean 'former(ly)' as in la mia ex ('my ex-girlfriend'), "Ex-Jugoslavia" ('Former Yugoslavia'). The letter ⟨j⟩ appears in the first name Jacopo and in some Italian place-names, such as Bajardo, Bojano, Joppolo, Jerzu, Jesolo, Jesi, Ajaccio, among others, and in Mar Jonio, an alternative spelling of Mar Ionio (the Ionian Sea). The letter ⟨j⟩ may appear in dialectal words, but its use is discouraged in contemporary standard Italian. Letters used in foreign words can be replaced with phonetically equivalent native Italian letters and digraphs: ⟨gi⟩, ⟨ge⟩, or ⟨i⟩ for ⟨j⟩; ⟨c⟩ or ⟨ch⟩ for ⟨k⟩ (including in the standard prefix kilo-); ⟨o⟩, ⟨u⟩ or ⟨v⟩ for ⟨w⟩; ⟨s⟩, ⟨ss⟩, ⟨z⟩, ⟨zz⟩ or ⟨cs⟩ for ⟨x⟩; and ⟨e⟩ or ⟨i⟩ for ⟨y⟩ (a schematic sketch of this mapping is given below). Italian has geminate, or double, consonants, which are distinguished by length and intensity. Length is distinctive for all consonants except for /ʃ/, /dz/, /ts/, /ʎ/, /ɲ/, which are always geminate when between vowels, and /z/, which is always single. Geminate plosives and affricates are realized as lengthened closures. Geminate fricatives, nasals, and /l/ are realized as lengthened continuants. There is only one vibrant phoneme /r/ but the actual pronunciation depends on the context and regional accent. Generally one can find a flap consonant [ɾ] in an unstressed position whereas [r] is more common in stressed syllables, but there may be exceptions. People from the northern part of Italy in particular (Parma, Aosta Valley, South Tyrol) may pronounce /r/ as [ʀ], [ʁ], or [ʋ]. Of special interest to the linguistic study of Regional Italian is the gorgia toscana, or "Tuscan throat", the weakening or lenition of intervocalic /p/, /t/, and /k/ in the Tuscan language. The voiced postalveolar fricative /ʒ/ is present as a phoneme only in loanwords: for example, garage [ɡaˈraːʒ]. Phonetic [ʒ] is common in central and southern Italy as an intervocalic allophone of /dʒ/: gente [ˈdʒɛnte] 'people' but la gente [laˈʒɛnte] 'the people', ragione [raˈʒoːne] 'reason'. 
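The letter-and-digraph substitutions for foreign letters noted earlier in this section amount to a small lookup table. The following Python snippet is only an illustrative sketch of that mapping, not material from the article: the function name and example words are hypothetical, and the choice among alternatives such as ⟨gi⟩/⟨ge⟩/⟨i⟩ is context-dependent in real usage, which this sketch does not model.

```python
# Illustrative sketch (hypothetical, not from the article): traditional
# replacements for the five letters outside the 21-letter Italian alphabet.
# Each foreign letter maps to its possible native substitutes; choosing the
# right one depends on context (surrounding vowels, sound value), which this
# toy mapping does not attempt to resolve.
FOREIGN_LETTER_SUBSTITUTES = {
    "j": ["gi", "ge", "i"],
    "k": ["c", "ch"],
    "w": ["o", "u", "v"],
    "x": ["s", "ss", "z", "zz", "cs"],
    "y": ["e", "i"],
}

def naive_italianize(word: str) -> str:
    """Replace each foreign letter with its first listed substitute (naive)."""
    return "".join(FOREIGN_LETTER_SUBSTITUTES.get(ch, [ch])[0] for ch in word.lower())

# 'kilo' comes out as 'cilo' here, while the standard prefix is actually
# 'chilo-': a reminder that the bare mapping does not capture contextual rules.
print(naive_italianize("taxi"), naive_italianize("kilo"))
```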
Grammar Italian grammar is typical of the grammar of Romance languages in general. Cases exist for personal pronouns (nominative, oblique, accusative, dative), but not for nouns. There are two basic classes of nouns in Italian, referred to as genders, masculine and feminine. Gender may be natural (ragazzo 'boy', ragazza 'girl') or simply grammatical with no possible reference to biological gender (masculine costo 'cost', feminine costa 'coast'). Masculine nouns typically end in -o (ragazzo 'boy'), with plural marked by -i (ragazzi 'boys'), and feminine nouns typically end in -a, with plural marked by -e (ragazza 'girl', ragazze 'girls'). For a group composed of boys and girls, ragazzi is the plural, suggesting that -i is a general neutral plural. A third category of nouns is unmarked for gender, ending in -e in the singular and -i in the plural: legge 'law, f. sg.', leggi 'laws, f. pl.'; fiume 'river, m. sg.', fiumi 'rivers, m. pl.'. Assignment of gender is thus arbitrary in terms of form, enough so that terms may be identical but of distinct genders: fine meaning 'aim, purpose' is masculine, while fine meaning 'end, ending' (e.g. of a movie) is feminine, and both are fini in the plural, a clear instance of -i as a non-gendered default plural marker. These nouns often, but not always, denote inanimates. There are a number of nouns that have a masculine singular and a feminine plural, most commonly of the pattern m. sg. -o, f. pl. -a (miglio 'mile, m. sg.', miglia 'miles, f. pl.'; paio 'pair, m. sg.', paia 'pairs, f. pl.'), and thus are sometimes considered neuter (these are usually derived from neuter Latin nouns); a schematic sketch of these plural patterns is given below. An instance of neuter gender also exists in pronouns of the third person singular. Nouns, adjectives, and articles inflect for gender and number (singular and plural). As in English, common nouns are capitalized when occurring at the beginning of a sentence. Unlike in English, nouns referring to languages (e.g. Italian) and adjectives pertaining to ethnicity are never capitalized; nouns for speakers of languages or inhabitants of an area (e.g. Italians) used always to be capitalized, but starting from the 19th century this convention has been subject to various changes. There are three types of adjectives: descriptive, invariable and form-changing. Descriptive adjectives are the most common, and their endings change to match the number and gender of the noun they modify. Invariable adjectives are adjectives whose endings do not change. The form-changing adjectives buono 'good', bello 'beautiful', grande 'big', and santo 'saint/holy' change in form when placed before different types of nouns. Italian has three degrees for comparison of adjectives: positive, comparative, and superlative. The order of words in the phrase is relatively free compared to most European languages. The position of the verb in the phrase is highly mobile. Word order often has a lesser grammatical function in Italian than in English. Adjectives are sometimes placed before their noun and sometimes after. Subject nouns generally come before the verb. Italian is a null-subject language, so nominative pronouns are usually absent, with subject indicated by verbal inflections (e.g. amo 'I love', ama '(s)he loves', amano 'they love'). Noun objects normally come after the verb, as do pronoun objects after imperative verbs, infinitives and gerunds, but otherwise, pronoun objects come before the verb. 
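Returning to the noun classes described earlier in this section, the sketch below encodes the three regular singular-to-plural patterns plus the small masculine-singular/feminine-plural class. It is only an illustrative assumption added for this rewrite, not material from the article, and real Italian has many irregular and invariable nouns that this toy rule set ignores.

```python
# Illustrative sketch of the regular plural patterns described above:
#   masculine  -o -> -i   (ragazzo -> ragazzi)
#   feminine   -a -> -e   (ragazza -> ragazze)
#   either     -e -> -i   (legge -> leggi, fiume -> fiumi)
# plus the small m. sg. -o / f. pl. -a class (miglio -> miglia, paio -> paia).
# Hypothetical helper; many irregular and invariable nouns are not handled.

IRREGULAR_O_A = {"miglio": "miglia", "paio": "paia"}  # m. sg. -> f. pl.

def pluralize(noun: str) -> str:
    if noun in IRREGULAR_O_A:
        return IRREGULAR_O_A[noun]
    if noun.endswith("o"):   # typically masculine
        return noun[:-1] + "i"
    if noun.endswith("a"):   # typically feminine
        return noun[:-1] + "e"
    if noun.endswith("e"):   # unmarked class, either gender
        return noun[:-1] + "i"
    return noun              # e.g. consonant-final loanwords stay invariable

for word in ("ragazzo", "ragazza", "legge", "fiume", "miglio"):
    print(word, "->", pluralize(word))
```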
There are both indefinite and definite articles in Italian. There are four indefinite articles, selected by the gender of the noun they modify and by the phonological structure of the word that immediately follows the article. Uno is masculine singular, used before z (/ts/ or /dz/), s+consonant, gn (/ɲ/), pn or ps, while masculine singular un is used before a word beginning with any other sound. The noun zio 'uncle' selects masculine singular, thus uno zio 'an uncle' or uno zio anziano 'an old uncle', but un mio zio 'an uncle of mine'. The feminine singular indefinite articles are una, used before any consonant sound, and its abbreviated form, written un', used before vowels: una camicia 'a shirt', una camicia bianca 'a white shirt', un'altra camicia 'a different shirt'. There are seven forms for definite articles, both singular and plural. In the singular: lo, which corresponds to the uses of uno; il, which corresponds to the uses of un before consonants; la, which corresponds to the uses of una; l', used for both masculine and feminine singular before vowels. In the plural: gli is the masculine plural of lo and l'; i is the plural of il; and le is the plural of feminine la and l' (a schematic sketch of these selection rules is given below). There are numerous contractions of prepositions with subsequent articles. There are numerous productive suffixes for diminutive, augmentative, pejorative, attenuating, etc., which are also used to create neologisms. There are 27 pronouns, grouped into clitic and tonic pronouns. Personal pronouns are separated into three groups: subject, object (which takes the place of both direct and indirect objects), and reflexive. Second-person subject pronouns have both a polite and a familiar form. These two types of address are very important in Italian social distinctions. All object pronouns have two forms: stressed and unstressed (clitics). Unstressed object pronouns are much more frequently used; they come before a conjugated verb (la vedi: 'you see her') and after (in writing, attached to) non-conjugated verbs (vedendola: 'seeing her'). Stressed object pronouns come after the verb, and are used when emphasis is required, for contrast, or to avoid ambiguity (vedo lui, ma non lei: 'I see him, but not her'). Aside from personal pronouns, Italian also has demonstrative, interrogative, possessive, and relative pronouns. There are two types of demonstrative pronouns: relatively near (this) and relatively far (that); there exists a third type of demonstrative denoting vicinity only to the listener, but it has fallen out of use. Demonstratives in Italian are repeated before each noun, unlike in English. 
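The indefinite and definite article choices described at the start of this section are driven by the noun's gender and by the onset of the following word, which makes them easy to approximate in code. The sketch below is a simplified, spelling-based illustration under the rules stated above; the helper names and example nouns are assumptions for illustration rather than material from the article, and finer onset cases (such as ⟨x⟩, ⟨y⟩, or semiconsonantal ⟨i⟩) are ignored.

```python
# Illustrative, spelling-based approximation of Italian article selection
# as described above. Hypothetical helpers, not taken from the article.
VOWELS = set("aeiou")

def special_masculine_onset(word: str) -> bool:
    """Onsets that select 'uno' / 'lo': z, s + consonant, gn, pn, ps."""
    w = word.lower()
    return (w.startswith("z")
            or (w.startswith("s") and len(w) > 1 and w[1] not in VOWELS)
            or w.startswith(("gn", "pn", "ps")))

def indefinite_article(word: str, feminine: bool) -> str:
    w = word.lower()
    if feminine:
        return "un'" if w[0] in VOWELS else "una"
    return "uno" if special_masculine_onset(w) else "un"

def definite_article(word: str, feminine: bool, plural: bool = False) -> str:
    w = word.lower()
    if not plural:
        if w[0] in VOWELS:
            return "l'"      # both genders before vowels
        if feminine:
            return "la"
        return "lo" if special_masculine_onset(w) else "il"
    if feminine:
        return "le"
    # masculine plural: gli replaces lo and l', i replaces il
    return "gli" if (w[0] in VOWELS or special_masculine_onset(w)) else "i"

# Elided forms (un', l') are printed with a space here for simplicity.
for noun, fem in (("zio", False), ("amico", False), ("camicia", True), ("altra", True)):
    print(indefinite_article(noun, fem), noun, "|", definite_article(noun, fem), noun)
```

Run over the sample nouns, this yields uno zio / lo zio, un amico / l'amico, una camicia / la camicia, and un'altra / l'altra, matching the prose examples above.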
"To be" may be used with transitive verbs, but in such a case it makes the verb passive (è detto, è fatto: 'it is said, it is made/done'). This rule is not absolute, and some exceptions do exist. Words Note: the plural form of verbs could also be used as an extremely formal (for example to noble people in monarchies) singular form (see royal we). Example text Article 1 of the Universal Declaration of Human Rights in Italian: Article 1 of the Universal Declaration of Human Rights in English: International Phonetic Alphabet transcription: Nobel Prizes for Italian language literature See also Notes References Bibliography External links |
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hemichordate]
Hemichordate Hemichordata (/ˌhɛmɪkɔːrˈdeɪtə/ HEM-ih-kor-DAY-tə) is a phylum which consists of triploblastic, eucoelomate, and bilaterally symmetrical marine deuterostome animals, generally considered the sister group of the echinoderms. They appear in the Lower or Middle Cambrian and include two main classes: Enteropneusta (acorn worms), and Pterobranchia. A third class, Planctosphaeroidea, is known only from the larva of a single species, Planctosphaera pelagica. The class Graptolithina, formerly considered extinct, is now placed within the pterobranchs, represented by a single living genus, Rhabdopleura. Acorn worms are solitary worm-shaped organisms. They generally live in burrows (the earliest secreted tubes) and are deposit feeders, but some species are pharyngeal filter feeders, while the family Torquaratoridae are free-living detritivores. Many are well known for their production and accumulation of various halogenated phenols and pyrroles. Pterobranchs are filter-feeders, mostly colonial, living in a collagenous tubular structure called a coenecium. The discovery of the stem group hemichordate Gyaltsenglossus shows that early hemichordates combined aspects of the two morphologically disparate classes. Anatomy The body plan of hemichordates is characterized by a tripartite organization. The anteroposterior axis is divided into three parts: the anterior prosome, the intermediate mesosome, and the posterior metasome. The body of acorn worms is worm-shaped and divided into an anterior proboscis, an intermediate collar, and a posterior trunk. The proboscis is a muscular and ciliated organ used in locomotion and in the collection and transport of food particles. The mouth is located between the proboscis and the collar. The trunk is the longest part of the animal. It contains the pharynx, which is perforated with gill slits (or pharyngeal slits), the oesophagus, a long intestine, and a terminal anus. It also contains the gonads. A post-anal tail is present in juvenile members of the acorn worm family Harrimaniidae. The prosome of pterobranchs is specialized into a muscular and ciliated cephalic shield used in locomotion and in secreting the coenecium. The mesosome extends into one pair (in the genus Rhabdopleura) or several pairs (in the genus Cephalodiscus) of tentaculated arms used in filter feeding. The metasome, or trunk, contains a looped digestive tract and gonads, and extends into a contractile stalk that connects individuals to the other members of the colony, produced by asexual budding. In the genus Cephalodiscus, asexually produced individuals stay attached to the contractile stalk of the parent individual until completing their development. In the genus Rhabdopleura, zooids are permanently connected to the rest of the colony via a common stolon system. Some species biomineralize in calcium carbonate. They have a diverticulum of the foregut called a stomochord, previously thought to be related to the chordate notochord, but this is most likely the result of convergent evolution rather than a homology. A hollow neural tube exists among some species (at least in early life), probably a primitive trait that they share with the common ancestor of Chordata and the rest of the deuterostomes. Hemichordates have a nerve net and longitudinal nerves, but no brain. They have an open circulatory system. The heart vesicle is located dorsally within the proboscis complex, and does not contain any blood. 
Instead it moves the blood indirectly by pulsating against the dorsal blood vessel. Development Together with the echinoderms, the hemichordates form the Ambulacraria, which are the closest extant phylogenetic relatives of chordates. Thus these marine worms are of great interest for the study of the origins of chordate development. There are several species of hemichordates, with a moderate diversity of embryological development among these species. Hemichordates are classically known to develop in two ways, both directly and indirectly. Hemichordates are a phylum composed of two classes, the enteropneusts and the pterobranchs, both being forms of marine worm. The enteropneusts have two developmental strategies: direct and indirect development. The indirect developmental strategy includes an extended pelagic planktotrophic tornaria larval stage, which means that this hemichordate exists in a larval stage that feeds on plankton before turning into an adult worm. The pterobranch genus most extensively studied is Rhabdopleura, from Plymouth, England and from Bermuda. The following details the development of two popularly studied species of the phylum Hemichordata: Saccoglossus kowalevskii and Ptychodera flava. Saccoglossus kowalevskii is a direct developer and Ptychodera flava is an indirect developer. Most of what has been detailed in hemichordate development has come from hemichordates that develop directly. P. flava's early cleavage pattern is similar to that of S. kowalevskii. The first and second cleavages from the single-cell zygote of P. flava are equal cleavages, orthogonal to each other, and both include the animal and vegetal poles of the embryo. The third cleavage is equal and equatorial so that the embryo has four blastomeres both in the vegetal and the animal pole. The fourth division occurs mainly in blastomeres in the animal pole, which divide transversally as well as equally to make eight blastomeres. The four vegetal blastomeres divide equatorially but unequally and they give rise to four big macromeres and four smaller micromeres. Once this fourth division has occurred, the embryo has reached a 16-cell stage. P. flava has a 16-cell embryo with four vegetal micromeres, eight animal mesomeres and four larger macromeres. Further divisions occur until P. flava finishes the blastula stage and goes on to gastrulation. The animal mesomeres of P. flava go on to give rise to the larva's ectoderm; animal blastomeres also appear to give rise to these structures, though the exact contribution varies from embryo to embryo. The macromeres give rise to the posterior larval ectoderm and the vegetal micromeres give rise to the internal endomesodermal tissues. Studies done on the potential of the embryo at different stages have shown that at both the two- and four-cell stages of development P. flava blastomeres can go on to give rise to a tornaria larva, so the fates of these embryonic cells do not seem to be established until after this stage. Eggs of S. kowalevskii are oval in shape and become spherical after fertilization. The first cleavage occurs from the animal to the vegetal pole and is usually equal, though it can often be unequal. The second cleavage, which brings the embryo to the four-cell stage, also occurs from the animal to the vegetal pole in an approximately equal fashion, though, as with the first cleavage, an unequal division is possible. The cleavage to the eight-cell stage is latitudinal, so that each cell from the four-cell stage goes on to make two cells. 
The fourth division occurs first in the cells of the animal pole, which end up making eight blastomeres (mesomeres) that are not radially symmetric; then the four vegetal-pole blastomeres divide to make a level of four large blastomeres (macromeres) and four very small blastomeres (micromeres). The fifth cleavage occurs first in the animal cells and then in the vegetal cells to give a 32-cell embryo. The sixth cleavage occurs in a similar order and completes a 64-cell stage; finally, the seventh cleavage marks the end of the cleavage period with a blastula of 128 blastomeres. This structure then goes through gastrulation movements which determine the body plan of the resulting gill-slit larva; this larva will ultimately give rise to the marine acorn worm. Much of the genetic work done on hemichordates has been done to make comparisons with chordates, so many of the genetic markers identified in this group are also found in chordates or are homologous to chordates in some way. Studies of this nature have been done particularly on S. kowalevskii, and like chordates S. kowalevskii has dorsalizing bmp-like factors such as bmp 2/4, which is homologous to Drosophila's decapentaplegic dpp. The expression of bmp2/4 begins at the onset of gastrulation on the ectodermal side of the embryo, and as gastrulation progresses its expression is narrowed down to the dorsal midline but is not expressed in the post-anal tail. The bmp antagonist chordin is also expressed in the endoderm of gastrulating S. kowalevskii. Besides these well-known dorsalizing factors, further molecules known to be involved in dorsal-ventral patterning are also present in S. kowalevskii, such as a netrin that groups with netrin gene classes 1 and 2. Netrin is important in patterning the neural system in chordates, as is the molecule Shh, but S. kowalevskii was found to have only one hh gene, and it appears to be expressed in a region unlike the ventral midline where it is usually expressed in developing chordates. Classification Hemichordata are divided into two classes: the Enteropneusta, commonly called acorn worms, and the Pterobranchia, which includes the graptolites. A third class, Planctosphaeroidea, is proposed based on a single species known only from larvae. The phylum contains about 120 living species. Hemichordata appears to be sister to the Echinodermata as Ambulacraria; Xenoturbellida may be basal to that grouping. Pterobranchia may be derived from within Enteropneusta, making Enteropneusta paraphyletic. It is possible that the extinct organism Etacystis is a member of the Hemichordata, either within or with close affinity to the Pterobranchia. There are 130 described species of Hemichordata and many new species are being discovered, especially in the deep sea. [Phylogenetic tree showing the position of the hemichordates among the deuterostomes; taxa shown: Cephalochordata, Tunicata, Vertebrata/Craniata, Echinodermata, Hemichordata.] The internal relationships within the hemichordates are shown below. The tree is based on 16S + 18S rRNA sequence data and phylogenomic studies from multiple sources. [Tree of internal hemichordate relationships; taxa shown: Stereobalanus, Harrimaniidae, Spengeliidae, Torquaratoridae, Ptychoderidae, Cephalodiscida, Rhabdopleurida, †Dendroidea, †Graptoloidea.]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Synchronicity]
Synchronicity Synchronicity (German: Synchronizität) is a concept introduced by Carl Jung, founder of analytical psychology, to describe events that coincide in time and appear meaningfully related, yet lack a discoverable causal connection. Jung held that this was a healthy function of the mind, although it can become harmful within psychosis. Jung developed the theory as a hypothetical noncausal principle serving as the intersubjective or philosophically objective connection between these seemingly meaningful coincidences. After coining the term in the late 1920s, Jung developed the concept with physicist Wolfgang Pauli through correspondence and in their 1952 work The Interpretation of Nature and the Psyche. This culminated in the Pauli–Jung conjecture. Jung and Pauli's view was that, just as causal connections can provide a meaningful understanding of the psyche and the world, so too may acausal connections. A 2016 study found that 70% of therapists agreed synchronicity experiences could be useful for therapy. Analytical psychologists hold that individuals must understand the compensatory meaning of these experiences to "enhance consciousness rather than merely build up superstitiousness". However, clients who disclose synchronicity experiences report not being listened to, accepted, or understood. The experience of an overabundance of meaningful coincidences can be characteristic of schizophrenic delusion. Jung used synchronicity in arguing for the existence of the paranormal. This idea was explored by Arthur Koestler in The Roots of Coincidence and taken up by the New Age movement. Unlike magical thinking, which holds causally unrelated events to have a paranormal causal connection, synchronicity supposes that events may be causally unrelated yet have an unknown noncausal connection. The objection from a scientific standpoint is that this is neither testable nor falsifiable, so it does not fall within empirical study. Scientific scepticism regards it as pseudoscience. Jung stated that synchronicity events are chance occurrences from a statistical point of view, but meaningful in that they may seem to validate paranormal ideas. No empirical studies of synchronicity based on observable mental states and scientific data were conducted by Jung to draw his conclusions, though studies have since been done (see § Studies). While someone may experience a coincidence as meaningful, this alone cannot prove objective meaning in the coincidence. Statistical laws of probability show how unexpected occurrences can be inevitable, or more likely to be encountered than people assume. These explain coincidences such as synchronicity experiences as chance events that have been misinterpreted through confirmation bias, spurious correlations, or underestimated probability. Origins Synchronicity arose with Jung's use of the ancient Chinese divination text I Ching. It has 64 hexagrams, each built from two trigrams or bagua. A divination is made by seemingly random numerical happenings for which the I Ching text gives detailed situational analysis. Richard Wilhelm, a translator of Chinese, provided Jung with validation. Jung met Wilhelm in Darmstadt, Germany, where Hermann von Keyserling hosted the Gesellschaft für Freie Philosophie. In 1923 Wilhelm was in Zurich, as was Jung, attending the psychology club, where Wilhelm promulgated the I Ching. The I Ching was eventually published with Wilhelm's commentary. 
I instantly obtained the book and found to my gratification that Wilhelm took much the same view of the meaningful connections as I had. But he knew the entire literature and could therefore fill in the gaps which had been outside my competence. — Aniela Jaffé (1962), Memories, Dreams, Reflections of C. G. Jung, page 374 Jung coined the term synchronicity as part of a lecture in May 1930, or as early as 1928, at first for use in discussing Chinese religious and philosophical concepts. His first public articulation of the term came in 1930 at the memorial address for Richard Wilhelm, where Jung stated: The science [i.e. cleromancy] of the I Ching is based not on the causality principle but on one which—hitherto unnamed because not familiar to us—I have tentatively called the synchronistic principle. The I Ching is one of the Five Classics of Confucianism. When a passage is selected according to traditional chance operations such as tossing coins or counting out yarrow stalks, the text is supposed to give insights into a person's inner states. Jung characterised this as the belief in synchronicity, and himself believed the text to give apt readings in his own experiences. He would later also recommend this practice to certain of his patients. Jung argued that synchronicity could be found diffused throughout Chinese philosophy more broadly and in various Taoist concepts. Jung also drew heavily on the German philosophers Gottfried Leibniz and Arthur Schopenhauer, as well as Johannes Kepler. Leibniz's own exposure to I Ching divination in the 17th century was the primary precursor to the theory of synchronicity in the West, and Jung placed Leibniz and Schopenhauer alongside each other as the two philosophers most influential on his formulation of the concept. He points to Schopenhauer, especially, as providing an early conception of synchronicity in the quote: All the events in a man's life would accordingly stand in two fundamentally different kinds of connection: firstly, in the objective, causal connection of the natural process; secondly, in a subjective connection which exists only in relation to the individual who experiences it, and which is thus as subjective as his own dreams[.] — Arthur Schopenhauer, "Transcendent Speculation on the Apparent Deliberateness in the Fate of the Individual", Parerga and Paralipomena (1851), Volume 1, Chapter 4, trans. E. F. J. Payne As with Paul Kammerer's theory of seriality developed in the late 1910s, Jung looked to hidden structures of nature for an explanation of coincidences. In 1932, physicist Wolfgang Pauli and Jung began what would become an extended correspondence in which they discussed and collaborated on various topics surrounding synchronicity, contemporary science, and what is now known as the Pauli effect. Jung also built heavily upon the idea of numinosity, a concept originating in the work of German religious scholar Rudolf Otto which describes the feeling of gravitas found in religious experiences; it was perhaps this element that brought the greatest criticism upon Jung's theory. Jung also drew from parapsychologist J. B. Rhine, whose work in the 1930s appeared at the time to validate certain claims about extrasensory perception. It was not until a 1951 Eranos conference lecture, after having gradually developed the concept for over two decades, that Jung gave his first major outline of synchronicity. 
The following year, Jung and Pauli published their 1952 work The Interpretation of Nature and the Psyche (German: Naturerklärung und Psyche), which contained Jung's central monograph on the subject, "Synchronicity: An Acausal Connecting Principle". Other notable influences and precursors to synchronicity can be found in the theological concept of correspondences, sympathetic magic, astrology, and alchemy. The Pauli–Jung conjecture is a collaboration in metatheory between physicist Wolfgang Pauli and analytical psychologist Carl Jung, centered on the concept of synchronicity. It was mainly developed between 1946 and 1954, four years before Pauli's death, and speculates on a double-aspect perspective within the disciplines of both collaborators. Pauli additionally drew on various elements of quantum theory such as complementarity, nonlocality, and the observer effect in his contributions to the project. Jung and Pauli thereby "offered the radical [...] idea that the currency of these correlations is not (quantitative) statistics, as in quantum physics, but (qualitative) meaning". Contemporary physicist T. Filk writes that quantum entanglement, being "a particular type of acausal quantum correlations", was plausibly taken by Pauli as "a model for the relationship between mind and matter in the framework [...] he proposed together with Jung". Specifically, quantum entanglement may be the physical phenomenon which most closely represents the concept of synchronicity. Analytical psychology In analytical psychology, the recognition of seemingly-meaningful coincidences is a mechanism by which unconscious material is brought to the attention of the conscious mind. A harmful or developmental outcome can then result only from the individual's response to such material. Jung proposed that the concept could have psychiatric use in mitigating the negative effects of over-rationalisation and proclivities towards mind–body dualism. Analytical psychology considers modern modes of thought to rest upon the pre-modern and primordial structures of the psyche. Causal connections thus form the basis of modern worldviews, and connections which lack causal reasoning are seen as chance. This chance-based interpretation, however, is incongruent with the primordial mind, which instead interprets this category as intention. The primordial framework in fact places emphasis on these connections, just as the modern framework emphasizes causal ones. In this regard, causality, like synchronicity, is a human interpretation imposed onto external phenomena. Primordial modes of thought are, however, according to Jung, necessary constituents of the modern psyche that inevitably protrude into modern life—providing the basis for meaningful interpretation of the world by way of meaning-based connections. Just as the principles of psychological causality provide meaningful understanding of causal connections, so too the principle of synchronicity attempts to provide meaningful understanding of acausal connections. Jung placed synchronicity as one of the three main conceptual elements in his understanding of the psyche. He felt synchronicity to be a principle that had explanatory power towards his concepts of archetypes and the collective unconscious.[i] It described a governing dynamic which underlies the whole of human experience and history—social, emotional, psychological, and spiritual. The emergence of the synchronistic paradigm was a significant move away from Cartesian dualism towards an underlying philosophy of double-aspect theory.
Some argue this shift was essential in bringing theoretical coherence to Jung's earlier work.[ii] Philosophy of science Jung held that there was both a philosophical and scientific basis for synchronicity. He identified the complementary nature of causality and acausality with Eastern sciences and protoscientific disciplines, stating "the East bases much of its science on this irregularity and considers coincidences as the reliable basis of the world rather than causality. Synchronism is the prejudice of the East; causality is the modern prejudice of the West" (see also: universal causation). Contemporary scholar L. K. Kerr writes: Jung also looked to modern physics to understand the nature of synchronicity, and attempted to adapt many ideas in this field to accommodate his conception of synchronicity, including the property of numinosity. He worked closely with Nobel Prize winning physicist Wolfgang Pauli and also consulted with Albert Einstein. The notion of synchronicity shares with modern physics the idea that under certain conditions, the laws governing the interactions of space and time can no longer be understood according to the principle of causality. In this regard, Jung joined modern physicists in reducing the conditions in which the laws of classical mechanics apply. It is also pointed out that, since Jung took into consideration only the narrow definition of causality—only the efficient cause—his notion of acausality is also narrow and so is not applicable to final and formal causes as understood in Aristotelian or Thomist systems. Either the final causality is inherent in synchronicity, as it leads to individuation; or synchronicity can be a kind of replacement for final causality. However, such finalism or teleology is considered to be outside the domain of modern science.[citation needed] Jung's theory, and the philosophical worldview it implies, incorporates not only mainstream scientific ideas but also esoteric ones and ideas that run counter to the mainstream. Paranormal Jung's use of the concept in arguing for the existence of paranormal phenomena has been widely considered pseudoscientific by modern scientific scepticism. Furthermore, his collaborator Wolfgang Pauli objected to Jung's dubious astrological experiments on the concept, which Jung believed to be supported by the laboratory experiments behind the uncertainty principle's formulation. Jung similarly turned to the works of parapsychologist Joseph B. Rhine to support a connection between synchronicity and the paranormal. In his book Synchronicity: An Acausal Connecting Principle, Jung wrote: How are we to recognize acausal combinations of events, since it is obviously impossible to examine all chance happenings for their causality? The answer to this is that acausal events may be expected most readily where, on closer reflection, a causal connection appears to be inconceivable. It is impossible, with our present resources, to explain ESP [extrasensory perception], or the fact of meaningful coincidence, as a phenomenon of energy. This makes an end of the causal explanation as well, for "effect" cannot be understood as anything except a phenomenon of energy. Therefore it cannot be a question of cause and effect, but of a falling together in time, a kind of simultaneity. Because of this quality of simultaneity, I have picked on the term "synchronicity" to designate a hypothetical factor equal in rank to causality as a principle of explanation.
Roderick Main, in the introduction to his 1997 book Jung on Synchronicity and the Paranormal, wrote: The culmination of Jung's lifelong engagement with the paranormal is his theory of synchronicity, the view that the structure of reality includes a principle of acausal connection which manifests itself most conspicuously in the form of meaningful coincidences. Difficult, flawed, prone to misrepresentation, this theory none the less remains one of the most suggestive attempts yet made to bring the paranormal within the bounds of intelligibility. It has been found relevant by psychotherapists, parapsychologists, researchers of spiritual experience and a growing number of non-specialists. Indeed, Jung's writings in this area form an excellent general introduction to the whole field of the paranormal. Studies In one survey of clinicians, for example, psychologists were significantly more likely than both counsellors and psychotherapists to agree that chance coincidence was an explanation for synchronicity, whereas counsellors and psychotherapists were significantly more likely than psychologists to agree that a need for unconscious material to be expressed could be an explanation for synchronicity experiences in the clinical setting. Scientific reception Since their inception, Jung's theories of synchronicity have been highly controversial and have never had widespread scientific approval. Scientific scepticism regards them as pseudoscience. Likewise, mainstream science does not support paranormal explanations of coincidences. Despite this, synchronicity experiences and the synchronicity principle continue to be studied within philosophy, cognitive science, and analytical psychology. Synchronicity is widely challenged by the sufficiency of probability theory in explaining the occurrence of coincidences, by the relationship between synchronicity experiences and cognitive biases, and by doubts about the theory's psychiatric or scientific usefulness. Psychologist Fritz Levi, a contemporary of Jung, criticised the theory in his 1952 review, published in the periodical Neue Schweizer Rundschau (New Swiss Observations). Levi saw Jung's theory as vague about the determinability of synchronistic events, saying that Jung never specifically explained his rejection of "magic causality" to which such an acausal principle as synchronicity would be related. He also questioned the theory's usefulness. In a 1981 paper, parapsychologist Charles Tart writes: [There is] a danger inherent in the concept of synchronicity. This danger is the temptation to mental laziness. If, in working with paranormal phenomena, I cannot get my experiments to replicate and cannot find any patterns in the results, then, as attached as I am to the idea of causality, it would be very tempting to say, "Well, it's synchronistic, it's forever beyond my understanding," and so (prematurely) give up trying to find a causal explanation. Sloppy use of the concept of synchronicity then becomes a way of being intellectually lazy and dodging our responsibilities. Robert Todd Carroll, writing in The Skeptic's Dictionary (2003), argues that synchronicity experiences are better explained as apophenia—the tendency for humans to find significance or meaning where none exists. He states that over a person's lifetime one can be expected to encounter several seemingly-unpredictable coincidences and that there is no need for Jung's metaphysical explanation of these occurrences. In a 2014 interview, emeritus professor and statistician David J.
Hand states: Synchronicity is an attempt to come up with an explanation for the occurrence of highly improbable coincidences between events where there is no causal link. It's based on the premise that existing physics and mathematics cannot explain such things. This is wrong, however—standard science can explain them. That's really the point of the improbability principle. What I have tried to do is pull out and make explicit how physics and mathematics, in the form of probability calculus does explain why such striking and apparently highly improbable events happen. There's no need to conjure up other forces or ideas, and there's no need to attribute mystical meaning or significance to their occurrence. In fact, we should expect them to happen, as they do, purely in the natural course of events. In a 2015 paper, scholars M. K. Johansen and M. Osman state: As theories, the main problem with both synchronicity and seriality is that they ignore the possibility that coincidences are a psychological phenomenon and focus instead on the premise that coincidences are examples of actual but hidden structures in the world. Scientific explanations Several researchers have proposed models that attempt to frame synchronicity in scientific or quasi-scientific terms. While the concept remains controversial and outside the scope of mainstream scientific consensus, these efforts reflect ongoing interdisciplinary interest in understanding perceived meaningful coincidences. Carl Jung himself speculated on the role of mathematical structures in synchronicity, referencing the Fibonacci sequence as a potential underlying principle behind synchronistic patterns. One notable proposal is physicist Gregory S. Duane’s chaotic oscillator model, described in Synchronicity from Synchronized Chaos, which draws parallels between synchronized chaos in complex systems and Jung’s notion of acausal order. Duane suggests that apparent coincidences may arise naturally in systems exhibiting chaotic synchronization, potentially offering a physical analogy to synchronicity. The Pauli–Jung Conjecture, developed through correspondence between Jung and Nobel Prize-winning physicist Wolfgang Pauli, has also drawn interest from scholars. Atmanspacher and Fuchs (2014) discuss how concepts like quantum entanglement and nonlocality might serve as metaphors—though not literal explanations—for synchronicity. From a cognitive science perspective, Johansen and Osman (2015) have argued that perceived coincidences can be understood through rational cognition models, heuristics, and confirmation bias. Their work suggests that synchronicity experiences may be psychologically explainable rather than acausal in nature. Although these approaches vary in discipline and methodology, they share a common interest in identifying potential frameworks—be they physical, cognitive, or symbolic—that might help contextualize synchronicity experiences. Examples Jung tells the following story as an example of a synchronistic event in his 1960 book Synchronicity: By way of example, I shall mention an incident from my own observation. A young woman I was treating had, at a critical moment, a dream in which she was given a golden scarab. While she was telling me this dream I sat with my back to the closed window. Suddenly I heard a noise behind me, like a gentle tapping. I turned round and saw a flying insect knocking against the window pane from outside. I opened the window and caught the creature in the air as it flew in. 
It was the nearest analogy to a golden scarab that one finds in our latitudes, a scarabaeid beetle, the common rose-chafer (Cetonia aurata), which contrary to its usual habits had evidently felt an urge to get into a dark room at this particular moment. It was an extraordinarily difficult case to treat, and up to the time of the dream little or no progress had been made. I should explain that the main reason for this was my patient's animus, which was steeped in Cartesian philosophy and clung so rigidly to its own idea of reality that the efforts of three doctors—I was the third—had not been able to weaken it. Evidently something quite irrational was needed which was beyond my powers to produce. The dream alone was enough to disturb ever so slightly the rationalistic attitude of my patient. But when the "scarab" came flying in through the window in actual fact, her natural being could burst through the armor of her animus possession and the process of transformation could at last begin to move. After describing some examples, Jung wrote: "When coincidences pile up in this way, one cannot help being impressed by them—for the greater the number of terms in such a series, or the more unusual its character, the more improbable it becomes.": 91 French writer Émile Deschamps claims in his memoirs that, in 1805, he was treated to some plum pudding by a stranger named Monsieur de Fontgibu. Ten years later, the writer encountered plum pudding on the menu of a Paris restaurant and wanted to order some, but the waiter told him that the last dish had already been served to another customer, who turned out to be de Fontgibu. Many years later, in 1832, Deschamps was at a dinner and once again ordered plum pudding. He recalled the earlier incident and told his friends that only de Fontgibu was missing to make the setting complete—and in the same instant, the now-senile de Fontgibu entered the room, having got the wrong address. In his book Thirty Years That Shook Physics: The Story of Quantum Theory (1966), George Gamow writes about Wolfgang Pauli, who was apparently considered a person particularly associated with synchronicity events. Gamow whimsically refers to the "Pauli effect", a mysterious phenomenon which is not understood on a purely materialistic basis, and probably never will be. The following anecdote is told: It is well known that theoretical physicists cannot handle experimental equipment; it breaks whenever they touch it. Pauli was such a good theoretical physicist that something usually broke in the lab whenever he merely stepped across the threshold. A mysterious event that did not seem at first to be connected with Pauli's presence once occurred in Professor J. Franck's laboratory in Göttingen. Early one afternoon, without apparent cause, a complicated apparatus for the study of atomic phenomena collapsed. Franck wrote humorously about this to Pauli at his Zürich address and, after some delay, received an answer in an envelope with a Danish stamp. Pauli wrote that he had gone to visit Bohr and at the time of the mishap in Franck's laboratory his train was stopped for a few minutes at the Göttingen railroad station. You may believe this anecdote or not, but there are many other observations concerning the reality of the Pauli Effect! In popular culture Philip K. 
Dick makes reference to "Pauli's synchronicity" in his 1963 science-fiction novel The Game-Players of Titan, referring to pre-cognitive psionic abilities being interfered with by other psionic abilities such as psychokinesis: "an acausal connective event". In 1983 The Police released an album titled Synchronicity, inspired by Arthur Koestler's discussion of synchronicity in his book The Roots of Coincidence. A song from the album, "Synchronicity II", simultaneously describes the story of a man experiencing a mental breakdown and a lurking monster emerging from a Scottish lake. Björk wrote a song titled "Synchronicity" for Spike Jonze's Hot Chocolate DVD. Rising Appalachia released a song titled "Synchronicity" on their 2015 album Wider Circles. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Digital_data] | [TOKENS: 4756] |
Contents Digital data Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits) each of which can have one of two values, either 0 or 1. Digital data can be contrasted with analog data, which is represented by a value from a continuous range of real numbers. Analog data is transmitted by an analog signal, which not only takes on continuous values but can vary continuously with time, a continuous real-valued function of time. An example is the air pressure variation in a sound wave. Data requires interpretation to become information. In modern (post-1960) computer systems, all data is digital. The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for counting. Mathematician George Stibitz of Bell Telephone Laboratories used the word digital in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns in 1942. The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form as in digital audio and digital photography. Symbol to digital conversion Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used. A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are switched. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it. For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word. Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded or converted into a number based on the status of modifier keys and the desired character encoding. 
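The polling and encoding just described can be sketched in code. The following Python fragment is a minimal illustration only: the 4x4 matrix, the key map and the simulated switch state are hypothetical stand-ins for real keyboard hardware, and the bit flags show how a few modifier switches can be packed into a single status word.

SIMULATED_SWITCHES = {(0, 1), (2, 3)}          # (x, y) positions currently pressed
KEY_MAP = {(0, 1): "B", (2, 3): "L"}           # partial key map, purely illustrative

def read_y_lines(x: int, cols: int = 4) -> list[int]:
    """Simulate driving x line `x` high and sampling the y lines."""
    return [1 if (x, y) in SIMULATED_SWITCHES else 0 for y in range(cols)]

def scan_matrix(rows: int = 4, cols: int = 4) -> list[str]:
    """Poll each x line in sequence and report which keys are pressed."""
    pressed = []
    for x in range(rows):
        for y, level in enumerate(read_y_lines(x, cols)):
            if level:
                pressed.append(KEY_MAP.get((x, y), "?"))
    return pressed

# Modifier keys for a device with only a few switches, one bit per key.
SHIFT, CTRL, ALT = 0b001, 0b010, 0b100
status = SHIFT | CTRL                          # shift and control held together

print(scan_matrix())                           # ['B', 'L']
print(bool(status & CTRL))                     # True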
A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard. It is estimated that in the year 1986, less than 1% of the world's technological capacity to store information was digital, and in 2007 it was already 94%. The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog format (the "beginning of the digital age"). States Digital data come in these three states: data at rest, data in transit, and data in use. Confidentiality, integrity, and availability have to be managed during the entire lifecycle, from the 'birth' to the destruction of the data. Data at rest in information technology means data that is housed physically on computer data storage in any digital form (e.g. cloud storage, file hosting services, databases, data warehouses, spreadsheets, archives, tapes, off-site or cloud backups, mobile devices etc.). Data at rest includes both structured and unstructured data. This type of data is subject both to threats from hackers and other malicious actors seeking to access the data digitally and to physical theft of the data storage media. To prevent this data from being accessed, modified or stolen, organizations will often employ security protection measures such as password protection, data encryption, or a combination of both. The security options used for this type of data are broadly referred to as data-at-rest protection (DARP). Definitions include: "...all data in computer storage while excluding data that is traversing a network or temporarily residing in computer memory to be read or updated." "...all data in storage but excludes any data that frequently traverses the network or that which resides in temporary memory. Data at rest includes but is not limited to archived data, data which is not accessed or changed frequently, files stored on hard drives, USB thumb drives, files stored on backup tape and disks, and also files stored off-site or on a storage area network (SAN)." It is generally accepted that archive data (i.e. data which never changes), regardless of its storage medium, is data at rest, while active data subject to constant or frequent change is data in use. “Inactive data” could be taken to mean data which may change, but infrequently. The imprecise nature of terms such as “constant” and “frequent” means that some stored data cannot be comprehensively defined as either data at rest or in use. These definitions could be taken to assume that data at rest is a superset of data in use; however, data in use, subject to frequent change, has distinct processing requirements from data at rest, whether completely static or subject to occasional change. Because of its nature, data at rest is of increasing concern to businesses, government agencies and other institutions. Mobile devices are often subject to specific security protocols to protect data at rest from unauthorized access when lost or stolen, and there is an increasing recognition that database management systems and file servers should also be considered as at risk; the longer data is left unused in storage, the more likely it might be retrieved by unauthorized individuals outside the network. Data encryption, which prevents data visibility in the event of its unauthorized access or theft, is commonly used to protect data in motion and increasingly promoted for protecting data at rest.
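As a concrete sketch of encrypting data at rest, the fragment below uses the Fernet recipe from the Python cryptography package (AES in CBC mode with an HMAC under the hood). The file name is hypothetical, and, as the passage that follows recommends, a real deployment would keep the key in a separate key store rather than alongside the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # store this separately from the data
fernet = Fernet(key)

plaintext = b"archived customer records ..."
ciphertext = fernet.encrypt(plaintext)

with open("records.enc", "wb") as f:        # only ciphertext ever reaches the disk
    f.write(ciphertext)

with open("records.enc", "rb") as f:        # later: read back and decrypt
    restored = fernet.decrypt(f.read())

assert restored == plaintext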
The encryption of data at rest should only include strong encryption methods such as AES or RSA. Encrypted data should remain encrypted when access controls such as usernames and passwords fail. Applying encryption at multiple levels is recommended. Cryptography can be implemented on the database housing the data and on the physical storage where the databases are stored. Data encryption keys should be updated on a regular basis. Encryption keys should be stored separately from the data. Encryption also enables crypto-shredding at the end of the data or hardware lifecycle. Periodic auditing of sensitive data should be part of policy and should occur at scheduled intervals. Finally, organizations should store only the minimum possible amount of sensitive data. Tokenization is a non-mathematical approach to protecting data at rest that replaces sensitive data with non-sensitive substitutes, referred to as tokens, which have no extrinsic or exploitable meaning or value. This process does not alter the type or length of data, which means it can be processed by legacy systems such as databases that may be sensitive to data length and type. Tokens require significantly fewer computational resources to process and less storage space in databases than traditionally encrypted data. This is achieved by keeping specific data fully or partially visible for processing and analytics while sensitive information is kept hidden. Lower processing and storage requirements make tokenization an ideal method of securing data at rest in systems that manage large volumes of data. A further method of preventing unwanted access to data at rest is the use of data federation, especially when data is distributed globally (e.g. in off-shore archives). An example of this would be a European organisation which stores its archived data off-site in the US. Under the terms of the USA PATRIOT Act, the American authorities can demand access to all data physically stored within its boundaries, even if it includes personal information on European citizens with no connections to the US. Data encryption alone cannot be used to prevent this as the authorities have the right to demand decrypted information. A data federation policy which retains personal citizen information with no foreign connections within its country of origin (separate from information which is either not personal or is relevant to off-shore authorities) is one option to address this concern. However, data stored in foreign countries can be accessed using legislation in the CLOUD Act. Data in use is an information technology term referring to active data which is stored in a non-persistent digital state or volatile memory, typically in computer random-access memory (RAM), CPU caches, or CPU registers. Data in use has also been taken to mean “active data” in the context of being in a database or being manipulated by an application. For example, some enterprise encryption gateway solutions for the cloud claim to encrypt data at rest, data in transit and data in use. Some cloud software as a service (SaaS) providers refer to data in use as any data currently being processed by applications, as the CPU and memory are utilized. Because of its nature, data in use is of increasing concern to businesses, government agencies and other institutions. Data in use, or memory, can contain sensitive data including digital certificates, encryption keys, intellectual property (software algorithms, design data), and personally identifiable information.
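The tokenization approach described earlier in this section can be illustrated with a toy sketch; the token format and the in-memory "vault" below are invented for the example and are not a production design.

import secrets

vault: dict[str, str] = {}                  # token -> original value, kept separately

def tokenize(card_number: str) -> str:
    """Replace a card number with a same-length, meaningless digit token."""
    token = "".join(secrets.choice("0123456789") for _ in range(len(card_number)))
    vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    return vault[token]

token = tokenize("4111111111111111")
print(token)                                # same length and type as the original
print(detokenize(token))                    # recoverable only via the vault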
Compromising data in use enables access to encrypted data at rest and data in motion. For example, someone with access to random access memory can parse that memory to locate the encryption key for data at rest. Once they have obtained that encryption key, they can decrypt encrypted data at rest. Threats to data in use can come in the form of cold boot attacks, malicious hardware devices, rootkits and bootkits. Encryption, which prevents data visibility in the event of its unauthorized access or theft, is commonly used to protect Data in Motion and Data at Rest and increasingly recognized as an optimal method for protecting Data in Use. There have been multiple projects to encrypt memory. Microsoft Xbox systems are designed to provide memory encryption and the company PrivateCore presently has a commercial software product vCage to provide attestation along with full memory encryption for x86 servers. Several papers have been published highlighting the availability of security-enhanced x86 and ARM commodity processors. In that work, an ARM Cortex-A8 processor is used as the substrate on which a full memory encryption solution is built. Process segments (for example, stack, code or heap) can be encrypted individually or in composition. This work marks the first full memory encryption implementation on a mobile general-purpose commodity processor. The system provides both confidentiality and integrity protections of code and data which are encrypted everywhere outside the CPU boundary. For x86 systems, AMD has a Secure Memory Encryption (SME) feature introduced in 2017 with Epyc. Intel has promised to deliver its Total Memory Encryption (TME) feature in an upcoming CPU. Operating system kernel patches such as TRESOR and Loop-Amnesia modify the operating system so that CPU registers can be used to store encryption keys and avoid holding encryption keys in RAM. While this approach is not general purpose and does not protect all data in use, it does protect against cold boot attacks. Encryption keys are held inside the CPU rather than in RAM so that data at rest encryption keys are protected against attacks that might compromise encryption keys in memory. Enclaves enable an “enclave” to be secured with encryption in RAM so that enclave data is encrypted while in RAM but available as clear text inside the CPU and CPU cache. Intel Corporation has introduced the concept of “enclaves” as part of its Software Guard Extensions. Intel revealed an architecture combining software and CPU hardware in technical papers published in 2013. Several cryptographic tools, including secure multi-party computation and homomorphic encryption, allow for the private computation of data on untrusted systems. Data in use could be operated upon while encrypted and never exposed to the system doing the processing. Data in transit, also referred to as data in motion and data in flight, is data en route between source and destination, typically on a computer network. Data in transit can be separated into two categories: information that flows over the public or untrusted network such as the Internet and data that flows in the confines of a private network such as a corporate or enterprise local area network (LAN). In computing Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. 
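As a toy illustration of the idea behind the secure multi-party computation mentioned above, the sketch below splits a value into additive shares modulo a prime so that no single share reveals it, yet the shares recombine to the secret, and sums can even be computed on the shares themselves. It shows only the basic principle, not any particular protocol.

import secrets

PRIME = 2_147_483_647                        # large prime modulus for the demo

def split(secret: int, parties: int = 3) -> list[int]:
    shares = [secrets.randbelow(PRIME) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def combine(shares: list[int]) -> int:
    return sum(shares) % PRIME

a, b = split(10), split(32)
print(combine(a), combine(b))                # 10 32
summed = [(x + y) % PRIME for x, y in zip(a, b)]
print(combine(summed))                       # 42, computed without reassembling 10 or 32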
Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals. Data pass in and out of computers via peripheral devices. Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures. Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given. Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received it is assumed that the data has a temporal reference of now. So the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading. Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine. In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data. To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are also data files. However, executable files may also contain data used by the program which is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data. The line between program and data can become blurry. An interpreter, for example, is a program. The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data. For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. 
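The blurred line between program and data can be shown directly in an interpreted language. In the sketch below a piece of Python source code is first ordinary string data (it could be stored, edited or spell-checked like any text) and only becomes a running program when the interpreter executes it; the function it defines is invented for the example.

source = "def greet(name):\n    return 'Hello, ' + name\n"

print(len(source))                 # treated as data: just a string of characters

namespace: dict = {}
exec(compile(source, "<generated>", "exec"), namespace)   # treated as a program
print(namespace["greet"]("world"))                         # Hello, world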
The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language. In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text. The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes). Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data.[citation needed] Data can be represented in computers in multiple ways, as per the following examples: Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory. This is as opposed to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general, main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location can usually store 8 or 32 bits, depending on the computer architecture. Data keys need not be a direct hardware address in memory. Indirect, abstract and logical key codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered as a row of the table. In such an organization of data, the data key is usually a value in one (or a composite of the values in several) of the columns. The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data-structures are linked to the nodes. Thus the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example; and XML is another. Data has some inherent features when it is sorted on a key. All the values for subsets of the key appear together. When passing sequentially through the data, the point at which the key, or a subset of the key, changes is referred to in data processing circles as a break, or a control break.
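The control break just described, and the aggregation it enables, can be sketched with Python's itertools.groupby on rows sorted by key; the sample rows are invented for the example.

from itertools import groupby
from operator import itemgetter

rows = [("sensor-a", 21), ("sensor-b", 19), ("sensor-a", 23), ("sensor-b", 18)]

rows.sort(key=itemgetter(0))        # all values for each key now appear together

for key, group in groupby(rows, key=itemgetter(0)):    # each new key is a control break
    values = [value for _, value in group]
    print(key, sum(values) / len(values))               # one aggregate per key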
Sorting on a key particularly facilitates the aggregation of data values on subsets of that key. Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems that used raw disk data, and early disc operating systems, reserved contiguous blocks on the disc drive for data files. In those systems, the files could be filled up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to ensure adequate free space for each file. Later file-systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due to additional seek time to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disc drives, i.e. where a logical drive can be defined as partitions from a number of physical drives. Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm. Object-oriented programming uses two basic concepts for understanding data and software: the class, which defines the layout of the data and the code that operates on it, and the object, which is an instance of a class. It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse. The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata and a structured query language protocol between client and server systems, communicating over a computer network and using a two-phase commit logging system to ensure transactional completeness when saving data. Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly or indirectly.
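The "sorted keys plus binary search" style of index mentioned above can be sketched as follows; the keys and the byte-offset locations are invented for the example.

from bisect import bisect_left

index_keys = ["alpha", "delta", "kilo", "tango", "zulu"]   # keys copied out, kept sorted
locations = [120, 480, 900, 1440, 2000]                    # e.g. byte offsets in a file

def lookup(key: str) -> int | None:
    i = bisect_left(index_keys, key)
    if i < len(index_keys) and index_keys[i] == key:
        return locations[i]
    return None                         # key absent: no sequential scan was needed

print(lookup("kilo"))                   # 900
print(lookup("echo"))                   # None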
Representing each computer in the key in this way enables the differentiation between two identical sets of data, each being processed on a different computer at the same time. Properties of digital information All digital information possesses common properties that distinguish it from analog data with respect to communications. Historical digital systems Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Race_and_ethnicity_in_the_United_States_census] | [TOKENS: 6105] |
Contents Race and ethnicity in the United States census In the United States census, the U.S. Census Bureau and the Office of Management and Budget (OMB) define a set of self-identified categories of race and ethnicity chosen by residents, with which they most closely identify. Residents can indicate their origins alongside their race, and are asked specifically whether they are of Hispanic or Latino origin in a separate question. Race and ethnicity are considered separate and distinct identities, with a person's origins considered in the census. Racial categories in the United States represent a social-political construct for the race or races that respondents consider themselves to be and, "generally reflect a social definition of race recognized in this country". The OMB defines the concept of race as outlined for the census to be not "scientific or anthropological", and takes into account "social and cultural characteristics as well as ancestry", using "appropriate scientific methodologies" that are not "primarily biological or genetic in reference." The race categories include both racial and national-origin groups. From the first United States Census in 1790 to the 1960 Census, the government's census enumerators chose a person's race. Racial categories changed over time, with different groups being added and removed with each census. Since the 1970 Census, Americans provide their own racial self-identification. This change was due to the reforms brought about by the Civil Rights Act of 1964 and the Voting Rights Act of 1965, which required more accurate census data. Since the 1980 Census, in addition to their race or races, all respondents are categorized by membership in one of two ethnic categories, which are "Hispanic or Latino" and "Not Hispanic or Latino." This practice of separating "race" and "ethnicity" as different categories has been criticized both by the American Anthropological Association and members of US Commission on Civil Rights. Since the 2000 Census, Americans have been able to identify as more than one race. In 1997, the OMB issued a Federal Register notice regarding revisions to the standards for the classification of federal data on race and ethnicity. The OMB developed race and ethnic standards in order to provide "consistent data on race and ethnicity throughout the federal government". The development of the data standards stem in large measure from new responsibilities to enforce civil rights laws. Among the changes, The OMB issued the instruction to "mark one or more races" after noting evidence of increasing numbers of mixed-race children and wanting to record diversity in a measurable way after having received requests by people who wanted to be able to acknowledge theirs and their children's full ancestry, rather than identifying with only one group. Prior to this decision, the census and other government data collections asked people to report singular races. As of 2023, the OMB built on the 1997 guidelines and suggested the addition of a Middle Eastern or North African (MENA) racial category and considered combining racial and ethnic categories into one question. In March 2024, the Office of Management and Budget published revisions to Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity that included a combined question and a MENA category, while also collecting additional detail to enable data disaggregation. 
Data on race and ethnicity The OMB states, "many federal programs are put into effect based on the race data obtained from the decennial census (i.e., promoting equal employment opportunities; assessing racial disparities in health and environmental risks). Race data is also critical for the basic research behind many policy decisions. States require this data to meet legislative redistricting requirements. The data is needed to monitor compliance with the Voting Rights Act by local jurisdictions". Data on ethnic groups are important for putting into effect a number of federal statutes (i.e., enforcing bilingual election rules under the Voting Rights Act and monitoring/enforcing equal employment opportunities under the Civil Rights Act). Data on ethnic groups is also needed by local governments to run programs and meet legislative requirements (i.e., identifying segments of the population who may not be receiving medical services under the Public Health Service Act; evaluating whether financial institutions are meeting the credit needs of minority populations under the Community Reinvestment Act). History The 1790 United States census was the first census in the history of the United States. The population of the United States was recorded as 3,929,214 as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the US Constitution and applicable laws. The law required that every household be visited, that completed census schedules be posted in two of the most public places within each jurisdiction, remain for the inspection of all concerned, and that "the aggregate amount of each description of persons" for every district be transmitted to the president. The US Marshals were also responsible for governing the census.[citation needed] About one-third of the original census data has been lost or destroyed since it was documented. The lost records, covering 1790–1830, included data from Connecticut, Delaware, Georgia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, North Carolina, Pennsylvania, Rhode Island, South Carolina, Vermont, and Virginia. However, the census was proven factual, and the existence of most of this data can be confirmed in many secondary sources pertaining to the first census. Census data included the name of the head of the family and categorized inhabitants as: free white males at least 16 years of age (to assess the country's industrial and military potential), free white males under 16 years of age, free white females, all other free persons (reported by sex and color), and slaves. Thomas Jefferson, then the Secretary of State, directed US Marshals to collect data from all 13 original states, and from the Southwest Territory. The census was not conducted in Vermont until 1791, after that state's admission to the Union as the 14th state on March 4 of that year. Some doubt surrounded the numbers, as President George Washington and Thomas Jefferson maintained the population was undercounted. Potential reasons Washington and Jefferson may have thought this include refusal to participate, poor public transportation and roads, a spread-out population, and the restraints of the technology of the time. No microdata from the 1790 population census are available, but aggregate data for small areas and their compatible cartographic boundary files can be downloaded from the National Historical Geographic Information System.
However, the categories of "Free white males" of 16 years and upward, including heads of families under 16 years, "Free white females", including heads of families, All other free persons, and "Slaves," existed in the census form. In 1800 and 1810, the age question regarding free white males was more detailed with five cohorts and included All other free persons, except "Indians not taxed", and "Slaves". The 1820 census built on the questions asked in 1810 by asking age questions about slaves. Also the term "colored" entered the census nomenclature. In addition, a question stating "Number of foreigners not naturalized" was included. In the 1830 census, a new question, which stated, "The number of White persons who were foreigners not naturalized" was included. The 1840 census was the last census conducted by U.S. marshals. This was due to the Northern members of the Whig Party opposing the controversial claim in the 1840 census that free Black Americans in the Northern United States suffered from a higher degree of "insane" or "idiotic" behavior compared to enslaved Black Americans. Starting in 1850, the Department of the Interior used a specialized census bureau to tabulate the census. The 1850 census had a dramatic shift in the way information about residents was collected. For the first time, free persons were listed individually instead of by head of household. Two questionnaires were used – one for free inhabitants and one for slaves. The question on the free inhabitants schedule about color was a column that was to be left blank if a person were white, marked "B" if a person were black, and marked "M" if a person were mulatto. Slaves were listed by owner, and classified by gender and age, not individually, and the question about color was a column that was to be marked with a "B" if the slave were black and an "M" if mulatto. For 1890, the Census Office changed the design of the population questionnaire. Residents were still listed individually, but a new questionnaire sheet was used for each family. Additionally, this was the first year that the census distinguished among different Asian ethnic groups, such as Japanese and Chinese, due to increased immigration. This census also marked the beginning of the term "race" in the questionnaires. Enumerators were instructed to write "White", "Black", "Mulatto", "Quadroon", "Octoroon", "Chinese", "Japanese", or "Indian". During 1900, the "Color or Race" question was slightly modified, removing the term "Mulatto". Also, there was an inclusion of an "Indian Population Schedule" in which "enumerators were instructed to use a special expanded questionnaire for American Indians living on reservations or in family groups off of reservations." This expanded version included the question "Fraction of person's lineage that is white." The 1910 census was similar to that of 1900, but it included a reinsertion of "Mulatto" and a question about the "mother tongue" of foreign-born individuals and individuals with foreign-born parents. "Ot" was also added to signify "other races", with space for a race to be written in. This decade's version of the Indian Population Schedule featured questions asking the individual's proportion of white, black, or American Indian lineage. The 1920 census questionnaire was similar to 1910, but excluded a separate schedule for American Indians. "Hin", "Kor", and "Fil" were also added to the "Color or Race" question, signifying Hindu (Asian Indian), Korean, and Filipino, respectively. 
The biggest change in this census was in racial classification. Enumerators were instructed to no longer use the "Mulatto" classification. Instead, they were given special instructions for reporting the race of interracial persons. A person with both white and black ancestry (termed "blood") was to be recorded as "Negro", no matter the fraction of that lineage (the "one-drop rule"). A person of mixed black and American Indian ancestry was also to be recorded as "Neg" (for "Negro") unless they were considered to be "predominantly" American Indian and accepted as such within the community. A person with both white and American Indian ancestry was to be recorded as American Indian, unless their Indigenous ancestry was small, and they were accepted as white within the community. In all situations in which a person had white and some other racial ancestry, they were to be reported as that other race.[contradictory] People who had minority interracial ancestry were to be reported as the race of their father.[contradictory] For the first and only time, "Mexican" was listed as a race. Enumerators were instructed that all people born in Mexico, or whose parents were born in Mexico, should be listed as Mexicans, and not under any other racial category. In prior censuses and in 1940, enumerators were instructed to list Mexican Americans as white, perhaps because some of them were of white background (mainly Spanish), many others mixed white and Native American and some of them Native American. The supplemental American Indian questionnaire was back, but in abbreviated form. It featured a question asking if the person was of full or mixed American Indian ancestry. President Franklin D. Roosevelt promoted a Good Neighbor policy that sought better relations with Mexico. In 1935, a federal judge ruled that three Mexican immigrants were ineligible for citizenship because they were not white, as required by federal law. Mexico protested, and Roosevelt decided to circumvent the decision and make sure the federal government treated Hispanics as white. The State Department, the Census Bureau, the Labor Department, and other government agencies therefore made sure to uniformly classify people of Mexican descent as white. This policy encouraged the League of United Latin American Citizens in its quest to minimize discrimination by asserting their whiteness. The 1940 census was the first to include separate population and housing questionnaires. The race category of "Mexican" was eliminated in 1940, and the population of Mexican descent was counted with the white population. 1940 census data was used for Japanese American internment. The Census Bureau's role was denied for decades, but was finally proven in 2007. The 1950 census questionnaire removed the word "color" from the racial question, and also removed Hindu and Korean from the race choices. The 1960 census re-added the word "color" to the racial question, and changed "Indian" to "American Indian", as well as adding Hawaiian, Part-Hawaiian, Aleut, and Eskimo. The "Other (print out race)" option was removed. This year's census included "Negro or Black", re-added Korean and the Other race option. East Indians (the term used at that time for people whose ancestry is from the Indian subcontinent) were counted as White. There was a questionnaire that was asked of only a sample of respondents. These questions were as follows: Questions on Spanish or Hispanic Origin or Descent Is this person's origin or descent? 
Mexican; Puerto Rican; Cuban; Central American; Other Spanish; or No, none of these?" The 1980 census added several options to the race question, including Vietnamese, Indian (East), Guamanian, and Samoan, and re-added Aleut. Again, the term "color" was removed from the racial question, and the following question on Spanish or Hispanic origin or descent was asked of a sample of respondents: "Is this person of Spanish/Hispanic origin or descent? No, not Spanish/Hispanic; Yes, Mexican, Mexican American, Chicano; Yes, Puerto Rican; Yes, Cuban; Yes, other Spanish/Hispanic." The racial categories in the 1990 census are as they appear in the 2000 and 2010 censuses. The 1990 census was not designed to capture multiple racial responses: when individuals marked the "other" race option and provided a multiple write-in, the response was assigned according to the race written first. "For example, a write-in of 'black-white' was assigned a code of 'black,' while a write-in of 'white-black' was assigned a code of 'white.'" The following question on Spanish or Hispanic origin was asked of a sample of respondents for the 1990 census: "Is this person of Spanish/Hispanic origin? No, not Spanish/Hispanic; Yes, Mexican, Mexican American, Chicano; Yes, Puerto Rican; Yes, Cuban; Yes, other Spanish/Hispanic, print one group ..." Census data indicate that the number of children in interracial families grew from less than one-half million in 1970 to about two million in 1990. In 1990, for interracial families with one White partner, the other parent was Black for about 20 percent of all children, the other parent was Asian for 45 percent, and the other parent was American Indian and Alaska Native for about 34 percent. Race was asked about differently in the 2000 census in several ways. Most significantly, respondents were given the option of selecting one or more race categories to indicate racial identities. Data show that nearly seven million Americans identified as members of two or more races. Because of these changes, the 2000 census data on race are not directly comparable with data from the 1990 census or earlier censuses. Use of caution is therefore recommended when interpreting changes in the racial composition of the US population over time. The following definitions apply to the 2000 census only. This census acknowledged that "race categories include both racial and national-origin groups." The federal government of the United States has mandated that, in data collection and presentation, federal agencies are required to use a minimum of two ethnicities: "Hispanic or Latino" and "Not Hispanic or Latino". The Census Bureau defines "Hispanic or Latino" as "a person of Cuban, Mexican, Puerto Rican, South or Central American or other Spanish culture or origin regardless of race." Use of the word "ethnicity" for Hispanics only is considerably more restricted than its conventional meaning, which covers other distinctions, some of which are covered by the "race" and "ancestry" questions. The distinct questions accommodate the possibility of Hispanic and Latino Americans' also declaring various racial identities (see also White Hispanic and Latino Americans, Black Hispanic and Latino Americans, and Asian Hispanic and Latino Americans). In the 2000 census, 12.5% of the US population reported "Hispanic or Latino" ethnicity and 87.5% reported "Not Hispanic or Latino" ethnicity. 
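As a concrete illustration of the race-question coding rules described above, the following is a minimal, purely hypothetical Python sketch contrasting the 1990 rule (a multiple write-in is coded to the race written first) with the 2000 rule (a respondent is tallied in every category they mark, which is why category totals can exceed the population). The function names and toy data are invented for illustration and do not represent actual Census Bureau processing.

```python
# Purely illustrative sketch of the two coding regimes described above:
# 1990 - a multiple write-in is coded to the race written first;
# 2000 - a respondent is tallied once in every race category they mark,
#        so category totals can sum to more than the number of respondents.
from collections import Counter

def code_1990_write_in(write_in: str) -> str:
    """Keep only the race written first, per the 1990 rule quoted above."""
    return write_in.split("-")[0].strip().lower()

def tally_2000(responses: list[set[str]]) -> Counter:
    """Count each respondent once in every race category they marked."""
    totals = Counter()
    for marked in responses:
        totals.update(marked)
    return totals

print(code_1990_write_in("black-white"))   # -> 'black'
print(code_1990_write_in("white-black"))   # -> 'white'

sample = [{"white"}, {"black"}, {"white", "black"}, {"white", "asian"}]
print(tally_2000(sample))  # white: 3, black: 2, asian: 1 -- six tallies for four people
```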
The 2010 census included changes designed to more clearly distinguish Hispanic ethnicity as not being a race. That included adding the sentence: "For this census, Hispanic origins are not races." Additionally, the Hispanic terms were modified from "Hispanic or Latino" to "Hispanic, Latino or Spanish origin". Although used in the census and the American Community Survey, "Some other race" is not an official race, and the Bureau considered eliminating it prior to the 2000 census. As the 2010 census form did not contain the question titled "Ancestry" found in prior censuses, there were campaigns to get non-Hispanic West Indian Americans, Turkish Americans, Armenian Americans, Arab Americans, and Iranian Americans to indicate their ethnic or national background through the race question, specifically the "Some other race" category. The Interagency Committee has suggested that the concept of marking multiple boxes be extended to the Hispanic origin question, thereby freeing individuals from having to choose between their parents' ethnic heritages. In other words, a respondent could choose both "Hispanic or Latino" and "Not Hispanic or Latino". The 2020 census featured designs similar to the 2000 and 2010 censuses. The Census Bureau again adhered to the 1997 OMB standards and thus used two separate questions to collect data on race and ethnicity. However, there were improvements in the phrasing of the race and ethnicity questions, within the OMB guidelines, intended to enhance clarity for respondents. The Hispanic origin question included the same checkboxes as the 2010 census ("Mexican, Mexican Am., Chicano", "Puerto Rican", "Cuban"), along with a "Yes, another Hispanic, Latino, or Spanish origin" option. Under this category, two changes emerged. The first was the shift from "Print origin, for example" to "Print, for example". The word "origin" was removed because testing showed that it confused respondents and carried different meanings for respondents of varying backgrounds. Furthermore, the Census Bureau updated the write-in instructions for the "Some Other Race" category, changing "Print race" to "Print race or origin" to match the primary instruction to "Mark ☒ one or more boxes AND print origins". According to the United States Census Bureau, as a result of significant feedback, a detailed write-in response and example were included for the "White" and the "Black or African Am." racial categories to accommodate a wider range of identities. There were also six example groups for each of the "White", "Black or African American", and "American Indian or Alaska Native" racial categories. In addition, after 100 years, the term "Negro" was removed from the 2020 census, as a large portion of respondents advocated for its removal. Instead, the category shifted from "Black, African Am., or Negro" to "Black or African Am." on paper questionnaires and electronic instruments. The term "African American" first appeared in the 2000 census, against a long history of offensive terminology dating back to the census's inception. The 1790 census included other "free persons" by color and "slaves". From 1850 to 1880, the codes for enumerators were generally Black (B) and Mulatto (M). In 1900, there were no specified categories on the census listing form, and the instructions called for enumerators to list "B" for "Black (or negro or negro descent)", marking the first occurrence of the controversial term "Negro". In 1930, there were specific instructions that used the term "Negro". 
Mixed persons were to be counted as "Negro" no matter how small the share of blood, also known as the one-drop rule. It was not until 1970 that the term "Black" appeared on a census form, and by 1990 the "color" wording had been eliminated from the race question. The 2020 census was also "the first to specifically solicit Middle Eastern North African (MENA) responses" through the write-in response for the White racial category. The term MENA includes the Arab American population, which is growing quickly as of 2023. This allowed the 2020 census to include disaggregated data on MENA populations, which made up over 3.5 million Americans. California, New York, and Michigan have the largest MENA populations, and Lebanese, Iranian, and Egyptian populations made up nearly half of them. This was almost triple the 2000 census' estimate of a population of 1.2 million Arab Americans, based on the "Ancestry" question rather than the racial category question. That number may have been an undercount, however, as 19% of the American population provided no answer to the "Ancestry" question. This is significant because MENA identities were previously only tracked through the "Ancestry" write-in question on the American Community Survey in 2010. The Arab American population was then estimated through the number of responses that included one or more Arab ancestries. The 2020 census changed this by explicitly prompting write-in responses with Arab American examples listed as "Print, for example, German, Irish, English, Italian, Lebanese, Egyptian, etc". The improvements are part of a larger effort reviewing the 1997 OMB guidelines, specifically to move MENA from under the White racial category into a new label. An OMB working group officially recommended a new MENA category in 2023 based on public feedback going back to 2015 and "plans to make final decisions on revisions by Summer 2024". Many people in the community "may not be perceived, nor perceive themselves, to be White". The added category could allow for more targeted funding, social programs, and political representation. A 2015 study from Rutgers University found significant inequalities in household income, citizenship rates, and English-speaking rates between New Jersey's White population and Arab population, concluding that America's White and Arab populations might be different enough both culturally and economically to justify a separate category. Another change to the Hispanic origin question was reordering the example groups from "Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on." to "Salvadoran, Dominican, Colombian, Guatemalan, Spaniard, Ecuadorian, etc." to reflect the ever-increasing geographic diversity of the Hispanic or Latino category and the variation in population totals from year to year. Though the wording of the identification and origin questions was improved, the 2020 census still showed undercounts and overcounts of Black people, Latinos, and Native Americans, according to work conducted under Robert Santos, the director of the United States Census Bureau from 2022 to 2025. A follow-up survey found continued miscounting of children under five years of age, and that American Indians and Alaska Natives living on reservations had the highest net undercount rate, similar to 2010. One of the leading factors behind the miscounts in the 2020 census was the coronavirus pandemic, which caused notable delays in the Census Bureau's Post-Enumeration Survey. 
The Post-Enumeration Survey is used to determine how accurate the census results are and to inform planning for the next national count in 2030. Furthermore, discrepancies persisted because of delays to field work, the movement of many college students and others, and the failure of some respondents to answer the questions needed for the Post-Enumeration Survey to be matched to the census. Journalist Michael Wines of The New York Times identified group quarters such as college dormitories, long-term care facilities, and prisons as the settings with the largest discrepancies in the tally, as the pandemic pushed many university students to return home, making it harder to count them in the dormitories or apartments where they normally would have been. Hispanic or Latino The 3.45-point difference in net coverage error for the Hispanic or Latino category is widely seen as problematic, but it also reflects seismic demographic shifts in the United States. Mexican immigrants have been at the center of one of the largest mass migrations in contemporary history, reaching a peak of 12.8 million in 2007, but their numbers have since declined, as reported by the Pew Research Center. The predominant reasons are shifts in political authority and policy changes resulting from the coronavirus pandemic. More specifically, the numbers of immigrants entering through permanent legal residency (green cards), of visa overstays, and of apprehensions have all shifted drastically, changing the underlying data. The total number of non-immigrant visas processed in Mexico by the US Department of State dropped 35% in 2020 compared with the prior year, from about 1.5 million in 2019 to about 960,000 in 2020. These temporary visas were processed for tourism, business, or crossing the border. Consequently, due to political shifts, apprehensions of unauthorized Mexican immigrants increased considerably after the pandemic started in 2020. In fiscal 2020, the number of detainments of Mexican adults at the US–Mexico border reached new highs under then-president Donald Trump: there were 253,118 such encounters, up 52% from 166,458 the previous year. Relation between ethnicity and race in census results The Census Bureau warns that data on race in the 2000 census are not directly comparable to those collected in previous censuses. Many residents of the United States consider race and ethnicity to be the same. In the 2000 census, respondents were tallied in each of the race groups they reported. Consequently, the totals of the racial categories sum to more than the total population because some people reported more than one race. According to James P. Allen and Eugene Turner from California State University, Northridge, by some calculations in the 2000 census the largest part-white biracial population is white/Native American and Alaskan Native, at 7,015,017, followed by white/black at 737,492, then white/Asian at 727,197, and finally white/Native Hawaiian and other Pacific Islander at 125,628. The Census Bureau implemented a Census Quality Survey, gathering data from about 50,000 households to assess the reporting of race and Hispanic origin in the 2000 census, with the purpose of creating a way to compare the 2000 census with previous censuses' racial data. In September 1997, during the process of revision of racial categories previously declared by OMB Directive No. 
15, the American Anthropological Association (AAA) recommended that OMB combine the "race" and "ethnicity" categories into one question to appear as "race/ethnicity" for the 2000 census. The Interagency Committee agreed, stating that "race" and "ethnicity" were not sufficiently defined and "that many respondents conceptualize 'race' and 'ethnicity' as one and the same [sic] underscor[ing] the need to consolidate these terms into one category, using a term that is more meaningful to the American people." The AAA also stated: The American Anthropological Association recommends the elimination of the term "race" from OMB Directive 15 during the planning for the 2010 census. During the past 50 years, "race" has been scientifically proven to not be a real, natural phenomenon. More specific, social categories such as "ethnicity" or "ethnic group" are more salient for scientific purposes and have fewer of the negative, racist connotations for which the concept of race was developed. Yet the concept of race has become thoroughly—and perniciously—woven into the cultural and political fabric of the United States. It has become an essential element of both individual identity and government policy. Because so much harm has been based on "racial" distinctions over the years, correctives for such harm must also acknowledge the impact of "racial" consciousness among the U.S. populace, regardless of the fact that "race" has no scientific justification in human biology. Eventually, however, these classifications must be transcended and replaced by more non-racist and accurate ways of representing the diversity of the U.S. population. The recommendations of the AAA were not adopted by the Census Bureau for the 2000, 2010, and 2020 censuses. This includes Hispanic, Latino, or Spanish origin, which remained an ethnicity, not a race. While race/ethnicity definitions for 2020 remained consistent, individuals who identify as White, Black/African American, and/or American Indian or Alaska Native were asked to specifically identify their racial origins. Other agencies In 2001, the National Institutes of Health adopted the new language to comply with the revisions to Directive 15, as did the Equal Employment Opportunity Commission of the United States Department of Labor in 2007. See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet_service_provider] | [TOKENS: 3038] |
Contents Internet service provider An Internet service provider (ISP) is an organization that provides a myriad of services related to accessing, using, managing, or participating in the Internet. ISPs can be organized in various forms, such as commercial, community-owned, non-profit, or otherwise privately owned. Internet services typically provided by ISPs can include internet access, internet transit, domain name registration, web hosting, and colocation. History The Internet (originally ARPAnet) was developed as a network between government research laboratories and participating departments of universities. Other companies and organizations joined by direct connection to the backbone, or by arrangements through other connected companies, sometimes using dialup tools such as UUCP. By the late 1980s, a process was set in place towards public, commercial use of the Internet. Some restrictions were removed by 1991, shortly after the introduction of the World Wide Web. During the 1980s, online service providers such as CompuServe, Prodigy, and America Online (AOL) began to offer limited capabilities to access the Internet, such as e-mail interchange, but full access to the Internet was not readily available to the general public. In 1989, the first Internet service providers, companies offering the public direct access to the Internet for a monthly fee, were established in Australia and the United States. In Brookline, Massachusetts, The World became the first commercial ISP in the US. Its first customer was served in November 1989. These companies generally offered dial-up connections, using the public telephone network to provide last-mile connections to their customers. The barriers to entry for dial-up ISPs were low and many providers emerged. However, cable television companies and the telephone carriers already had wired connections to their customers and could offer Internet connections at much higher speeds than dial-up using broadband technology such as cable modems and digital subscriber line (DSL). As a result, these companies often became the dominant ISPs in their service areas, and what was once a highly competitive ISP market became effectively a monopoly or duopoly in countries with a commercial telecommunications market, such as the United States. In 1995, NSFNET was decommissioned removing the last restrictions on the use of the Internet to carry commercial traffic and network access points were created to allow peering arrangements between commercial ISPs. On 23 April 2014, the U.S. Federal Communications Commission (FCC) was reported to be considering a new rule permitting ISPs to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunications service, thereby preserving net neutrality. On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of a U.S. Congress H.R. 
discussion draft bill, that makes concessions to net neutrality but prohibits the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers. On 31 January 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015. Adoption of this notion would reclassify Internet service from an information service to a telecommunications service and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC Chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On 12 March 2015, the FCC released the specific details of the net neutrality rules. On 13 April 2015, the FCC published the final rule on its new "Net Neutrality" regulations. These rules went into effect on 12 June 2015. Upon becoming FCC chairman in April 2017, Ajit Pai proposed an end to net neutrality, awaiting votes from the commission. On 21 November 2017, Pai announced that a vote would be held by FCC members on 14 December 2017 on whether to repeal the policy. On 11 June 2018, the repeal of the FCC's network neutrality rules took effect. Since December 31, 2021, the Affordable Connectivity Program has given U.S. households at or below 200% of the Federal Poverty Guidelines, or households that meet a number of other criteria, a discount of up to $30 per month toward internet service, or up to $75 per month on certain tribal lands. Classifications Access provider ISPs provide Internet access directly to end customers such as businesses and consumers, employing a range of technologies to connect users to their network. Available technologies have ranged from computer modems with acoustic couplers to telephone lines, to television cable (CATV), Wi-Fi, and fiber optics. For users and small businesses, traditional options include copper wires to provide dial-up, DSL, typically asymmetric digital subscriber line (ADSL), cable modem or Integrated Services Digital Network (ISDN) (typically basic rate interface). Using fiber optics to reach end users is called Fiber To The Home or similar names. Customers with more demanding requirements (such as medium-to-large businesses, or other ISPs) can use higher-speed DSL (such as single-pair high-speed digital subscriber line), Ethernet, metropolitan Ethernet, gigabit Ethernet, Frame Relay, ISDN Primary Rate Interface, Asynchronous Transfer Mode (ATM), synchronous optical networking (SONET) or MPLS over OTN. Dedicated internet access (DIA) services for businesses can be delivered using PON networks. Wireless access is another option, including cellular and satellite Internet access. Access providers may have an MPLS (Multiprotocol label switching) or formerly a SONET backbone network, and have a ring or mesh network topology in their core network. The networks run by access providers can be considered wide area networks. 
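For a rough sense of scale across the access technologies just listed, here is a small, illustrative Python sketch comparing download times for a 100 MB file. The downstream rates are assumed, ballpark figures chosen only for illustration (they are not taken from this article), and real-world throughput varies widely.

```python
# Back-of-the-envelope comparison of how long a 100 MB download takes over a few
# of the access technologies mentioned above, at assumed "typical" rates.

FILE_MB = 100
ASSUMED_DOWNSTREAM_MBPS = {
    "dial-up modem": 0.056,   # ~56 kbit/s
    "ADSL":          8.0,     # ~8 Mbit/s downstream
    "cable modem":   100.0,   # ~100 Mbit/s
    "FTTH (fiber)":  1000.0,  # ~1 Gbit/s
}

for tech, mbps in ASSUMED_DOWNSTREAM_MBPS.items():
    seconds = FILE_MB * 8 / mbps          # megabytes -> megabits, then divide by rate
    print(f"{tech:>14}: {seconds:,.1f} s")
# Under these assumptions, dial-up needs roughly four hours; fiber, under a second.
```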
ISPs can have access networks, aggregation networks/aggregation layers/distribution layers/edge routers/metro networks and a core network/backbone network; each subsequent network handles more traffic than the last. Mobile service providers also have similar networks. These providers often buy capacity on submarine cables to connect to internet exchanges and engage in private peering with other carriers and networks including Tier 1 carriers at data centers, for example by connecting to the NAP of the Americas, a data center which connects many Latin American ISPs with networks in the US. A mailbox provider is an organization that provides services for hosting electronic mail domains with access to storage for mailboxes. It provides email servers to send, receive, accept, and store email for end users or other organizations. Many mailbox providers are also access providers, but some are not (e.g., Gmail, Yahoo! Mail, Outlook.com, AOL Mail, Pobox). The definition given in RFC 6650 covers email hosting services, as well as the relevant department of companies, universities, organizations, groups, and individuals that manage their mail servers themselves. The task is typically accomplished by implementing Simple Mail Transfer Protocol (SMTP) and possibly providing access to messages through Internet Message Access Protocol (IMAP), the Post Office Protocol, Webmail, or a proprietary protocol. Internet hosting services provide email, web-hosting, or online storage services. Other services include virtual server, cloud services, or physical server operation. Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. An upstream ISP such as a tier 2 or tier 1 ISP usually has a larger network than the contracting ISP or is able to provide the contracting ISP with access to parts of the Internet the contracting ISP by itself has no access to. In the simplest case, a single connection is established to an upstream ISP and is used to transmit data to or from areas of the Internet beyond the home network; this mode of interconnection is often cascaded multiple times until reaching a tier 1 carrier. In reality, the situation is often more complex. ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and may have connections to each one of them at one or more points of presence. Transit ISPs provide large amounts of bandwidth for connecting hosting ISPs and access ISPs. Border Gateway Protocol is used by routers to connect to other networks, which are identified by their autonomous system number. Tier 2 ISPs depend on Tier 1 ISPs and often have their own networks; they must pay Tier 1 ISPs for transit or internet access, but may exchange traffic without payment, through peering, with other Tier 2 and even some Tier 1 ISPs. Tier 3 ISPs do not engage in peering and only purchase transit from Tier 2 and Tier 1 ISPs, and often specialize in offering internet service to end customers such as businesses and individuals. Some organizations act as their own ISPs and purchase transit directly from a Tier 1 ISP. Transit ISPs may use OTN (Optical transport network) or SDH/SONET (Synchronous Digital Hierarchy/Synchronous Optical Networking) with a DWDM (Dense wavelength-division multiplexing) system for transmitting data through optical fiber over long distances such as across a city or between cities. 
For transmissions in a metro area such as a city and for large customers such as data centers, special pluggable modules in routers, conforming to standards such as CFP, QSFP-DD, OSFP, 400ZR or OpenZR+ may be used alongside DWDM and many vendors have proprietary offerings. Long-haul networks transport data across longer distances than metro networks, such as through submarine cables, or connecting several metropolitan networks. Optical line systems and packet optical transport systems can also be used for data transmission in metro areas, long haul connections and data center interconnect. Ultra long haul transmission transports data over distances of over 1500 kilometers. ISPs connect to each other and to customers via data centers hosting meet-me rooms. A virtual ISP (VISP) is an operation that purchases services from another ISP, sometimes called a wholesale ISP in this context, which allow the VISP's customers to access the Internet using services and infrastructure owned and operated by the wholesale ISP. VISPs resemble mobile virtual network operators and competitive local exchange carriers for voice communications. Free ISPs are Internet service providers that provide service free of charge. Many free ISPs display advertisements while the user is connected; like commercial television, in a sense they are selling the user's attention to the advertiser. Other free ISPs, sometimes called freenets, are run on a nonprofit basis, usually with volunteer staff. A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed to operate over open 900 MHz, 2.4 GHz, 4.9, 5.2, 5.4, 5.7, and 5.8 GHz bands or licensed frequencies such as 2.5 GHz (EBS/BRS), 3.65 GHz (NN) and in the UHF band (including the MMDS frequency band) and LMDS. It is hypothesized that the vast divide between broadband connection in rural and urban areas is partially caused by a lack of competition between ISPs in rural areas, where there exists a market typically controlled by just one provider. A lack of competition problematically causes subscription rates to rise disproportionately with the quality of service in rural areas, causing broadband connection to be unaffordable for some, even when the infrastructure supports service in a given area. In contrast, consumers in urban areas typically benefit from lower rates and higher quality of broadband services, not only due to more advanced infrastructure but also the healthy economic competition caused by having several ISPs in a given area. How the difference in competition levels has potentially negatively affected the innovation and development of infrastructure in specific rural areas remains a question. The exploration and answers developed to the question could provide guidance for possible interventions and solutions meant to remedy the digital divide between rural and urban connectivity. Altnets Altnets (portmanteau of "alternative network provider") are localized broadband networks, typically formed as an alternative to monopolistic internet service providers within a region. Peering ISPs may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXPs), allowing routing of data between each network, without charging one another for the data transmitted—data that would otherwise have passed through a third upstream ISP, incurring charges from the upstream ISP. 
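To make the peering-versus-transit economics sketched above concrete, here is a minimal, purely illustrative Python example. The ISP names, the transit price, and the traffic volume are hypothetical assumptions; real interconnection agreements are far more varied.

```python
# Purely illustrative sketch of the economics described above: traffic exchanged
# over a direct peering link (e.g., at an IXP) is settlement-free, while traffic
# that must go through an upstream transit provider incurs transit charges.
# The ISP names, price, and traffic volume below are hypothetical.

PEERS = {("ISP-A", "ISP-B"), ("ISP-B", "ISP-C")}   # settlement-free peering links
TRANSIT_PRICE_PER_MBPS = 0.50                      # assumed monthly $/Mbps of transit

def monthly_cost(src: str, dst: str, mbps: float) -> float:
    """Return the assumed monthly cost of exchanging `mbps` of traffic."""
    if (src, dst) in PEERS or (dst, src) in PEERS:
        return 0.0                         # peering: no charge between the two networks
    return mbps * TRANSIT_PRICE_PER_MBPS   # otherwise, paid transit via an upstream

print(monthly_cost("ISP-A", "ISP-B", 400))  # 0.0   -- direct peers
print(monthly_cost("ISP-A", "ISP-C", 400))  # 200.0 -- no direct peering, pay for transit
```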
ISPs requiring no upstream and having only customers (end customers or peer ISPs) are called Tier 1 ISPs. Network hardware, software and specifications, as well as the expertise of network management personnel are important in ensuring that data follows the most efficient route, and upstream connections work reliably. A tradeoff between cost and efficiency is possible. Tier 1 ISPs are also interconnected with a mesh network topology. Internet Exchange Points (IXPs) are public locations where several networks are connected to each other. Public peering is done at IXPs, while private peering can be done with direct links between networks. IXPs or peering exchanges may be located in data centers. Law enforcement and intelligence assistance Internet service providers in many countries are legally required (e.g., via Communications Assistance for Law Enforcement Act (CALEA) in the U.S.) to allow law enforcement agencies to monitor some or all of the information transmitted by the ISP, or even store the browsing history of users to allow government access if needed (e.g. via the Investigatory Powers Act 2016 in the United Kingdom). Furthermore, in some countries ISPs are subject to monitoring by intelligence agencies. In the U.S., a controversial National Security Agency program known as PRISM provides for broad monitoring of Internet users traffic and has raised concerns about potential violation of the privacy protections in the Fourth Amendment to the United States Constitution. Modern ISPs integrate a wide array of surveillance and packet sniffing equipment into their networks, which then feeds the data to law-enforcement/intelligence networks (such as DCSNet in the United States, or SORM in Russia) allowing monitoring of Internet traffic in real time. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kazakh_language] | [TOKENS: 4999] |
Contents Kazakh language Kazakh is a Turkic language of the Kipchak branch spoken in Central Asia by the Kazakhs. It is closely related to Nogai, Kyrgyz and Karakalpak. It is the official language of Kazakhstan, and has official status in the Altai Republic of Russia. It is also a minority language in the Ili Kazakh Autonomous Prefecture in Xinjiang, China, and in the Bayan-Ölgii Province of western Mongolia. The language is also spoken by many ethnic Kazakhs throughout the former Soviet Union (some 472,000 in Russia according to the 2010 Russian census), Germany, and Turkey. Like other Turkic languages, Kazakh is an agglutinative language and employs vowel harmony. Kazakh builds words by adding suffixes one after another to the word stem, with each suffix expressing only one unique meaning and following a fixed sequence. Ethnologue recognizes three mutually intelligible dialect groups: Northeastern Kazakh—the most widely spoken variety, which also serves as the basis for the official language—Southern Kazakh, and Western Kazakh. The language shares a degree of mutual intelligibility with the closely related Karakalpak language, while its Western dialects maintain limited mutual intelligibility with the Altai languages. In October 2017, Kazakh president Nursultan Nazarbayev decreed that the writing system would change from using Cyrillic to Latin script by 2025. The proposed Latin alphabet has been revised several times and as of January 2021 is close to the inventory of the Turkish alphabet, though lacking the letters C and Ç and having four additional letters: Ä, Ñ, Q and Ū (though other letters such as Y have different values in the two languages). It is scheduled to be phased in from 2023 to 2031. Over one million Kazakh speakers in Xinjiang use a modified version of the Perso-Arabic script for writing. Geographic distribution Speakers of Kazakh are spread over a vast territory from the Tian Shan to the western shore of the Caspian Sea. Kazakh is the official state language of Kazakhstan, with nearly 10 million speakers (based on information from the CIA World Factbook on population and proportion of Kazakh speakers). In China, nearly two million ethnic Kazakhs and Kazakh speakers reside in the Ili Kazakh Autonomous Prefecture of Xinjiang. History The Kipchak branch of Turkic languages, from which Kazakh descends, was mainly solidified during the reign of the Golden Horde. The modern Kazakh language is said to have originated in approximately 1465 AD during the formation of the Kazakh Khanate. Modern Kazakh is likely a descendant of both Chagatay Turkic as spoken by the Timurids and Kipchak Turkic as spoken in the Golden Horde. Kazakh uses a high volume of loanwords from Persian and Arabic due to the frequent historical interactions between Kazakhs and Iranian ethnic groups to the south. Additionally, Persian was a lingua franca in the Kazakh Khanate, which allowed Kazakhs to mix Persian words into their own spoken and written vernacular. Meanwhile, Arabic was used by Kazakhs in mosques and mausoleums, serving as a language exclusively for religious contexts, similar to how Latin served as a liturgical language in the Western European cultural sphere. The Kazakhs used the Arabic script to write their language until approximately 1929. In the early 1900s, Kazakh activist Akhmet Baitursynuly reformed the Kazakh-Arabic alphabet, but his work was largely overshadowed by the Soviet presence in Central Asia. 
At that point, the new Soviet regime forced the Kazakhs to use a Latin script, and then a Cyrillic script in the 1940s. Today, Kazakhs use the Cyrillic and Latin scripts to write their language, although a presidential decree from 2017 ordered the transition from Cyrillic to Latin by 2031. Although not an endangered language, in 2024, Kazakh has been described as being placed in a somewhat vulnerable position by the Kazakhstani Minister of Science and Higher Education Sayasat Nurbek, within a category where the number of speakers is not increasing as rapidly as anticipated. Phonology and orthography Kazakh exhibits tongue-root vowel harmony, with some words of recent foreign origin (e.g., Russian, Persian, Arabic) as exceptions. There is also a system of rounding harmony which resembles that of Kyrgyz, but which does not apply as strongly and is not reflected in the orthography. This system only applies to the open vowels /e/, /ɪ/, /ʏ/ and not /ɑ/, and happens in the next syllables. Thus, (in the Latin script) jūldyz 'star', bügın 'today', and ülken 'big' are actually pronounced as jūldūz, bügün, and ülkön, respectively. The following chart depicts the consonant inventory of standard Kazakh;[b] many of the sounds, however, are allophones of other sounds or appear only in recent loanwords. The 18 consonant phonemes listed by Vajda are without parentheses—since these are phonemes, their listed place and manner of articulation are very general, and will vary from what is shown. (/t͡s/ rarely appears in normal speech.) Kazakh has 19 native consonant phonemes; these are the stops /p, b, t, d, k, ɡ, q/, fricatives /s, z, ʃ, ʒ, ʁ/, nasals /m, n, ŋ/, liquids /r, l/, and two glides /w, j/. The sounds /f, v, χ, h, t͡s, t͡ɕ/ are found only in loanwords. /ʒ/ is heard as an alveolo-palatal affricate [d͡ʒ] in the Kazakh dialects of Uzbekistan and China. The sounds [q] and [ʁ] may be analyzed as allophones of /k/ and /ɡ/ in words with back vowels, but exceptions occur in loanwords.[c] Kazakh has a system of 12 phonemic vowels, 3 of which are diphthongs. The rounding contrast and /æ/ generally only occur as phonemes in the first syllable of a word, but do occur later allophonically; see the section on harmony below for more information. Moreover, the /æ/ sound has been included artificially due to the influence of Arabic, Persian and, later, Tatar languages during the Islamic period. It can be found in some native words, however. According to Vajda, the front/back quality of vowels is actually one of neutral versus retracted tongue root. Phonetic values are paired with the corresponding character in Kazakh's Cyrillic and Latin alphabets. Kazakh exhibits tongue-root vowel harmony (also called soft-hard harmony), and arguably weakened rounding harmony which is implied in the first syllable of the word. All vowels after the first rounded syllable are the subject to this harmony with the exception of /ɑ/, and in the following syllables, e.g., өмір [wɵˈmʉr], қосы [qʰoˈsʊ]. Notably, urban Kazakh speakers tend to violate rounding harmony, as well as pronouncing Russian borrowings against the rules. Kazakh's syllable structure is (C)V(C)(C). Syllables containing consonant clusters CC typically are combination of sonorant (/r, l, n, j/) and a stop (mainly /t/). Other types of syllables are also permitted due to recent loanwords, mainly from Russian. 
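To show how the front/back (tongue-root) harmony described above conditions suffix choice, here is a small, deliberately simplified Python sketch of plural-suffix selection in the Latin orthography. The vowel and consonant classes are reduced to what the examples need, and several suffix-conditioning environments are omitted, so treat it as a toy model rather than a full description of Kazakh morphophonology.

```python
# Toy model of Kazakh plural-suffix harmony (Latin script, simplified).
# Only the classes needed for the examples are encoded; real suffix choice
# involves more stem-final environments and has exceptions.

BACK_VOWELS  = set("aoūy")     # back ("hard") vowels: a, o, ū, y
FRONT_VOWELS = set("äeıöü")    # front ("soft") vowels: ä, e, ı, ö, ü
VOWELS = BACK_VOWELS | FRONT_VOWELS
VOICED_FINALS = set("lmnñzj")  # stem-final consonants taking -dar/-der

def plural(stem: str) -> str:
    """Attach a plural suffix agreeing in backness with the stem's last vowel."""
    last_vowel = next(c for c in reversed(stem) if c in VOWELS)
    front = last_vowel in FRONT_VOWELS
    if stem[-1] in VOWELS or stem[-1] == "r":
        suffix = "ler" if front else "lar"
    elif stem[-1] in VOICED_FINALS:
        suffix = "der" if front else "dar"
    else:                          # voiceless finals take -tar/-ter
        suffix = "ter" if front else "tar"
    return stem + suffix

print(plural("bala"))    # balalar   'children' (back harmony, vowel-final stem)
print(plural("köz"))     # közder    'eyes'     (front harmony, voiced final)
print(plural("mektep"))  # mektepter 'schools'  (front harmony, voiceless final)
```

Case, possessive, and verbal suffixes alternate along the same front/back lines, which is why long agglutinated words keep a consistent vowel quality from stem to final suffix.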
Most words in Kazakh are stressed on the last syllable, with some exceptions. Nowadays, Kazakh is mostly written in the Cyrillic script, with an Arabic-based alphabet being used by Kazakh speakers in China. On 26 October 2017, via Presidential Decree 569, Kazakhstan announced it would adopt the Latin script by 2025. However, this transition has been delayed. Since the Cyrillic alphabet was originally designed for Slavic languages, it had to be modified to better fit the sounds of Turkic languages like Kazakh. Several new letters were added and some existing ones modified: ә, ғ, қ, ң, ө, ұ, ү, һ, і. The Cyrillic letter у after a consonant represents a combination of one of the sounds /ɘ/, /ʉ/, /ə/, /ʊ/ with the glide /w/, e.g., кіру [kʰɘˈrɘw], су [sʊw], көру [kʰɵˈrʉw], атысу [ɑtʰə̆ˈsəw]. The Cyrillic letter ю undergoes the same process but with /j/ at the beginning. The letter и represents a combination of sounds /ɘ/ (in front-vowel contexts) or /ə/ (in back vowel contexts) with glide /j/, e.g., тиіс [tʰɘˈjɘ̆s], оқиды [woqʰəjˈdə]. In Russian loanwords, particularly in educated speech, it is often realized as /ʲi/ (when stressed) or /ʲɪ/ (when unstressed), e.g., изоморфизм [ɪzəmɐrˈfʲizm]. The letter я represents either /jɑ/ or /jæ/ depending on vowel harmony. The letter щ represents /ʃː/, e.g. ащы [ɑʃˈʃə]. Meanwhile, the letters в, ё, ф, х, һ, ц, ч, ъ, ь, э are only used in loanwords—mostly those of Russian origin, but sometimes of Persian and Arabic origin. They are often substituted in spoken Kazakh. The table below compares the various scripts. Grammar Kazakh is generally verb-final, though various permutations on SOV (subject–object–verb) word order can be used, for example, due to topicalization. Inflectional and derivational morphology in Kazakh, both verbal and nominal, exists almost exclusively in the form of agglutinative suffixes. Kazakh is a nominative-accusative, head-final, left-branching, dependent-marking language. Kazakh has no noun class or gender system. Nouns are declined for number (singular or plural) and one of seven cases; the suffix for case is placed after the suffix for number. (A declension table in the original article gives the forms of 'child', 'hedgehog', 'Kazakh', 'school', 'person', 'flower', and 'word'.) There are eight personal pronouns in Kazakh. The declension of the pronouns is outlined in the following chart: singular pronouns exhibit irregularities, while plural pronouns do not, and irregular forms are highlighted in bold. In addition to the pronouns, there are several more sets of morphemes dealing with person. Adjectives in Kazakh are not declined for any grammatical category of the modified noun. Being a head-final language, adjectives are always placed before the noun that they modify. Kazakh has two varieties of adjectives. The comparative form can be created by appending the suffix -(y)raq/-(ı)rek or -tau/-teu/-dau/-deu to an adjective. The superlative form can be created by placing the morpheme eñ before the adjective. The superlative form can also be expressed by reduplication. Kazakh may express different combinations of tense, aspect and mood through the use of various verbal morphology or through a system of auxiliary verbs, many of which might better be considered light verbs. The present tense is a prime example of this; progressive tense in Kazakh is formed with one of four possible auxiliaries. 
These auxiliaries otyr 'sit', tūr 'stand', jür 'go' and jat 'lie', encode various shades of meaning of how the action is carried out and also interact with the lexical semantics of the root verb: telic and non-telic actions, semelfactives, durative and non-durative, punctual, etc. There are selectional restrictions on auxiliaries: motion verbs, such as бару 'go' and келу 'come' may not combine with otyr. Any verb, however, can combine with jat 'lie' to get a progressive tense meaning. While it is possible to think that different categories of aspect govern the choice of auxiliary, it is not so straightforward in Kazakh. Auxiliaries are internally sensitive to the lexical semantics of predicates, for example, verbs describing motion: Suda water-LOC balyq fish jüzedı swim-PRES-3 Suda balyq jüzedı water-LOC fish swim-PRES-3 'Fish swim in water' (general statement) Suda water-LOC balyq fish jüzıp swim-CVB jatyr AUX.3 Suda balyq jüzıp jatyr water-LOC fish swim-CVB AUX.3 'The/A fish is swimming in the water' Suda water-LOC balyq fish jüzıp swim-CVB jür AUX.3 Suda balyq jüzıp jür water-LOC fish swim-CVB AUX.3 'The fish is swimming [as it always does] in the water' Suda water-LOC balyq fish jüzıp swim-CVB tūr AUX.3 Suda balyq jüzıp tūr water-LOC fish swim-CVB AUX.3 'The fish is swimming in the water' * Suda water-LOC balyq fish jüzıp swim-CVB otyr AUX.3 * Suda balyq jüzıp otyr {} water-LOC fish swim-CVB AUX.3 *The fish has been swimming Not a possible sentence in Kazakh In addition to the complexities of the progressive tense, there are many auxiliary-converb pairs that encode a range of aspectual, modal, volitional, evidential and action- modificational meanings. For example, the pattern verb + köru, with the auxiliary verb köru 'see', indicates that the subject of the verb attempted or tried to do something (compare the Japanese てみる temiru construction). Annotated text with gloss From the first stanza and refrain of "Menıñ Qazaqstanym" ("My Kazakhstan"), the national anthem of Kazakhstan: Алтын күн аспаны {Алтын күн} аспаны [ɑ̝ɫ̪ˈt̪ʰə̃ŋ‿kʰʏ̞̃n̪ ɑ̝s̪pʰɑ̝̃ˈn̪ə] Altyn gold kün sun aspan-y sky-3.POSS Altyn kün aspan-y gold sun sky-3.POSS 'Sky of the golden sun' Алтын дән даласы Алтын дән даласы [ɑ̝ɫ̪ˈt̪ʰə̃n̪‿d̪æ̝̃n̪ d̪ɑ̝ɫ̪ɑ̝ˈs̪ə |] Altyn gold dän grain dala-sy steppe-3.POSS Altyn dän dala-sy gold grain steppe-3.POSS 'Steppe of the golden grain' Ерліктің дастаны Ерліктің дастаны [je̘r̪l̪ɪ̞k̚ˈt̪ʰɪ̞̃ŋ̟ d̪ɑ̝s̪t̪ʰɑ̝̃ˈn̪ə] Erlık-tıñ courage legend-GEN dastan-y epic-3.POSS-NOM Erlık-tıñ dastan-y {courage legend-GEN} epic-3.POSS-NOM 'The legend of courage' Еліме қарашы! Еліме қарашы! [je̘l̪ɪ̞̃ˈmʲe̘ qʰɑ̝ˈr̪ɑ̝ʃə ‖] El-ım-e country-1SG.DAT qara-şy look-IMP El-ım-e qara-şy country-1SG.DAT look-IMP 'Look at my country!' Ежелден ер деген Ежелден ер деген [je̘ʒʲe̘l̪ʲˈd̪ʲẽ̘n̪ je̘r̪ d̪ʲe̘ˈɡʲẽ̘n̪] Ejel-den antiquity-ABL er hero de-gen say-PTCP.PST Ejel-den er de-gen antiquity-ABL hero say-PTCP.PST 'Called heroes since ancient times' Даңқымыз шықты ғой Даңқымыз шықты ғой [d̪ɑ̝̃ɴqʰə̃ˈməz̪ ʃəq̚ˈt̪ʰə ʁo̞j |] Dañq-ymyz glory-1PL.POSS.NOM şyq-ty emerge-PST.3 ğoi EMPH Dañq-ymyz şyq-ty ğoi glory-1PL.POSS.NOM emerge-PST.3 EMPH 'Our glory emerged!' 
Намысын бермеген Намысын бермеген [n̪ɑ̝̃məˈs̪ə̃m‿bʲe̘r̪mʲe̘ˈɡʲẽ̘n̪] Namys-yn honor-3.POSS-ACC ber-me-gen give-NEG-PTCP.PST Namys-yn ber-me-gen honor-3.POSS-ACC give-NEG-PTCP.PST 'They did not give up their honor' Қазағым мықты ғой Қазағым мықты ғой [qʰɑ̝z̪ɑ̝ˈʁə̃m məq̚ˈt̪ʰə ʁo̞j ‖] Qazağ-ym Kazakh-1SG.POSS myqty strong ğoi EMPH Qazağ-ym myqty ğoi Kazakh-1SG.POSS strong EMPH 'My Kazakhs are mighty!' Менің елім, менің елім Менің елім, менің елім [mʲẽ̘ˈn̪ɪ̞̃ŋ̟ je̘ˈl̪ɪ̞̃m | mʲẽ̘ˈn̪ɪ̞̃ŋ̟ je̘ˈl̪ɪ̞̃m |] Men-ıñ 1SG.GEN el-ım, country-1SG.NOM menıñ 1SG.GEN el-ım country-1SG.NOM Men-ıñ el-ım, menıñ el-ım 1SG.GEN country-1SG.NOM 1SG.GEN country-1SG.NOM 'My country, my country' Гүлің болып, егілемін Гүлің болып, егілемін [ɡʏ̞ˈl̪ʏ̞̃m‿bo̞ˈɫ̪ʊp | je̘ɣɪ̞ˈl̪ʲẽ̘mɪ̞̃n̪ |] Gül-ıñ flower-2SG.NOM bol-yp, be-CVB, eg-ıl-e-mın root-PASS-PRES-1SG Gül-ıñ bol-yp, eg-ıl-e-mın flower-2SG.NOM be-CVB, root-PASS-PRES-1SG 'As your flower, I am rooted in you' Жырың болып төгілемін, елім Жырың болып төгілемін, елім [ʒəˈr̪ə̃m bo̞ˈɫ̪ʊp | t̪ʰɵɣʏ̞ˈl̪ʲẽ̘mɪ̞̃n̪ je̘ˈl̪ɪ̞̃m |] Jyr-yñ song-2SG.NOM bol-yp, be-CVB, tög-ıl-e-mın, sing-PASS-PRES-1SG, el-ım country-1SG.POSS.NOM Jyr-yñ bol-yp, tög-ıl-e-mın, el-ım song-2SG.NOM be-CVB, sing-PASS-PRES-1SG, country-1SG.POSS.NOM 'As your song, I shall be sung abound' Туған жерім менің – Қазақстаным Туған жерім менің – Қазақстаным [t̪ʰuˈʁɑ̝̃n̪‿d͡ʒʲe̘ˈr̪ɪ̞̃m mʲẽ̘ˈn̪ɪ̞̃ŋ̟ | qʰɑ̝ˌz̪ɑ̝ʁə̆s̪t̪ʰɑ̝̃ˈn̪ə̃m ‖] Tu-ğan birth-PTCP-PST jer-ım place-1SG.POSS.NOM menıñ 1SG.GEN – – Qazaqstan-ym Kazakhstan-1SG.POSS.NOM Tu-ğan jer-ım menıñ – Qazaqstan-ym birth-PTCP-PST place-1SG.POSS.NOM 1SG.GEN – Kazakhstan-1SG.POSS.NOM 'My native land – My Kazakhstan' See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Multiracial_Americans] | [TOKENS: 11417] |
Contents Multiracial Americans Multiracial Americans, also known as mixed-race Americans, are Americans who have mixed ancestry of two or more races. The term may also include Americans of mixed-race ancestry who self-identify with just one group culturally and socially (cf. the one-drop rule). In the 2020 United States census, 33.8 million individuals or 10.2% of the population, self-identified as multiracial. There is evidence that an accounting by genetic ancestry would produce a higher number. The multiracial population is the fastest growing demographic group in the United States, increasing by 276% between 2010 and 2020. This growth was driven largely by Hispanic or Latino Americans identifying as multiracial, with this group increasing from 3 million in 2010 to over 20 million in 2020, making up almost two thirds of the multiracial population. Most multiracial Hispanics identified as white and "some other race" in combination, with this group increasing from 1.6 million to 24 million between 2010 and 2021. While the multiracial population has been growing naturally for the last few decades, increasing by around 32% between 2000 and 2010, the sharp rise of 276% seen in the 2020 census has been attributed mostly to changes in the Census Bureau's methodology on counting write-in ancestry responses, rather than cultural or demographic shifts. The impact of historical racial systems, such as that created by admixture between white European colonists and Native Americans, has often led people to identify or be classified by only one ethnicity, generally that of the culture in which they were raised. Prior to the mid-20th century, many people hid their multiracial heritage because of racial discrimination against minorities. While many Americans may be considered multiracial, they often do not know it or do not identify so culturally, any more than they maintain all the differing traditions of a variety of national ancestries. After a lengthy period of formal racial segregation in the former Confederacy following the Reconstruction Era and bans on interracial marriage in various parts of the country, more people are openly forming interracial unions. In addition, social conditions have changed and many multiracial people do not believe it is socially advantageous to try to "pass" as white. Diverse immigration has brought more mixed race people into the United States, such as a significant population of Hispanics. Since the 1980s, the United States has had a growing multiracial identity movement (cf. Loving Day). Because more Americans have insisted on being allowed to acknowledge their mixed racial origins, the 2000 census for the first time allowed residents to check more than one ethno-racial identity and thereby identify as multiracial. In 2008, Barack Obama, who is of Luo (Kenyan) and Scottish lineage, was elected as the first biracial President of the United States; he acknowledges both sides of his family and identifies as African-American. Today, multiracial individuals are found in every corner of the country. Multiracial groups in the United States include many African Americans, Asian Americans, Hispanic Americans, Latino Americans, Métis Americans, Louisiana Creoles, Hapas, Melungeons and several other communities found primarily in the Eastern US. Many Native Americans are multiracial in ancestry while identifying fully as members of federally recognized tribes. 
History The American people are mostly multi-ethnic descendants of various culturally distinct immigrant groups, many of which have now developed nations. Some consider themselves multiracial, while acknowledging race as a social construct. Creolization, assimilation and integration have been continuing processes. The Civil Rights Movement and other social movements since the mid-twentieth century worked to achieve social justice and equal enforcement of civil rights under the constitution for all ethnicities. In the 2000s, less than 5% of the population identified as multiracial. In many instances, mixed racial ancestry is so far back in an individual's family history (for instance, before the Civil War or earlier), that it does not affect more recent ethnic and cultural identification. Interracial relationships, common-law marriages and marriages occurred since the earliest colonial years, especially before slavery hardened as a racial caste associated with people of African descent in Colonial America. Several of the Thirteen Colonies passed laws in the 17th century that gave children the social status of their mother, according to the principle of partus sequitur ventrem, regardless of the father's race or citizenship. This overturned the precedent in common law by which a man gave his status to his children – this had enabled communities to demand that fathers support their children, whether legitimate or not. The change increased white men's ability to use slave women sexually, as they had no responsibility for the children. As master as well as father of mixed-race children born into slavery, the men could use these people as servants or laborers or sell them as slaves. In some cases, white fathers provided for their multiracial children, paying or arranging for education or apprenticeships and freeing them, particularly during the two decades following the Revolutionary War. (The practice of providing for the children was more common in French and Spanish colonies, where a class of free people of color developed who became educated and property owners.) Many other white fathers abandoned the mixed race children and their mothers to slavery. The researcher Paul Heinegg found that most families of free people of color in colonial times were founded from the unions of white women, whether free or indentured servants and African men, slave, indentured or free. In the early years, the working-class peoples lived and worked together. Their children were free because of the status of the white women. This was in contrast to the pattern in the post-Revolutionary era, in which most mixed-race children had white fathers and Black mothers. Anti-miscegenation laws were passed in most states during the 18th, 19th and early 20th centuries, but this did not prevent white slaveholders, their sons, or other powerful white men from taking slave women as concubines and having multiracial children with them. In California and the rest of the American West, there were greater numbers of Latin American and Asian residents. These were prohibited from official relationships with whites. White legislators passed laws prohibiting marriage between European and Asian Americans until the 1950s. Interracial relationships have had a long history in North America and the United States, beginning with the intermixing of European explorers and soldiers, who took native women as companions. After European settlement increased, traders and fur trappers often married or had unions with women of native tribes. 
In the 17th century, faced with a continuing, critical labor shortage, colonists primarily in the Chesapeake Bay Colony, imported Africans as laborers, sometimes as indentured servants and, increasingly, as slaves. African slaves were also imported into New York and other northern ports by European colonists. Some African slaves were freed by their masters during these early years. In the colonial years, while conditions were more fluid, white women, indentured servant or free, and African men, servant, slave or free, made unions. Because the women were free, their mixed-race children were born free; they and their descendants formed most of the families of free people of color during the colonial period in Virginia. The scholar Paul Heinegg found that eighty percent of the free people of color in North Carolina in censuses from 1790 to 1810 could be traced to families free in Virginia in colonial years. In 1789, Olaudah Equiano, a former slave from modern-day Nigeria who was enslaved in North America, published his autobiography. He advocated interracial marriage between whites and blacks. By the late eighteenth century, visitors to the Upper South noted the high proportion of mixed-race slaves, evidence of miscegenation by white men. In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free residents as white or "other." Only the heads of households were identified by name in the federal census until 1850. Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance as mulatto (which recognized visible European ancestry in addition to African) or black. After the American Revolutionary War, the number and proportion of free people of color increased markedly in the North and the South as slaves were freed. Most northern states abolished slavery, sometimes, like New York, in programs of gradual emancipation that took more than two decades to be completed. The last slaves in New York were not freed until 1827. In connection with the Second Great Awakening, Quaker and Methodist preachers in the South urged slaveholders to free their slaves. Revolutionary ideals led many men to free their slaves, some by deed and others by will, so that from 1782 to 1810, the percentage of free people of color rose from less than one percent to nearly 10 percent of blacks in the South. Of numerous relationships between male slaveholders, overseers, or master's sons and women slaves, the most notable is likely that of President Thomas Jefferson with his slave Sally Hemings. As noted in the 2012 collaborative Smithsonian-Monticello exhibit, Slavery at Monticello: The Paradox of Liberty, Jefferson, then a widower, took Hemings as his concubine for nearly 40 years. They had six children of record; four Hemings children survived into adulthood, and he freed them all, among the very few slaves he freed. Two were allowed to "escape" to the North in 1822, and two were granted freedom by his will upon his death in 1826. Seven-eighths white by ancestry, all four of his Hemings children moved to northern states as adults; three of the four entered the white community, and all their descendants identified as white. 
Of the descendants of Madison Hemings who continued to identify as black, some in future generations eventually identified as white and "married out," while others continued to identify as African American. It was socially advantageous for the Hemings children to identify as white, in keeping with their appearance and the majority proportion of their ancestry. Although born into slavery, the Hemings children were legally white under Virginia law of the time. New laws continued to codify racial discrimination in the 20th century: the one-drop rule, for instance, was adopted in Virginia's 1924 Racial Integrity Act and in other southern states, in part influenced by the popularity of eugenics and ideas of racial purity. Memories that many whites had multiracial ancestry faded or were buried, even though many families were in fact multiracial. Similar laws had been proposed but not passed in the late nineteenth century in South Carolina and Virginia, for instance. After regaining political power in Southern states by disenfranchising blacks, white Democrats passed laws to impose Jim Crow and racial segregation to restore white supremacy. They maintained these until forced to change in the 1960s and after by enforcement of federal legislation authorizing oversight of practices to protect the constitutional rights of African Americans and other minority citizens. In 1967, the United States Supreme Court ruled in Loving v. Virginia that anti-miscegenation laws were unconstitutional. In the twentieth century, up until 1989, social service organizations typically assigned multiracial children to the racial identity of the minority parent, which reflected social practices of hypodescent. Black social workers had influenced court decisions on regulations related to identity; they argued that, because a biracial child was socially considered black, the child should be classified that way in order to identify with the group and learn to deal with discrimination. By 1990, the Census Bureau included more than a dozen ethnic/racial categories on the census, reflecting not only changing social ideas about ethnicity, but also the wide variety of immigrants who had come to reside in the United States due to changing historical forces and new immigration laws in the 1960s. With a changing society, more citizens have begun to press for acknowledgment of multiracial ancestry. The Census Bureau changed its data collection by allowing people to self-identify with more than one ethnicity. Some ethnic groups are concerned about the potential political and economic effects, as federal assistance to historically underserved groups has depended on Census data. According to the Census Bureau, as of 2002, 75% of all African Americans had multiracial ancestries. The proportion of acknowledged multiracial children in the United States is growing. Interracial partnerships are on the rise, as are transracial adoptions. In 1990, around 14% of 18- to 19-year-olds, 12% of 20- to 21-year-olds, and 7% of 34- to 35-year-olds were involved in interracial relationships (Joyner and Kao, 2005). The proportion of new marriages that are interracial increased from 11% in 2010 to 19% in 2019. Demographics According to estimates from the 2022 American Community Survey, there are 41,782,288 people identifying with multiple races in the US, making up 12.5% of the population. Excluding responses of "some other race" in combination with a single recognized category, this number is reduced to 13,658,099, or 4.1% of the population. 
Almost 90% of Americans identifying as "some other race" in combination were Hispanic/Latino in 2022, making up over 90% of the multiracial Hispanic population and over half of the total multiracial population in the US. The largest multiracial groups in the US in 2022 are: Multiracial people who wanted to acknowledge their full heritage won a victory of sorts in 1997, when the Office of Management and Budget (OMB) changed the federal regulation of racial categories to permit multiple responses. This resulted in a change to the 2000 United States Census, which allowed participants to select more than one of the six available categories, which were, in brief: "White," "Black or African-American," "Asian," "American Indian or Alaskan Native," "Native Hawaiian or other Pacific Islander" and "Other." Further details are given in the article: Race and ethnicity in the United States Census. The OMB made its directive mandatory for all government forms by 2003. In 2000, Cindy Rodriguez reported on reactions to the new census: To many mainline civil rights groups, the new census is part of a multiracial nightmare. After decades of framing racial issues in stark black and white terms, they fear that the multiracial movement will break down longstanding alliances, weakening people of color by splintering them into new subgroups. Some multiracial individuals feel marginalized by U.S. society. For example, when applying to schools or for a job or when taking standardized tests, Americans are sometimes asked to check boxes corresponding to race or ethnicity. Typically, about five race choices are given, with the instruction to "check only one." While some surveys offer an "other" box, this choice groups together individuals of many different multiracial types (e.g., European Americans/African-Americans are grouped with Asian/Native American Indians). For the write-in response category, the 2000 U.S. Census had a code listing that standardized the placement of various write-in responses within the framework of the Census's enumerated races. Whereas most responses can be assigned to one of the five enumerated races, some write-in responses fall under the "Mixture" heading and cannot be racially categorized. These include "Bi Racial, Combination, Everything, Many, Mixed, Multi National, Multiple, Several and Various". In 1997, Greg Mayeda, a member of the board of directors of the Hapa Issues Forum, attended a meeting regarding the new racial classifications for the 2000 U.S. Census. He argued against a separate multiracial category and for multiracial people being counted as all of their races, saying that a stand-alone multiracial box does not allow a person who identifies as mixed race the opportunity to be counted accurately: multiracial people are not just mixed race but representatives of all racial groups and should be counted as such, and a stand-alone multiracial box reveals very little about the background of the person checking it. According to James P. Allen and Eugene Turner from California State University, Northridge, who analyzed the 2000 Census, most multiracial people identified as part white. In addition, the breakdown is as follows: In 2010, 1.6 million Americans checked both "black" and "white" on their census forms, a figure 134% higher than the number a decade earlier. 
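The shares quoted in this demographics passage follow from simple proportion arithmetic. A minimal sketch of that arithmetic, in Python, is below; it uses only the counts and percentages stated above, and the implied total population and implied earlier count it derives are back-calculated estimates, not reported census figures.

```python
# Sketch of the proportion arithmetic behind the quoted census figures.
# Inputs are the counts and percentages stated in the text; the totals
# derived from them are estimates, not reported figures.

multiracial_2022 = 41_782_288        # 2022 ACS: people identifying with multiple races
share_2022 = 0.125                   # stated as 12.5% of the population
implied_total_pop = multiracial_2022 / share_2022   # roughly 334 million (derived)

excluding_sor = 13_658_099           # excluding "some other race" combinations
print(f"Share excluding 'some other race': {excluding_sor / implied_total_pop:.1%}")  # ~4.1%

# The 2010 black-and-white checkbox count (1.6 million) is described as 134%
# higher than a decade earlier, which implies the earlier count below.
black_white_2010 = 1_600_000
implied_2000 = black_white_2010 / (1 + 1.34)
print(f"Implied count a decade earlier: {implied_2000:,.0f}")   # roughly 680,000
```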
The growing number of interracial marriages and relationships, and of transracial and international adoptions, has increased the proportion of multiracial families. In addition, more individuals may be identifying with multiple ancestries, as the concept is more widely accepted. Multiracial American identity Despite a long history of miscegenation within the U.S. political territory and American continental landscape, advocacy for a unique social race classification to recognize direct, or recent, multiracial parentage did not begin until the 1970s. After the Civil Rights Era and the rapid integration of African-Americans into predominantly European-American institutions and residential communities, it became more socially acceptable for White-identified women to date, marry and have children with non-White men. This trend evolved into a political push for the offspring of interracial unions to fully inherit the social race classifications of both parents, regardless of the racial classification of the mother. This advocacy countered the practice, in place in the United States since the early 1800s, by which a newborn's racial classification defaulted to that of the mother, under classifications that varied from state to state over the past two centuries. In some states three-fourths African ancestry determined African identity; in others the threshold was more, or less, restrictive. The hypodescent or one-drop rule, under which one African ancestor made a person black, was adopted by Virginia in 1924. The one-drop rule was not adopted as law by South Carolina, Louisiana and other states where Creoles were or had been slaveholders. White supremacists in effect practiced the one-drop rule during chattel slavery: the offspring of White male slave masters and enslaved women were classified as slaves, with the fathers' parentage unacknowledged. Similarly, laws were passed punishing free people of mixed heritage in the same way as free black men and women, denying them basic rights. Voting, for example, which free blacks could and did do under French rule, was denied within a few years of the Louisiana Purchase of 1803. About ten percent of the slave population, according to observers, looked to be white but had known African ancestors. After the end of slavery, most of these people disappeared into the white population simply by moving. Walter White of the NAACP estimated that passing for white from 1880 to 1920 involved about 400,000 descendants of slaves. See Helen Catterall, editor, Judicial Cases Concerning American Slavery and the Negro, 5 volumes, 1935, and A Man Called White, the autobiography of Walter White. In 2009, Keith Bardwell, a justice of the peace in Robert, Louisiana, refused to officiate a wedding for an interracial couple and was subsequently sued in federal court. See refusal of interracial marriage in Louisiana. About 15% of all new marriages in the United States in 2010 were between spouses of a different race or ethnicity from one another, more than double the share in 1980 (6.7%). Given the variety of the familial and general social environments in which multiracial children are raised, along with the diversity of their appearance and heritage, generalizations about multiracial children's challenges or opportunities are not very useful. 
A 1989 article written by Charlotte Nitary reported that parents of mixed-race children often struggled over whether to teach their children to identify only with the race of their non-white parent, not to identify with a social race at all, or to identify with the racial identities of both parents. The social identity of children and of their parents in the same multiracial family may vary or be the same. Some multiracial children feel pressure from various sources to "choose" or to identify with a single racial identity. Others may feel pressure not to abandon one or more of their ethnicities, particularly those they identify with culturally. Some children grow up without race being a significant issue in their lives because they define themselves against the one-drop-rule construct. U.S. society has only slowly become socialized to this approach to plural racial heritage, as the general consensus among monoracially identified individuals has been that a plural racial identity is a choice and signals disingenuous motives toward the more oppressed inherited racial identity. By the 1990s, as more multiracial-identified students attended colleges and universities, many were met with alienation from culturally and racially homogeneous groups on campus. This nationwide trend led to the launch of many multiracial campus organizations across the country. By the 2000s, these efforts at self-identification had reached beyond educational institutions and into mainstream society. In her book Love's Revolution: Interracial Marriage, Maria P. P. Root suggests that when interracial parents divorce, their mixed-race children become threatening in circumstances where the custodial parent has remarried into a union that places an emphasis on racial identity. Some multiracial individuals attempt to claim a new category. For instance, the athlete Tiger Woods has said that he is not only African-American but "Cablinasian," as he is of Caucasian, African-American, Native American and Asian descent. Native American identity In the 2010 Census, nearly 3 million people indicated that their race was Native American (including Alaska Native). Of these, more than 27% specifically indicated "Cherokee" as their ethnic origin. Many of the First Families of Virginia claim descent from Pocahontas or some other "Indian princess". This phenomenon has been dubbed the "Cherokee Syndrome". Across the US, numerous individuals cultivate an opportunistic ethnic identity as Native American, sometimes through Cherokee heritage groups or Indian Wedding Blessings. Levels of Native American ancestry (distinct from Native American identity) differ. The genomes of self-reported African Americans averaged 0.8% Native American ancestry, those of European Americans averaged 0.18%, and those of Latinos averaged 18.0%. Many tribes, especially those in the Eastern United States, are primarily made up of individuals with an unambiguous Native American identity, despite being predominantly of European ancestry. More than 75% of those enrolled in the Cherokee Nation have less than one-quarter Cherokee blood. The former Principal Chief of the Cherokee Nation, Bill John Baker, is 1/32 Cherokee, amounting to about 3%. Historically, non-Native governments have forced numerous Native Americans to assimilate into colonial and later American society, e.g. through language shifts and conversions to Christianity. In many cases, this process occurred through the forced assimilation of children sent off to special boarding schools far from their families. 
Those who could pass for white had the advantage of white privilege. Today, after generations of racial whitening through hypergamy, a number of Native Americans may have fair skin like White Americans. Native Americans are more likely than any other racial group to practice racial exogamy, resulting in an ever-declining proportion of indigenous blood among those who claim a Native American identity. Some tribes disenroll tribal members unable to provide proof of Native ancestry, usually through a Certificate of Degree of Indian Blood. Disenrollment has become a contentious issue in Native American reservation politics. Interracial relations between Native Americans and African-Americans are a part of American history that has been neglected. The earliest record of African and Native American relations in the Americas dates to April 1502, when the first kidnapped Africans were brought to Hispaniola to serve as slaves. Some escaped inland on Santo Domingo, where the first Black Indians were born. In addition, an example of African slaves escaping from European colonists and being absorbed by Native Americans occurred as far back as 1526. In June of that year, Lucas Vázquez de Ayllón established a Spanish colony near the mouth of the Pee Dee River in what is now eastern South Carolina. The Spanish settlement was named San Miguel de Gualdape. Among the settlers were 100 enslaved Africans. That same year, the first of these African slaves fled the colony and took refuge with local Native Americans. European colonists created treaties with Native American tribes requesting the return of any runaway slaves. For example, in 1726, the governor of New York exacted a promise from the Iroquois to return all runaway slaves who had joined them. This same promise was extracted from the Huron people in 1764, and from the Delaware people in 1765, though there is no record of slaves ever being returned. Numerous advertisements requested the return of African-Americans who had married Native Americans or who spoke a Native American language. The primary exposure that Native Americans and Africans had to each other came through the institution of slavery. Native Americans learned that Africans had what Native Americans considered "Great Medicine" in their bodies, because Africans were virtually immune to the Old-World diseases that were decimating most native populations. Because of this, many tribes encouraged marriage between the two groups, to create stronger, healthier children from the unions. For African-Americans, the one-drop rule was a significant factor in ethnic solidarity. African-Americans generally shared a common cause in society regardless of their multiracial admixture or social/economic stratification. Additionally, African-Americans found it nearly impossible to learn about their Native American heritage, as many family elders withheld pertinent genealogical information. Tracing the genealogy of African-Americans can be a very difficult process, especially for descendants of Native Americans, because enslaved African-Americans were forbidden to learn to read and write, and a majority of Native Americans neither spoke English nor read or wrote it. Interracial relations among Native Americans and Europeans occurred from the earliest years of colonization. European impact was immediate, widespread and profound, more so than that of any other group that had contact with Native Americans during the early years of colonization and nationhood. 
Some early male settlers married Native American women or had informal unions with them. Early contact between Native Americans and Europeans was often charged with tension, but also had moments of friendship, cooperation and intimacy. Several marriages took place in European colonies between European men and Native women. For instance, on April 5, 1614, Pocahontas, a Powhatan woman in present-day Virginia, married the Virginian colonist John Rolfe of Jamestown. Their son Thomas Rolfe was an ancestor of many members of the First Families of Virginia. Partly as a result, discriminatory laws of the period (such as those aimed at African Americans) often exempted Native Americans from their scope. In the early 19th century, the Native American woman Sacagawea, who would help translate for and guide the Lewis and Clark Expedition in the West, married the French-Canadian trapper Toussaint Charbonneau. Some Europeans living among Native Americans were called "White Indians". They "lived in native communities for years, learned native languages fluently, attended native councils, and often fought alongside their native companions." European traders and trappers often married Native American women from tribes on the frontier and had families with them. Sometimes these marriages were made for political reasons, to cement ties between a Native American tribe and the European traders. Some traders, who kept bases in the cities, had what were called "country wives" among Native Americans, with legal European-American wives and children at home in the city. Not all abandoned their "natural" mixed-race children. Some arranged for sons to be sent to European-American schools for their education. Early European colonists were predominantly men, and Native American women were at risk of rape or sexual harassment, especially if they were enslaved. Most marriages between Europeans and Native Americans were between European men and Native American women. The social identity of the children was strongly determined by the tribe's kinship system, which determined how easily a child could be assimilated into the tribe. Among the matrilineal tribes of the Southeast, such as the Creek and Cherokee, mixed-race children generally were accepted as and identified as Indian, as they gained their social status from their mothers' clans and tribes and often grew up with their mothers and their male relatives. By contrast, among the patrilineal Omaha, for example, the child of a white man and an Omaha woman was considered "white"; such mixed-race children and their mothers would be protected, but the children could formally belong to the tribe as members only if adopted by a man. In those years, a Native American man had to get the consent of the European parents to marry a white woman. When such marriages were approved, it was with the stipulation that "he can prove to support her as a white woman in a good home". In the early twentieth century in the West, "intermarried whites" were listed in a separate category on the Dawes Rolls, when members of tribes were listed and identified for the allocation of lands to individual heads of households in the break-up of tribal communal lands in Indian Territory. This increased intermarriage, as some white men married Native Americans to gain control of land. In the late 19th century, three European-American middle-class female teachers married Native American men they had met at Hampton Institute during the years when it ran its Indian program. 
In the late nineteenth century, Charles Eastman, a physician of Sioux and European ancestry who trained at Boston University, married Elaine Goodale, a European-American woman from New England. They met and worked together in Dakota Territory when she was Superintendent of Indian Education and he was a doctor for the reservations. His maternal grandfather was Seth Eastman, an artist and Army officer from New England, who had married a Sioux woman and had a daughter with her while stationed at Fort Snelling in Minnesota. Black and African-American identity For historical reasons (slavery, partus sequitur ventrem, the one-eighth law, and the one-drop rule of 20th-century legislation), Americans with sub-Saharan African ancestry have frequently been classified as black (historically) or African-American, even if they have significant European-American or Native American ancestry. As slavery became a racial caste, those who were enslaved and others of any African ancestry were classified, by what is termed "hypodescent," according to the lower-status ethnic group. Many people of majority European ancestry and appearance "married white" and assimilated into white society for its social and economic advantages; examples include generations of families identified as Melungeons, now generally classified as white but demonstrated genetically to be of European and sub-Saharan African ancestry. Sometimes people of mixed Native American and African-American descent report that elder family members withheld pertinent genealogical information. Tracing the genealogy of African-Americans can be a very difficult process, as censuses did not identify slaves by name before the American Civil War, meaning that most African Americans did not appear by name in those records. In addition, many white fathers who used slave women sexually, even those in long-term relationships like Thomas Jefferson's with Sally Hemings, did not acknowledge their mixed-race slave children in records, so paternity was lost. Colonial records of French and Spanish slave ships and sales, and plantation records in all the former colonies, often have much more information about slaves, from which researchers are reconstructing slave family histories. Genealogists have begun to find plantation records, court records, land deeds and other sources to trace African-American families and individuals before 1870. As slaves were generally forbidden to learn to read and write, black families passed along oral histories, which have had great persistence. Similarly, Native Americans did not generally learn to read and write English, although some did in the nineteenth century. Until 1930, census enumerators used the terms free people of color and mulatto to classify people of apparent mixed race. When those terms were dropped, as a result of lobbying by the Southern Congressional bloc, the Census Bureau used only the binary classifications of black or white, as was typical in segregated southern states. In the 1980s, parents of mixed-race children began to organize and lobby for the addition of a more inclusive term of racial designation that would reflect the heritage of their children. When the U.S. government proposed the addition of the category of "biracial" or "multiracial" in 1988, the response from the public was mostly negative. 
Some African-American organizations and African-American political leaders, such as Congresswoman Diane Watson and Congressman Augustus Hawkins, were particularly vocal in their rejection of the category, as they feared the loss of political and economic power if African-Americans reduced their numbers by self-identification. Since the 1990s and 2000s, the terms mixed race, multiracial and biracial have been used more frequently in society. It is still most common in the United States (unlike some other countries with a history of slavery) for people seen as "African" in appearance to identify as, or be classified solely as, "Black" or "African-American," for cultural, social and familial reasons. President Barack Obama is of European-American and East African ancestry; he identifies as African-American. A 2007 poll, when Obama was a presidential candidate, found that Americans differed in how they classified him: a majority of whites and Hispanics classified him as biracial, but a majority of African-Americans classified him as black. A 2003 study found an average of 18.6% (±1.5%) European admixture in a population sample of 416 African-Americans from Washington, D.C. Studies of other populations in other areas have found differing percentages of ethnicity. Twenty percent of African-Americans have more than 25% European ancestry, reflecting the long history of unions between the groups. The "mostly African" group is substantially African, as 70% of African-Americans in this group have less than 15% European ancestry. The 20% of African Americans in the "mostly mixed" group (2.7% of the US population) have between 25% and 50% European ancestry. The writer Sherrel W. Stewart's assertion that "most" African-Americans have significant Native American heritage is not supported by genetic researchers who have done extensive population mapping studies. The TV series on African-American ancestry, hosted by the scholar Henry Louis Gates Jr., had genetics scholars who discussed in detail the variety of ancestries among African-Americans. They noted that the popular belief in a high rate of Native American admixture is not supported by the data that has been collected. Genetic testing of direct male and female lines evaluates only direct male and female descent, without accounting for many other ancestors. For this reason, individuals on the Gates show had fuller DNA testing. The critic Troy Duster, writing in The Chronicle of Higher Education, thought Gates' series African American Lives should have told people more about the limitations of genetic SNP testing. He said that not all ancestry may show up in the tests, especially for those who claim part-Native American descent; other experts agree. Population testing is still being done. Some Native American groups that have been sampled may not have shared the pattern of markers being searched for. Geneticists acknowledge that DNA testing cannot yet distinguish among members of differing cultural Native American nations. There is genetic evidence for three major migrations into North America, but not for more recent historic differentiation. In addition, not all Native Americans have been tested, so scientists do not know for sure that Native Americans have only the genetic markers they have identified. On census forms, the government depends on individuals' self-identification. Contemporary African-Americans possess varying degrees of admixture with European (and other) ancestry. 
They also have varying degrees of Native American ancestry. African-Americans sampled in 2010 were found to be, on average, 72.5% African, 19.6% European and 8% Asian in ancestry, with the Asian component serving as a proxy for Native American ancestry. Many free African-American families descended from unions between white women and African men in colonial Virginia. Their free descendants migrated to the frontier of Virginia, North Carolina and South Carolina in the 18th and 19th centuries. There were also similar free families in Delaware and Maryland, as documented by Paul Heinegg. In addition, many Native American women turned to African-American men because the number of Native American men had declined due to disease and warfare. Some Native American women bought African slaves but, unknown to the European sellers, the women freed the African men and married them into their respective tribes. If an African-American man had children by a Native American woman, their children were free because of the status of the mother. In their attempt to ensure white supremacy decades after emancipation, most southern states in the early 20th century created laws based on the one-drop rule, defining as black any person with known African ancestry. This was a stricter interpretation than what had prevailed in the 19th century; it ignored the many mixed families in these states and went against commonly accepted social rules of judging a person by appearance and association. Some courts called it "the traceable amount rule." Anthropologists called it an example of a hypodescent rule, meaning that racially mixed persons were assigned the status of the socially subordinate group. Prior to the one-drop rule, different states had different laws regarding color. More importantly, social acceptance often played a bigger role in how a person was perceived and how identity was construed than any law. In frontier areas, there were fewer questions about origins. The community looked at how people performed, whether they served in the militia and voted, which were the responsibilities and signs of free citizens. When questions about racial identity arose because of inheritance issues, for instance, litigation outcomes often were based on how people were accepted by neighbors. The U.S. Census dropped the mulatto category in 1930; that year enumerators were instructed to classify people in a binary way, as white or black. This was a result of the Southern-dominated Congress convincing the Census Bureau to change its rules. After the Civil War, racial segregation forced African Americans to share more of a common lot in society than they might have otherwise, given their widely varying ancestry, educational and economic levels. The binary division altered the separate status of the traditionally free people of color in Louisiana, for instance, although they maintained a strong Louisiana Créole culture related to French culture and language, and the practice of Catholicism. African Americans began to create common cause, regardless of their multiracial admixture or social and economic stratification. In the 20th century, during the rise of the Civil Rights and Black Power movements, the African-American community increased its own pressure for people of any portion of African descent to be claimed by the black community, adding to its power. 
By the 1980s, parents of mixed-race children (and adults of mixed-race ancestry) began to organize and lobby for the ability to show more than one ethnic category on Census and other legal forms. They refused to be put into just one category. When the U.S. government proposed the addition of the category of "biracial" or "multiracial" in 1988, the response from the general public was mostly negative. Some African-American organizations and political leaders, such as Senator Diane Watson and Representative Augustus Hawkins, were particularly vocal in their rejection of the category. They feared a loss of political and economic power if African-Americans abandoned their single category. This reaction is characterized as "historical irony" by Reginald Daniel (2002). The African-American self-designation had been a response to the one-drop rule, but then people resisted the chance to claim their multiple heritages. At bottom was a desire not to lose the political power of the larger group. Whereas people had once resisted being characterized as one group regardless of their ranges of ancestry, now some of their own were trying to keep them in the same group. Since the late twentieth century, the number of immigrants of African descent from Africa and the Caribbean has increased in the United States. Together with publicity about the ancestry of President Barack Obama, whose father was from Kenya, this has led some black writers to argue that new terms are needed for recent immigrants. A growing consensus suggests that the term African-American should refer strictly to the descendants of American Colonial Era chattel slaves, a group that includes the various, subsequent Free People of Color ethnic groups who survived the chattel slavery era in the United States. It has been argued that grouping together all ethnicities of African descent, regardless of their unique ancestral circumstances, would deny the lingering effects of slavery within the community descended from American Colonial Era chattel slaves. A growing sentiment within the Descendants of American Colonial Era Chattel Slaves (DOS) population insists that ethnic African immigrants, other descendants of the Trans-Atlantic Slave Trade, and others relegated, or self-designated, to the Black racial classification recognize their own unique familial, genealogical, ancestral, social, political and cultural backgrounds. In a New York Daily News column entitled "What Obama Isn't: Black Like Me," Stanley Crouch wrote, "Obama's mother is of white U.S. stock. His father is a black Kenyan." During the 2008 campaign, the mixed-race columnist David Ehrenstein of the LA Times accused white liberals of flocking to Obama because he was a "Magic Negro," a term that refers to a black person with no past who simply appears to assist the agenda of the white mainstream, as cultural protagonists and drivers. Ehrenstein went on to say, "He's there to assuage white 'guilt' they feel over the role of slavery and racial segregation in American history." Reacting to media criticism of Michelle Obama during the 2008 presidential election, Charles Steele Jr., CEO of the Southern Christian Leadership Conference, said, "Why are they attacking Michelle Obama and not really attacking, to that degree, her husband? Because he has no slave blood in him." He later claimed his comment was intended to be "provocative" but declined to expand on the subject. 
Former Secretary of State Condoleezza Rice (who was famously mistaken for a "recent American immigrant" by French President Nicolas Sarkozy) said, "descendants of slaves did not get much of a head start, and I think you continue to see some of the effects of that." She has also rejected an immigrant designation for African-Americans and instead prefers the terms black or white. White and European-American identity Some of the most notable families include the Van Salees, Vanderbilts, Whitneys, Blacks, Cheswills, Newells, Battises, Bostons and Eldings of the North; the Staffords, Gibsons, Locklears, Pendarvises, Driggers, Galphins, Fairfaxes, Grinsteads (Greenstead, Grinsted and Grimsted), Johnsons, Timrods and Darnalls of the South; and the Picos and Bushes of the West. DNA analysis shows varied results regarding non-European ancestry in self-identified White Americans. A 2003 DNA analysis found that about 30% of self-identified White Americans have less than 90% European ancestry. A 2014 study performed on data obtained from 23andMe customers found that the percentage of African or American Indian ancestry among White Americans varies significantly by region, with about 5% of White Americans living in Louisiana and South Carolina having 2% or more African ancestry. Some biographical accounts include the autobiography Life on the Color Line: The True Story of a White Boy Who Discovered He Was Black by Gregory Howard Williams; One Drop: My Father's Hidden Life—A Story of Race and Family Secrets, written by Bliss Broyard about her father Anatole Broyard; the documentary Colored White Boy, about a white man in North Carolina who discovers that he is the descendant of a white plantation owner and a raped African slave; and the documentary on The Sanders Women of Shreveport, Louisiana. Passing, a phenomenon most widely noted in the United States, occurs when a person who may be literally classified as a member of one racial group (by law or by the social convention frequently applied to others with similar ancestry) is accepted or perceived ("passes") as a member of another. The phenomenon known as "passing as white" is difficult to explain in other countries or to foreign students. Typical questions are: "Shouldn't Americans say that a person who is passing as white is white or nearly all white and has previously been passing as black?" or "To be consistent, shouldn't you say that someone who is one-eighth white is passing as black?" ... A person who is one-fourth or less American Indian or Korean or Filipino is not regarded as passing if he or she intermarries with and joins fully the life of the dominant community, so the minority ancestry need not be hidden... It is often suggested that the key reason for this is that the physical differences between these other groups and whites are less pronounced than the physical differences between African blacks and whites and therefore are less threatening to whites... [W]hen ancestry in one of these racial minority groups does not exceed one-fourth, a person is not defined solely as a member of that group. Laws dating from 17th-century colonial America defined children of African slave mothers as taking the status of their mothers and being born into slavery, regardless of the race or status of the father, under partus sequitur ventrem. The association of slavery with a "race" led to slavery as a racial caste. 
But most families of free people of color formed in Virginia before the American Revolution were the descendants of unions between white women and African men, who frequently worked and lived together in the looser conditions of the early colonial period. While interracial marriage was later prohibited, white men frequently took sexual advantage of slave women, and numerous generations of multiracial children were born. By the late 1800s, it had become common among African Americans to use passing to gain educational opportunities, as did the first African-American graduate of Vassar College, Anita Florence Hemmings. Some 19th-century categorization schemes defined people by proportion of African ancestry: a person whose parents were black and white was classified as mulatto; a person with one black grandparent and three white grandparents as quadroon; and a person with one black great-grandparent and the remainder white as octoroon. The latter categories remained within an overall black or colored category, but before the Civil War, in Virginia and some other states, a person of one-eighth or less black ancestry was legally white. Some members of these categories passed temporarily or permanently as white. After whites regained power in the South following Reconstruction, they established racial segregation to reassert white supremacy, followed by laws defining people with any apparent or known African ancestry as black, under the principle of hypodescent. However, because several thousand black people crossed the color line each year, millions of white Americans have relatively recent African ancestors (within the last 250 years). A statistical analysis done in 1958 estimated that 21 percent of the white population had some African ancestors. The study concluded that the majority of Americans with some African ancestry were by then classified as white rather than black. Hispanic and Latino American identity A typical Latino American family may have members with a wide range of racial phenotypes, meaning a Latino couple may have children who look white, black, Native American or Asian, or some combination of these. Latino Americans have several self-identifications; most Latinos identify as "Some other race," while others identify as white, black, Native American or Asian, or as combinations of these. Latinos of darker skin tones have limited media representation; critics and Latinos of color have accused Latin American media of overlooking dark-skinned individuals in favor of those of lighter complexion who are blonde-haired and blue- or green-eyed, especially among actors and actresses on telenovelas, rather than the typical nonwhite Latin Americans. Pacific Islander American identity During the 19th century, Christian missionaries from Europe and the United States followed Western traders to the Hawaiian Islands, leading to a wave of Western migration to the Kingdom of Hawaii. Westerners in the Hawaiian Islands often intermarried with Native Hawaiian women, including Hawaiian royalty. These developments eventually led to a gradual change in the beauty standards of Native Hawaiian women toward a more westernized standard, which was reinforced by the refusal of Westerners to marry dark-skinned Hawaiians. While some American Pacific Islanders continue traditional cultural endogamy, many within this population now have mixed racial ancestry, sometimes combining European, Native American and East Asian ancestry. The Hawaiians originally described the mixed-race descendants as hapa. 
The term has evolved to encompass all people of mixed Asian and/or Pacific Islander ancestry. Subsequently, many ethnic Chinese also settled on the islands and married into the Pacific Islander populations. Many other Pacific Islanders outside of Hawaii do not share this history, and Asians are not the only group with which Pacific Islanders have mixed. Eurasian-American identity In its original meaning, an Amerasian is a person born in Asia to an Asian mother and a U.S. military father. Colloquially, the term has sometimes been considered synonymous with Asian-American, describing any person of mixed American and Asian parentage, regardless of the circumstances. The term "wasian" is also common slang for such individuals; it has gained popularity among younger audiences on online platforms like TikTok, where trends in the 2020s have spread the term. According to the United States Census Bureau, the number of children in interracial families grew from less than half a million in 1970 to about two million in 1990. According to James P. Allen and Eugene Turner from California State University, Northridge, by some calculations the largest part-white biracial population is white/American Indian and Alaska Native, at 7,015,017; followed by white/black at 737,492; then white/Asian at 727,197; and finally white/Native Hawaiian and other Pacific Islander at 125,628. The U.S. Census categorizes Eurasian responses in the "some other race" section as part of the Asian race. The Eurasian responses which the U.S. Census officially recognizes are Indo-European, Amerasian, and Eurasian. Afro-Asian-American identity Chinese men entered the United States as laborers, primarily on the West Coast and in western territories. Following the Reconstruction era, as blacks set up independent farms, white planters imported Chinese laborers to satisfy their need for labor. In 1882, the Chinese Exclusion Act was passed, and Chinese workers who chose to stay in the U.S. were unable to have their wives join them. In the South, some Chinese married into the black and mulatto communities, as discrimination generally meant they did not take white spouses. They rapidly left work as laborers and set up groceries in small towns throughout the South. They worked to get their children educated and socially mobile. The Afro-Asian population drastically increased by the 1950s, with a number of Afro-Asians born to African American fathers and Japanese, Korean, Vietnamese, or Filipino mothers, due to the large number of African Americans who served in the military and developed relationships with Asian women abroad. Other groups of Afro-Asians are those who are of Caribbean American descent and are considered Dougla, or of Indian or Indo-Caribbean and African or Afro-Caribbean descent. As of the census of 2000, there were 106,782 Afro-Asian individuals in the United States. In fiction The figure of the "tragic octoroon" was a stock character of abolitionist literature: a mixed-race woman raised as if she were a white woman in her white father's household, until his bankruptcy or death reduces her to a menial position. She may even be unaware of her status before being reduced to victimization. The first character of this type was the heroine of Lydia Maria Child's "The Quadroons" (1842), a short story. 
This character allowed abolitionists to draw attention to the sexual exploitation in slavery and, unlike portrayals of the suffering of the field hands, did not allow slaveholders to retort that the sufferings of Northern mill hands were no easier: the Northern mill owner would not sell his own children into slavery. Abolitionists sometimes featured attractive, escaped mulatto slaves in their public lectures to arouse sentiments against slavery. They showed Northerners those slaves who looked like them rather than an "Other"; this technique, which is labeled White slave propaganda, collapsed the separation between peoples and made it impossible for the public to ignore the brutality of slavery. Charles W. Chesnutt, an author of the post-Civil War era, explored stereotypes in his portrayal of multiracial characters in southern society in the postwar years. Even characters who had been free and possibly educated before the war had trouble making a place for themselves afterward. His stories feature mixed-race characters with complex lives. William Faulkner also portrayed the lives of mixed-race people and complex interracial families in the postwar South. Comic book writer and filmmaker Greg Pak wrote that while white filmmakers have used multiracial characters to explore themes about race and racism, many of these characters became stereotypes, which Pak described as follows: "Wild Half-Castes," "sexually destructive antagonists explicitly or implicitly perceived as unable to control the instinctive urges of their non-white heritage," who exhibited the same racial stereotypes as their "full blood" counterparts and were symbolically used by filmmakers to "[perpetuate] the association of multiraciality with sexual aberration and violence"; the "Tragic mulatto," "a typically female character who tries to pass for white but finds disaster when her non-white heritage is revealed," whose plight was used by filmmakers "to critique racism by inspiring pity"; and the "Half Breed Hero," an "empowering" stereotype whose objective of "[inspiring] identification as he actively resists white racism" is contradicted by the character being played by a white actor, reinforcing "a white liberal's dream of inclusion and authenticity" rather than "an honest depiction of a multiracial character's experiences." Pak noted that "Wild Half Caste" and "Tragic Mulatto" characters possess little to no character development, and that while many multiracial characters have appeared more frequently in films without reinforcing stereotypes, white filmmakers have mostly avoided addressing their ethnicities. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race] | [TOKENS: 6523] |
Contents Artificial intelligence arms race [Infobox: lists of leading figures and companies in the United States and China, with estimated AI spending of over $700 billion (USA, in 2026) and about $200 billion (China, over the last decade), and related topics including AI regulation (USA), data privacy and security issues, AI bias and fairness, and China's Data Security Law.] A military artificial intelligence arms race is an economic and sometimes military competition between two or more states to develop and deploy advanced AI technologies and lethal autonomous weapons systems (LAWS). The goal is to gain a strategic or tactical advantage over rivals, similar to previous arms races involving nuclear or conventional military technologies. Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better AI technology and military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the United States and China. Several influential figures and publications have emphasized that whoever develops artificial general intelligence (AGI) first could dominate global affairs in the 21st century. Russian President Vladimir Putin stated that the leader in AI will "rule the world." Experts and analysts, from researchers like Leopold Aschenbrenner to institutions like Lawfare and Foreign Policy, warn that the AGI race between major powers like the U.S. and China could reshape geopolitical power. This includes AI for surveillance, autonomous weapons, decision-making systems, cyber operations, and more. Terminology Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. LAWS have colloquially been called "slaughterbots" or "killer robots". Broadly, any competition for superior AI is sometimes framed as an "arms race". Advantages in military AI overlap with advantages in other sectors, as countries pursue both economic and military advantages, as in previous arms races throughout history. History In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place". According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015. China became a top player in artificial intelligence research in the 2010s. According to the Financial Times, in 2016, for the first time, China published more AI research papers than the entire European Union. When restricted to the number of AI papers in the top 5% of cited papers, China overtook the United States in 2016 but still lagged behind the European Union. 23% of the researchers presenting at the 2017 conference of the Association for the Advancement of Artificial Intelligence (AAAI) were Chinese. Eric Schmidt, the former executive chairman of Alphabet and chief executive of Google, has predicted that China will be the leading country in AI by 2025. Risks One risk concerns the AI race itself, whether or not the race is won by any one group. 
There are strong incentives for development teams to cut corners with regard to the safety of the system, increasing the risk of critical failures and unintended consequences. This is in part due to the perceived advantage of being the first to develop advanced AI technology. One team appearing to be on the brink of a breakthrough can encourage other teams to take shortcuts, ignore precautions and deploy a system that is less ready. Some argue that using "race" terminology at all in this context can exacerbate this effect. Another potential danger of an AI arms race is the possibility of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk. In 2023, a United States Air Force official reportedly said that during a computer test, a simulated AI drone killed the human character operating it. The USAF later said the official had misspoken and that it never conducted such simulations. A third risk of an AI arms race arises if the race is actually won by one group: the concern is the consolidation of power and technological advantage in the hands of that group. A US government report argued that "AI-enabled capabilities could be used to threaten critical infrastructure, amplify disinformation campaigns, and wage war", and that "global stability and nuclear deterrence could be undermined". By nation United States In 2014, then-Secretary of Defense Chuck Hagel posited the "Third Offset Strategy," arguing that rapid advances in artificial intelligence will define the next generation of warfare. According to data science and analytics firm Govini, the U.S. Department of Defense (DoD) increased investment in artificial intelligence, big data and cloud computing from $5.6 billion in 2011 to $7.4 billion in 2016. However, the civilian NSF budget for AI saw no increase in 2017. The Japan Times reported in 2018 that annual private U.S. investment in AI is around $70 billion. The November 2019 "Interim Report" of the United States' National Security Commission on Artificial Intelligence confirmed that AI is critical to US technological military superiority. The U.S. has many military AI combat programs, such as the Sea Hunter autonomous warship, which is designed to operate for extended periods at sea without a single crew member, and even to guide itself in and out of port. Since 2017, a temporary US Department of Defense directive has required a human operator to be kept in the loop when autonomous weapons systems take human life. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the "black box" and understand the kill-chain process. However, a major concern is how the report will be implemented. The Joint Artificial Intelligence Center (JAIC, pronounced "jake") is an American organization exploring the use of AI (particularly edge computing), Network of Networks, and AI-enhanced communication in actual combat. It is a subdivision of the United States Armed Forces and was created in June 2018. The organization's stated objective is to "transform the US Department of Defense by accelerating the delivery and adoption of AI to achieve mission impact at scale. 
The goal is to use AI to solve large and complex problem sets that span multiple combat systems; then, ensure the combat Systems and Components have real-time access to ever-improving libraries of data sets and tools." In 2023, Microsoft pitched the DoD on using DALL-E models to train its battlefield management system. OpenAI, the developer of DALL-E, removed the blanket ban on military and warfare use from its usage policies in January 2024. The Biden administration imposed restrictions on the export of advanced NVIDIA chips and GPUs to China in an effort to limit China's progress in artificial intelligence and high-performance computing. The policy aimed to prevent the use of cutting-edge U.S. technology in military or surveillance applications and to maintain a strategic advantage in the global AI race. In 2025, under the second Trump administration, the United States began a broad deregulation campaign aimed at accelerating growth in sectors critical to artificial intelligence, including nuclear energy, infrastructure, and high-performance computing. The goal was to remove regulatory barriers and attract private investment to boost domestic AI capabilities. This included easing restrictions on data usage, speeding up approvals for AI-related infrastructure projects, and incentivizing innovation in cloud computing and semiconductors. Companies like NVIDIA, Oracle, and Cisco played a central role in these efforts, expanding their AI research, data center capacity, and partnerships to help position the U.S. as a global leader in AI development. Project Maven is a Pentagon project that uses machine learning and engineering talent to distinguish people and objects in drone videos, apparently giving the government real-time battlefield command and control, and the ability to track, tag and spy on targets without human involvement. Initially the effort was led by Robert O. Work, who was concerned about China's military use of the emerging technology. Reportedly, Pentagon development stops short of an AI weapons system capable of firing on self-designated targets. The project was established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017. Also known as the Algorithmic Warfare Cross Functional Team, it is, according to U.S. Air Force Lt. Gen. Jack Shanahan in November 2017, a project "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department". Its chief, U.S. Marine Corps Col. Drew Cukor, said: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects." Project Maven has been noted by allies, such as Australia's Ian Langford, for the ability to identify adversaries by harvesting data from sensors on UAVs and satellites. At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a "deliberate workflow process" was funded by the Department [of Defense] through its "rapid acquisition authorities" for about "the next 36 months". The U.S. Department of Defense is partnering with Ukraine on "Project Artemis" to develop advanced drones that can withstand electronic warfare, blending Ukrainian simplicity and adaptability with American precision. Due to the Russia-Ukraine war, Ukraine has emerged as a leader in drone production and warfare, creating cost-effective systems that challenge traditional approaches. 
Countries like Turkey, China, and Iran are also producing affordable drones, reducing America's monopoly and reshaping warfare dynamics. U.S. efforts are focused on integrating AI, drone swarm technology, and hybrid drone systems to maintain military dominance. The democratization of drone technology raises issues, such as autonomous decision-making, counter-drone defenses, and dual-use concerns, that challenge ethical and security norms. The Stargate Project is a joint venture announced in 2025 by OpenAI CEO Sam Altman, U.S. President Donald Trump, Oracle Corporation, MGX, SoftBank Group, and other partners. The initiative aims to develop large-scale artificial intelligence (AI) infrastructure in the United States, with a projected $500 billion investment by 2029. The project focuses on building advanced data centers, custom AI hardware, and sustainable energy systems, while also supporting research, workforce development, and national AI competitiveness. It is considered an effort to position the U.S. as a global leader in AI technology. The program has been compared to the Manhattan Project because of its large scale. China is pursuing a strategic policy of military-civil fusion on AI for global technological supremacy. According to a February 2019 report by Gregory C. Allen of the Center for a New American Security, China's leadership – including General Secretary of the Chinese Communist Party Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition. Chinese military officials have said that their goal is to incorporate commercial AI technology to "narrow the gap between the Chinese military and global advanced powers." The close ties between Silicon Valley and China, and the open nature of the American research community, has made the West's most advanced AI technology easily available to China; in addition, Chinese industry has numerous home-grown AI accomplishments of its own, such as Baidu passing a notable Chinese-language speech recognition capability benchmark in 2015. As of 2017, Beijing's roadmap aims to create a $150 billion AI industry by 2030. Before 2013, Chinese defense procurement was mainly restricted to a few conglomerates; however, as of 2017, China often sources sensitive emerging technology such as drones and artificial intelligence from private start-up companies. An October 2021 report by the Center for Security and Emerging Technology found that "Most of the [Chinese military]'s AI equipment suppliers are not state-owned defense enterprises, but private Chinese tech companies founded after 2010." The report estimated that Chinese military spending on AI exceeded $1.6 billion each year. The Japan Times reported in 2018 that annual private Chinese investment in AI is under $7 billion per year. AI startups in China received nearly half of total global investment in AI startups in 2017; the Chinese filed for nearly five times as many AI patents as did Americans. China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U. N. Security Council to broach the issue. In 2018, CCP general secretary Xi Jinping called for greater international cooperation in basic AI research. Chinese officials have expressed concern that AI such as drones could lead to accidental war, especially in the absence of international norms. 
In 2019, then United States Secretary of Defense Mark Esper lashed out at China for selling drones capable of taking life with no human oversight. The focus on "intelligentized AI warfare", pursued by China, suggests a comprehensive integration of AI across all domains (land, sea, air, space, and cyber) for autonomous attack, defence and cognitive warfare. The intelligentized strategy is distinct from traditional warfare, which focuses on network-centric operations, and instead sees AI as a force multiplier that enhances decision-making, command structures, and autonomous capabilities. Unlike traditional warfare, intelligentization leverages AI to create a cognitive advantage, processing battlefield information more effectively through AI-assisted command-and-control (C2) systems, predictive analytics, and real-time data fusion, and enabling accelerated human-AI hybrid decision-making. Autonomous systems, including drone swarms and AI-powered cyber warfare, play a crucial role in this strategy. China is reported to be currently developing wingman drones, robotic ground forces, and optimised logistics to enhance combat effectiveness. The Chinese army (PLA) also emphasises cognitive warfare, using AI-driven psychological operations, social media manipulation, and predictive behavioural analysis to influence adversaries, as well as the importance of dynamic responses in which AI enhances hacking capabilities, automated SIGINT (Signals Intelligence) and adaptive tactics. However, despite this focus, some analysts believe China could be struggling to fully realise AI capability within the military environment: a "comprehensive review of dozens of Chinese-language journal articles about AI and warfare reveals that Chinese defense experts claim that Beijing is facing several technological challenges that may hinder its ability to capitalize on the advantages provided by military AI". A task force for the Strategic Implementation of AI for National Security and Defence was established in February 2018 by the Indian Ministry of Defence's Department of Defence Production. The MoD began preparing the military for AI use in 2019. The Centre for Artificial Intelligence and Robotics was approved to develop AI solutions to improve intelligence collection and analysis capabilities. In 2021, the Indian Army, with assistance from the National Security Council, began operating the Quantum Lab and Artificial Intelligence Center at the Military College of Telecommunication Engineering. With an emphasis on robotics and artificial intelligence, the Defence Research and Development Organisation and the Indian Institute of Science established the Joint Advanced Technology Programme-Center of Excellence. In 2022, the Indian Navy created an AI Core group and set up a Center of Excellence for AI and Big Data analysis at INS Valsura. The Indian Army incubated an Artificial Intelligence Offensive Drone Operations Project. During Exercise Dakshin Shakti 2021, the Indian Army integrated AI into its intelligence, surveillance, and reconnaissance architecture. In 2022, the Indian government established the Defence Artificial Intelligence Council and the Defence AI Project Agency, and it also published a list of 75 defense-related AI priority projects. The MoD earmarked ₹1,000 crore annually until 2026 for capacity building, infrastructure setup, data preparation, and AI project implementation. The Indian Army, the Indian Navy and the Indian Air Force set aside ₹100 crore annually for the development of AI-specific applications.
The military is already deploying some AI-enabled projects and equipment. At Air Force Station Rajokri, the IAF Centre of Excellence for Artificial Intelligence was established in 2022 as part of the Unit for Digitization, Automation, Artificial Intelligence, and Application Networking (UDAAN). Swarm drone systems were introduced by the Mechanised Infantry Regiment for offensive operations close to the Line of Actual Control. For offensive operations, the military began acquiring AI-enabled UAVs and swarm drones. Bharat Electronics developed AI-enabled audio transcription and analysis software for battlefield communication. The Indian Army's Research & Development branch patented an AI-based driver fatigue monitoring system for use during transport operations. As an initial investment, the Indian Armed Forces are spending about $50 million (€47.2 million) yearly on AI, according to the Delhi Policy Group. Military robots are deployed for high-altitude logistics at forward outposts. The Army is developing autonomous combat vehicles, robotic surveillance platforms, and Manned-Unmanned Teaming (MUM-T) solutions as part of the Defence AI roadmap. The MCTE is working with the Ministry of Electronics and Information Technology and the Society for Applied Microwave Electronics Engineering & Research on AI and military-grade chipsets. Phase III of AI-enabled space-based surveillance has been authorized. DRDO Chairman and Secretary of the Department of Defence Research & Development Samir V. Kamat said the agency had started concentrating on the potential use of AI in the development of military systems and subsystems. The Indian government intends to leverage the private sector's sizable AI workforce and dual-use technologies for defense by 2026. The Indian Army AI Incubation Center was established to conduct research on autonomous platforms, improved surveillance, predictive maintenance, and intelligent decision support systems. The Indian Navy launched INS Surat with AI capabilities. In 2025, Iran established a national AI action plan, with an investment of roughly US$20 billion backed by the National Development Fund of Iran, and incorporated the National Artificial Intelligence Organization. IRGC commander General Pakpur ordered the development of AI-enabled bombs, while AI has reportedly already been deployed for Afghan border control. Before the Israel-Iran war, the army had advertised AI-ready weapons, and Iran and Russia have signed a new cooperation agreement on artificial intelligence. The IRGC Navy has also tested AI-capable missiles. Russian General Viktor Bondarev, commander-in-chief of the Russian air force, stated that as early as February 2017, Russia was working on AI-guided missiles that could decide to switch targets mid-flight. The Military-Industrial Commission of Russia has approved plans to derive 30 percent of Russia's combat power from remote-controlled and AI-enabled robotic platforms by 2030. Reports by state-sponsored Russian media on potential military uses of AI increased in mid-2017. In May 2017, the CEO of Russia's Kronstadt Group, a defense contractor, stated that "there already exist completely autonomous AI operation systems that provide the means for UAV clusters, when they fulfill missions autonomously, sharing tasks between them, and interact", and that it is inevitable that "swarms of drones" will one day fly over combat zones.
Russia has been testing several autonomous and semi-autonomous combat systems, such as Kalashnikov's "neural net" combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention. In September 2017, during a National Knowledge Day address to over a million students in 16,000 Russian schools, Russian President Vladimir Putin stated "Artificial intelligence is the future, not only for Russia but for all humankind... Whoever becomes the leader in this sphere will become the ruler of the world". Putin also said it would be better to prevent any single actor achieving a monopoly, but that if Russia became the leader in AI, they would share their "technology with the rest of the world, like we are doing now with atomic and nuclear technology". Russia is establishing a number of organizations devoted to the development of military AI. In March 2018, the Russian government released a 10-point AI agenda, which calls for the establishment of an AI and Big Data consortium, a Fund for Analytical Algorithms and Programs, a state-backed AI training and education program, a dedicated AI lab, and a National Center for Artificial Intelligence, among other initiatives. In addition, Russia recently created a defense research organization, roughly equivalent to DARPA, dedicated to autonomy and robotics called the Foundation for Advanced Studies, and initiated an annual conference on "Robotization of the Armed Forces of the Russian Federation." The Russian military has been researching a number of AI applications, with a heavy emphasis on semiautonomous and autonomous vehicles. In an official statement on November 1, 2017, Viktor Bondarev, chairman of the Federation Council's Defense and Security Committee, stated that "artificial intelligence will be able to replace a soldier on the battlefield and a pilot in an aircraft cockpit" and later noted that "the day is nearing when vehicles will get artificial intelligence." Bondarev made these remarks in close proximity to the successful test of Nerehta, a crewless Russian ground vehicle that reportedly "outperformed existing [crewed] combat vehicles." Russia plans to use Nerehta as a research and development platform for AI and may one day deploy the system in combat, intelligence gathering, or logistics roles. Russia has also reportedly built a combat module for crewless ground vehicles that is capable of autonomous target identification—and, potentially, target engagement—and plans to develop a suite of AI-enabled autonomous systems. In addition, the Russian military plans to incorporate AI into crewless aerial, naval, and undersea vehicles and is currently developing swarming capabilities. It is also exploring innovative uses of AI for remote sensing and electronic warfare, including adaptive frequency hopping, waveforms, and countermeasures. Russia has also made extensive use of AI technologies for domestic propaganda and surveillance, as well as for information operations directed against the United States and U.S. allies. The Russian government has strongly rejected any ban on lethal autonomous weapon systems, suggesting that such an international ban could be ignored. The Russian invasion of Ukraine and the ensuing Russia-Ukraine war have seen significant use of AI by both sides and have also been characterised as a drone war. Advances in AI-powered GPS-denied navigation and drone swarming techniques are significantly improving operational capabilities for Ukraine.
Fully realised drone swarms, where multiple drones coordinate and make decisions autonomously, are still in the early stages of experimentation, but Ukraine is exploring and implementing these techniques in a real conflict situation. The Defense Intelligence of Ukraine (DIU) has been at the forefront of utilizing drones with some elements of autonomy for conducting long-range strikes into Russian territory. Domestic drone production has significantly expanded, with approximately 2 million drones produced in 2024, 96.2% of which were domestically manufactured. Rather than replacing human involvement, AI is primarily serving to augment existing capabilities, enhancing the speed, accuracy, and overall efficiency of numerous military functions. Perhaps the most important way in which AI has been used by Ukraine is in intelligence, surveillance, and reconnaissance (ISR) capabilities. The Ukrainian military uses Palantir's MetaConstellation software to monitor the movement of Russian troops and supplies (highlighting the blurring of boundaries between state military and commercial AI use); it aggregates data from various commercial civilian providers of satellite imagery. Ukraine also uses its own Delta system, which aggregates real-time data from drone imagery, satellite photos, acoustic signals, and text to construct an operational picture for military commanders. AI is used to prioritise incoming threats, potential targets and resource constraints. AI is also being used to process intercepted communications from Russian soldiers, selecting and outputting militarily useful information from these intercepted calls. Israel has made extensive use of AI for military applications, especially during the Gaza war. The main AI systems used for target identification are the Gospel and Lavender. Lavender, developed by Unit 8200, identifies and creates a database of individuals, mostly low-ranking militants of Hamas and the Palestinian Islamic Jihad; it has a reported 90% accuracy rate and a database of tens of thousands, and could track militants even when they were at home. The Gospel, in comparison, recommended buildings and structures rather than individuals. The acceptable collateral damage and the type of weapon used to eliminate a target are decided by IDF members. Israel's Harpy anti-radar "fire and forget" drone is designed to be launched by ground troops and to autonomously fly over an area to find and destroy radar that fits pre-determined criteria. The application of artificial intelligence is also expected to be advanced in crewless ground systems and robotic vehicles such as the Guardium MK III and later versions. These robotic vehicles are used in border defense. In 2015, the UK government opposed a ban on lethal autonomous weapons, stating that "international humanitarian law already provides sufficient regulation for this area", but that all weapons employed by UK armed forces would be "under human oversight and control". The South Korean Super aEgis II machine gun, unveiled in 2010, sees use both in South Korea and in the Middle East. It can identify, track, and destroy a moving target at a range of 4 km. While the technology can theoretically operate without human intervention, in practice safeguards are installed to require manual input. A South Korean manufacturer states, "Our weapons don't sleep, like humans must. They can see in the dark, like humans can't.
Our technology therefore plugs the gaps in human capability", and they want to "get to a place where our software can discern whether a target is friend, foe, civilian or military". Saudi Arabia entered the AI race relatively late, beginning in the early 2020s. The country announced its Vision 2030 initiative—a multi-trillion dollar plan to diversify its oil-dependent economy—under the leadership of the Public Investment Fund (PIF). A key turning point in U.S.-Saudi relations came during President Donald Trump's first foreign trip in 2017, when he visited Riyadh and signed hundreds of billions of dollars in agreements spanning defense, energy, and technology. This visit laid the groundwork for deeper U.S.-Saudi cooperation in areas like AI and tech infrastructure. In the years that followed, Saudi Arabia formed major partnerships with U.S. firms like NVIDIA, AMD, and Cisco, investing billions in semiconductors, cloud computing, and AI research. Saudi-backed startup Humain also partnered with several American firms, further strengthening the Kingdom's ties with Silicon Valley as it pushed to become a global leader in artificial intelligence by 2030. The United Arab Emirates has been expanding its role in artificial intelligence and technology through investments in infrastructure and partnerships. One major initiative is MGX, a UAE-backed technology group focused on AI development. In 2025, U.S. President Donald Trump visited the UAE, where he met with Emirati officials and business leaders. The visit included discussions on technology and economic cooperation, including potential collaborations with U.S. companies such as Oracle, NVIDIA, and Cisco. These talks focused on areas like data centers, AI hardware, and advanced computing, reflecting ongoing efforts by the UAE to strengthen its technological capabilities through international partnerships. NVIDIA, OpenAI, and Cisco have announced plans to collaborate on building one of the world's largest data centers in the United Arab Emirates. The project is part of the UAE's broader strategy to become a global technology and AI hub. The data center will support advanced cloud computing, AI model training, and data storage capabilities. The European Parliament holds the position that humans must have oversight and decision-making power over lethal autonomous weapons. However, it is up to each member state of the European Union to determine their stance on the use of autonomous weapons and the mixed stances of the member states is perhaps the greatest hindrance to the European Union's ability to develop autonomous weapons. Some members such as France, Germany, Italy, and Sweden are developing lethal autonomous weapons. Some members remain undecided about the use of autonomous military weapons and Austria has even called to ban the use of such weapons. Some EU member states have developed and are developing automated weapons. Germany has developed an active protection system, the Active Defense System, that can respond to a threat with complete autonomy in less than a millisecond. Italy plans to incorporate autonomous weapons systems into its future military plans. Proposals for international regulation The international regulation of autonomous weapons is an emerging issue for international law. 
AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process. As early as 2007, scholars such as AI professor Noel Sharkey have warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions". Miles Brundage of the University of Oxford has argued that an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends". Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons; however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons. The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect. As of 2019, 26 heads of state and 21 Nobel Peace Prize laureates have backed a ban on autonomous weapons. However, as of 2022, most major powers continue to oppose a ban on autonomous weapons. Many experts believe attempts to completely ban killer robots are likely to fail, in part because detecting treaty violations would be extremely difficult. A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons. The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as banning the attachment of an AI dead man's switch to a nuclear arsenal. Other reactions to autonomous weapons A 2015 open letter by the Future of Life Institute calling for the prohibition of lethal autonomous weapons systems has been signed by over 26,000 citizens, including physicist Stephen Hawking, Tesla magnate Elon Musk, Apple's Steve Wozniak and Twitter co-founder Jack Dorsey, and over 4,600 artificial intelligence researchers, including Stuart Russell, Bart Selman and Francesca Rossi. The Future of Life Institute has also released two fictional films, Slaughterbots (2017) and Slaughterbots - if human: kill() (2021), which portray threats of autonomous weapons and promote a ban, both of which went viral. Professor Noel Sharkey of the University of Sheffield argues that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State. Disassociation Many Western tech companies avoid being associated too closely with the U.S. military, for fear of losing access to China's market. Furthermore, some researchers, such as DeepMind CEO Demis Hassabis, are ideologically opposed to contributing to military work. For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not follow up Project Maven after the current contract expired in March 2019. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Chordate] | [TOKENS: 3896] |
Contents Chordate Clockwise: lancelet, tunicate, tiger, shark A chordate (/ˈkɔːrdeɪt/ KOR-dayt) is a bilaterian animal belonging to the phylum Chordata (/kɔːrˈdeɪtə/ kor-DAY-tə). All chordates possess, at some point during their larval or adult stages, five distinctive physical characteristics (synapomorphies) that distinguish them from other taxa. These five synapomorphies are a notochord, a hollow dorsal nerve cord, an endostyle or thyroid, pharyngeal slits, and a post-anal tail. In addition to the morphological characteristics used to define chordates, analysis of genome sequences has identified two conserved signature indels (CSIs) in their proteins: cyclophilin-like protein and inner mitochondrial membrane protease ATP23, which are exclusively shared by all vertebrates, tunicates and cephalochordates. These CSIs provide molecular means to reliably distinguish chordates from all other animals. Chordates are divided into three subphyla: Vertebrata (fish, amphibians, reptiles, birds and mammals), whose notochords are replaced by a cartilaginous/bony axial endoskeleton (spine) and which are cladistically and phylogenetically a subgroup of the clade Craniata (i.e. chordates with a skull); Tunicata or Urochordata (sea squirts, salps, and larvaceans), which only retain the synapomorphies during their larval stage; and Cephalochordata (lancelets), which resemble jawless fish but have no gills or a distinct head. The vertebrates and tunicates compose the clade Olfactores, which is sister to Cephalochordata (see diagram under Phylogeny). Extinct taxa such as the conodonts are chordates, but their internal placement is less certain. Hemichordata (which includes the acorn worms) was previously considered a fourth chordate subphylum, but is now treated as a separate phylum thought to be closer to the echinoderms; together they form the clade Ambulacraria, the sister group of the chordates. Chordata, Ambulacraria, and possibly Xenacoelomorpha are believed to form the superphylum Deuterostomia, although this was called into doubt in a 2021 publication. Chordata is the third-largest phylum of the animal kingdom (behind only the protostomal phyla Arthropoda and Mollusca) and is also one of the most ancient animal taxa. Chordate fossils have been found from as early as the Cambrian explosion over 539 million years ago. Of the more than 81,000 living species of chordates, about half are ray-finned fishes (class Actinopterygii) and the vast majority of the rest are tetrapods, a terrestrial clade of lobe-finned fishes (Sarcopterygii) that evolved air-breathing using lungs. Etymology The name "chordate" comes from the first of these synapomorphies, the notochord, which plays a significant role in chordate body plan structuring and movements. Chordates are also bilaterally symmetric, have a coelom, possess a closed circulatory system, and exhibit metameric segmentation. Although the name Chordata is attributed to William Bateson (1885), it was already in prevalent use by 1880. Ernst Haeckel described a taxon comprising tunicates, cephalochordates, and vertebrates in 1866. Though he used the German vernacular form, it is allowed under the ICZN code because of its subsequent latinization.
Anatomy Chordates form a phylum of animals that are defined by having at some stage in their lives all of the following anatomical features: There are soft constraints that separate chordates from other biological lineages, but are not part of the formal definition: Classification The following schema is from the 2015 edition of Vertebrate Palaeontology. The invertebrate chordate classes are from Fishes of the World. While it is generally structured so as to reflect evolutionary relationships (similar to a cladogram), it also retains the traditional ranks used in Linnaean taxonomy. Subphyla Cephalochordates, one of the three subdivisions of chordates, are small, "vaguely fish-shaped" animals that lack brains, clearly defined heads and specialized sense organs. These burrowing filter-feeders compose the earliest-branching chordate subphylum. The tunicates have three distinct adult shapes. Each is a member of one of three monophyletic clades. All tunicate larvae have the standard chordate features, including long, tadpole-like tails. Their larva also have rudimentary brains, light sensors and tilt sensors. The smallest of the three groups of tunicates is the Appendicularia. They retain tadpole-like shapes and active swimming all their lives, and were for a long time regarded as larvae of the other two groups. The other two groups, the sea squirts and the salps, metamorphize into adult forms which lose the notochord, nerve cord, and post anal tail. Both are soft-bodied filter feeders with multiple gill slits. They feed on plankton which they collect in their mucus. Sea squirts are sessile and consist mainly of water pumps and filter-feeding apparatus. Most attach firmly to the sea floor, where they remain in one place for life, feeding on plankton. The salps float in mid-water, feeding on plankton, and have a two-generation cycle in which one generation is solitary and the next forms chain-like colonies. The etymology of the term Urochordata (Balfour 1881) is from the ancient Greek οὐρά (oura, "tail") + Latin chorda ("cord"), because the notochord is only found in the tail. The term Tunicata (Lamarck 1816) is recognised as having precedence and is now more commonly used. Craniates all have distinct skulls. They include the hagfish, which have no vertebrae. Michael J. Benton commented that "craniates are characterized by their heads, just as chordates, or possibly all deuterostomes, are by their tails". Most craniates are vertebrates, in which the notochord is replaced by the vertebral column. It consists of a series of bony or cartilaginous cylindrical vertebrae, generally with neural arches that protect the spinal cord, and with projections that link the vertebrae. Hagfishes have incomplete braincases and no vertebrae, and are therefore not regarded as vertebrates, but they are members of the craniates, the group within which vertebrates are thought to have evolved. However the cladistic exclusion of hagfish from the vertebrates is controversial, as they may instead be degenerate vertebrates who have secondarily lost their vertebral columns. Before molecular phylogenetics, the position of lampreys was ambiguous. They have complete braincases and rudimentary vertebrae, and therefore may be regarded as vertebrates and true fish. However, molecular phylogenetics, which uses DNA to classify organisms, has produced both results that group them with vertebrates and others that group them with hagfish. 
If lampreys are more closely related to the hagfish than to the other vertebrates, this would suggest that they form a clade, which has been named the Cyclostomata. Phylogeny There is still much ongoing differential (DNA sequence based) comparison research that is trying to separate out the simplest forms of chordates. As some lineages of the 90% of species that lack a backbone or notochord might have lost these structures over time, this complicates the classification of chordates. Some chordate lineages may only be found by DNA analysis, when there is no physical trace of any chordate-like structures. Attempts to work out the evolutionary relationships of the chordates have produced several hypotheses. The current consensus is that chordates are monophyletic (meaning that the Chordata includes all and only the descendants of a single common ancestor, which is itself a chordate) and that the vertebrates' nearest relatives are tunicates. In 2016, identification of two conserved signature indels (CSIs) in the proteins cyclophilin-like protein and mitochondrial inner membrane protease ATP23, which are exclusively shared by all vertebrates, tunicates and cephalochordates, also provided strong evidence of the monophyly of Chordata. All of the earliest chordate fossils have been found in the Early Cambrian Chengjiang fauna, and include three species that are regarded as fish,[citation needed] and hence vertebrates. Because the fossil record of early chordates is poor, only molecular phylogenetics offers a reasonable prospect of dating their emergence. However, the use of molecular phylogenetics for dating evolutionary transitions is controversial. It has proven difficult to produce a detailed classification within the living chordates. Attempts to produce evolutionary "family trees" show that many of the traditional classes are paraphyletic.[citation needed] A simplified cladogram places the hemichordates and echinoderms together as the sister group of the chordates, within which the cephalochordates branch first and the tunicates group with the vertebrates. While this has been well known since the 19th century, an insistence on only monophyletic taxa has resulted in vertebrate classification being in a state of flux. The majority of animals more complex than jellyfish and other cnidarians are split into two groups, the protostomes and deuterostomes, the latter of which contains chordates. It seems very likely that the 555-million-year-old Kimberella was a member of the protostomes. If so, this means the protostome and deuterostome lineages must have split some time before Kimberella appeared—at least 558 million years ago, and hence well before the start of the Cambrian 538.8 million years ago. Three enigmatic species that are possible very early tunicates, and therefore deuterostomes, were also found from the Ediacaran period – Ausia fenestrata from the Nama Group of Namibia, the sac-like Yarnemia ascidiformis, and one from a second new Ausia-like genus from the Onega Peninsula of northern Russia, Burykhia hunti. Results of a new study have shown possible affinity of these Ediacaran organisms to the ascidians. Ausia and Burykhia lived in shallow coastal waters slightly more than 555 to 548 million years ago, and are believed to be the oldest evidence of the chordate lineage of metazoans.
The Russian Precambrian fossil Yarnemia is identified as a tunicate only tentatively, because its fossils are nowhere near as well-preserved as those of Ausia and Burykhia, so this identification has been questioned.[citation needed] Fossils of one major deuterostome group, the echinoderms (whose modern members include starfish, sea urchins and crinoids), are quite common from the start of the Cambrian, 542 million years ago. The Mid Cambrian fossil Rhabdotubus johanssoni has been interpreted as a pterobranch hemichordate. Opinions differ about whether the Chengjiang fauna fossil Yunnanozoon, from the earlier Cambrian, was a hemichordate or chordate. Another fossil, Haikouella lanceolata, also from the Chengjiang fauna, is interpreted as a chordate and possibly a craniate, as it shows signs of a heart, arteries, gill filaments, a tail, a neural chord with a brain at the front end, and possibly eyes—although it also had short tentacles round its mouth. Haikouichthys and Myllokunmingia, also from the Chengjiang fauna, are regarded as fish. Pikaia, discovered much earlier (1911) but from the Mid Cambrian Burgess Shale (505 Ma), is also regarded as a primitive chordate. On the other hand, fossils of early chordates are very rare, since invertebrate chordates have no bones or teeth, and only one has been reported for the rest of the Cambrian. The best known and earliest unequivocally identified Tunicate is Shankouclava shankouense from the Lower Cambrian Maotianshan Shale at Shankou village, Anning, near Kunming (South China). The evolutionary relationships between the chordate groups and between chordates as a whole and their closest deuterostome relatives have been debated since 1890. Studies based on anatomical, embryological, and paleontological data have produced different "family trees". Some closely linked chordates and hemichordates, but that idea is now rejected. Combining such analyses with data from a small set of ribosome RNA genes eliminated some older ideas, but opened up the possibility that tunicates (urochordates) are "basal deuterostomes", surviving members of the group from which echinoderms, hemichordates and chordates evolved. Some researchers believe that, within the chordates, craniates are most closely related to cephalochordates, but there are also reasons for regarding tunicates (urochordates) as craniates' closest relatives. Since early chordates have left a poor fossil record, attempts have been made to calculate the key dates in their evolution by molecular phylogenetics techniques—by analyzing biochemical differences, mainly in RNA. One such study suggested that deuterostomes arose before 900 million years ago and the earliest chordates around 896 million years ago. However, molecular estimates of dates often disagree with each other and with the fossil record, and their assumption that the molecular clock runs at a known constant rate has been challenged. Traditionally, Cephalochordata and Craniata were grouped into the proposed clade "Euchordata", which would have been the sister group to Tunicata/Urochordata. More recently, Cephalochordata has been thought of as a sister group to the "Olfactores", which includes the craniates and tunicates. 
The matter is not yet settled.[citation needed] A specific relationship between vertebrates and tunicates is also strongly supported by two CSIs found in the proteins predicted exosome complex RRP44 and serine palmitoyltransferase, which are exclusively shared by species from these two subphyla but not by cephalochordates, indicating that vertebrates are more closely related to tunicates than to cephalochordates. A phylogenetic tree of the phylum shows probable evolutionary relationships between both extinct taxa, which are denoted with a dagger (†), and extant taxa; the taxa included are Cephalochordata (lancelets), Appendicularia (larvaceans), Thaliacea, Phlebobranchia, Aplousobranchia, Stolidobranchia, Myllokunmingiida (†), Anaspidomorphi (†), Conodonta (†), Myxini (hagfish), Hyperoartia (lampreys), Pteraspidomorphi (†), Thelodonti (†), Galeaspida (†), Pituriaspida (†), Osteostraci (†), "Placodermi" (†, paraphyletic), "Acanthodii" (†, paraphyletic), Holocephali, Selachimorpha (sharks), Batoidea (rays), Cladistia, Chondrostei, Holostei, Teleostei, Actinistia (coelacanths), Dipnoi (lungfish), Amphibia, Sauropsida, and Synapsida. Closest non-chordate relatives The closest relatives of the chordates are believed to be the hemichordates and Echinodermata, which together form the Ambulacraria. The Chordata and Ambulacraria together form the superphylum Deuterostomia. Hemichordates ("half chordates") have some features similar to those of chordates: branchial openings that open into the pharynx and look rather like gill slits; stomochords, similar in composition to notochords, but running in a circle round the "collar", which is ahead of the mouth; and a dorsal nerve cord—but also a smaller ventral nerve cord. There are two living groups of hemichordates. The solitary enteropneusts, commonly known as "acorn worms", have long proboscises and worm-like bodies with up to 200 branchial slits, are up to 2.5 metres (8.2 ft) long, and burrow through seafloor sediments. Pterobranchs are colonial animals, often less than 1 millimetre (0.039 in) long individually, whose dwellings are interconnected. Each filter feeds by means of a pair of branched tentacles, and has a short, shield-shaped proboscis. The extinct graptolites, colonial animals whose fossils look like tiny hacksaw blades, lived in tubes similar to those of pterobranchs. Echinoderms differ from chordates and their other relatives in three conspicuous ways: they possess bilateral symmetry only as larvae – in adulthood they have radial symmetry, meaning that their body pattern is shaped like a wheel; they have tube feet; and their bodies are supported by dermal skeletons made of calcite, a material not used by chordates. Their hard, calcified shells keep their bodies well protected from the environment, and these skeletons enclose their bodies, but are also covered by thin skins. The feet are powered by another unique feature of echinoderms, a water vascular system of canals that also functions as a "lung" and is surrounded by muscles that act as pumps. Crinoids are typically sessile and look rather like flowers (hence the common name "sea lilies"), and use their feather-like arms to filter food particles out of the water; most live anchored to rocks, but a few species can move very slowly. Other echinoderms are mobile and take a variety of body shapes, for example starfish and brittle stars, sea urchins and sea cucumbers. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Assembly_language] | [TOKENS: 7079] |
Contents Assembly language In computing, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine code instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Because assembly depends on the machine code instructions, each assembly language[nb 1] is specific to a particular computer architecture such as x86 or ARM. Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system,[nb 2] as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling. In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In "No Silver Bullet", Fred Brooks summarised the effects of the switch away from assembly language programming: "Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility." Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C. Assembly language syntax Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built-in and some user-defined. 
Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging. Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column-oriented syntax in the 1960s. Terminology Key concepts An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline code, instead of as called subroutines. Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Most of them are able to perform jump-instruction replacements, known as jump-sizing (long jumps replaced by short or relative jumps), in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize instruction scheduling to exploit the CPU pipeline as efficiently as possible. Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples. There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
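To make the syntax comparison above concrete, the following short sketch (added here for illustration, not taken from the article's own listings) writes the same x86 instruction in Intel syntax, as accepted by assemblers such as NASM, and in AT&T syntax, as accepted by the GNU Assembler; both forms assemble to the same machine code.

    ; Intel syntax (e.g. NASM): destination operand first, square brackets for memory operands
    add eax, [ebx]          ; add the 32-bit value at the address in EBX to register EAX

    # AT&T syntax (GNU Assembler): source operand first, % register prefix, "l" operand-size suffix
    addl (%ebx), %eax       # the same instruction, written for the GNU Assembler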
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file. In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target. The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster. Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2. More sophisticated high-level assemblers provide language abstractions such as: See Language design below for more details. A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed. For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001. This binary computer code can be made more human-readable by expressing it in hexadecimal as follows. Here, B0 means "Move a copy of the following value into AL", and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. 
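The code snippet referred to in the example above is not reproduced in this text. The following NASM-style fragment is an illustrative reconstruction (the names S1, S2, BKWD and FWD follow the description; the instructions themselves are arbitrary), showing a backward reference that a one-pass assembler can resolve immediately and a forward reference that it cannot:

    BKWD:  mov eax, 1       ; BKWD is defined here, before any reference to it
           ; ... intervening code ...
    S1:    jmp FWD          ; statement S1: forward reference, FWD not yet seen on the first pass
    S2:    jmp BKWD         ; statement S2: backward reference, BKWD already in the symbol table
    FWD:   mov eax, 2       ; FWD is defined only after the branch that uses it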
Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember. In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc. If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The[nb 3] hexadecimal form of this instruction is: The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL. In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable. Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".) Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow. The syntax of MOV can also be more complex as the following examples show. In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which. Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. 
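The machine-code and assembly listings that this passage refers to are likewise missing from this text. The following Intel-syntax sketch restates them, using only the encodings the passage itself gives (B0/B1/B2 for the immediate loads and 88 E0 for the register-to-register move); the more complex operand forms at the end are illustrative, and their exact encodings are chosen by the assembler:

    MOV AL, 61h        ; B0 61: load AL with the immediate value 61h (97 decimal)
    MOV CL, 61h        ; B1 61: opcode B1 selects CL as the destination
    MOV DL, 61h        ; B2 61: opcode B2 selects DL as the destination
    MOV AL, AH         ; 88 E0: copy the contents of register AH into register AL
    ; more complex forms of MOV, for which the assembler picks the opcode:
    MOV EAX, [EBX]     ; load EAX from the memory location addressed by EBX
    MOV [ESI+4], CL    ; store CL to memory at the address in ESI plus 4
    MOV BX, 1234h      ; load a 16-bit immediate value into BX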
However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments. Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences. Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation. Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD[nb 4] and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products. 
In 32-bit assembly language for Linux on an x86 processor, "Hello, world!" can be printed like this (a reconstructed listing appears after this passage). Language design There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of three types of instruction statements that are used to define program operations: opcode mnemonics (including extended mnemonics), data definitions, and assembly directives. Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0. Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for this purpose, and assemblers accept nop as a pseudo-opcode that encodes xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions. Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes. Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn. There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops. Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data. 
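The "Hello, world!" listing mentioned at the start of this passage is not reproduced in this extract. A minimal sketch in NASM-style syntax for 32-bit Linux follows, assuming the conventional int 0x80 system-call interface (sys_write = 4, sys_exit = 1); it also illustrates data-definition statements (db, equ) and directives (section, global).

    section .data
        msg     db  "Hello, world!", 0x0A   ; the string, plus a newline character
        msglen  equ $ - msg                 ; length computed by the assembler

    section .text
        global _start                       ; make the entry point visible to the linker

    _start:
        mov eax, 4          ; system-call number for sys_write
        mov ebx, 1          ; file descriptor 1 (standard output)
        mov ecx, msg        ; address of the string
        mov edx, msglen     ; number of bytes to write
        int 0x80            ; invoke the kernel

        mov eax, 1          ; system-call number for sys_exit
        xor ebx, ebx        ; exit status 0
        int 0x80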
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values. Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination). Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses. Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made. Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly[nb 5] a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s. Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM. In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. 
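A short sketch of how a macro name can stand in for a sequence of statements, using NASM's %macro syntax; the macro name WRITE and the label names are illustrative, not taken from the original article.

    %macro WRITE 2              ; define a macro taking two parameters
        mov eax, 4              ; sys_write
        mov ebx, 1              ; standard output
        mov ecx, %1             ; first parameter: address of the buffer
        mov edx, %2             ; second parameter: number of bytes
        int 0x80
    %endmacro

    ; After the definition, the macro name is used like a mnemonic;
    ; the assembler replaces each invocation with the instructions above.
        WRITE msg, msglen
        WRITE errmsg, errlen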
Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly. Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features. Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time. Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today. It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements. This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being in modern terms more word processing, text processing, than generating object code. 
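As a small sketch of assembly-time code generation in the spirit described above (an "unrolled" loop), using NASM's %assign and %rep preprocessor directives; the labels src and dst are assumed to be defined elsewhere.

    ; Generate an unrolled copy of four 32-bit words at assembly time.
    %assign i 0
    %rep 4
        mov eax, [src + i*4]    ; expanded with i = 0, 1, 2, 3
        mov [dst + i*4], eax
    %assign i i+1
    %endrep
    ; The assembler emits eight MOV instructions; no loop exists at run time.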
The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop. Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers. Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro: the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters. Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package. Another design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans. There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages. 
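The macro definition referred to earlier in this passage ("In the macro: ...") did not survive extraction. A reconstruction of the usual textbook form follows, using a generic MACRO/ENDM notation and a generic load mnemonic rather than any particular instruction set.

    foo     MACRO  a            ; 'a' is the formal parameter; 'b' is a global symbol
            load   a*b
            ENDM

    ; intended use:        foo x      expands to   load x*b
    ; problematic call:    foo a-c    expands to   load a-c*b
    ;                      (read as a-(c*b) by operator precedence, not the intended (a-c)*b)

    ; remedy: parenthesize the formal parameter in the definition ...
    foo2    MACRO  a
            load   (a)*b
            ENDM
    ; ... or have the caller write foo (a-c)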
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package. Use of assembly language When the stored-program computer was introduced, programs were written in machine code, and loaded into the computer from punched paper tape or toggled directly into memory from console switches.[citation needed] Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London, following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study. In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955. Assembly languages eliminated much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming, but by the 1980s (and the 1990s on microcomputers) their use had largely been supplanted by higher-level languages in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems (see § Current usage). Numerous programs were written entirely in assembly language. The Burroughs MCP (1961) was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software developed by large corporations. COBOL, FORTRAN and some PL/I eventually displaced assembly language, although a number of large organizations retained assembly-language application infrastructures well into the 1990s. Assembly language was the primary development language for 8-bit home computers such as the Apple II, Atari 8-bit computers, ZX Spectrum, and Commodore 64. Interpreted BASIC on these systems offered neither the execution speed nor the full access to hardware facilities needed to take complete advantage of the machines. Assembly language was the default choice for programming 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System. Key software for IBM PC compatibles such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet was written in assembly language. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. 
In the 1990s, assembly language was used to maximise performance from systems such as the Sega Saturn, and as the primary language for arcade hardware using the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam. There has been debate over the usefulness and performance of assembly language relative to high-level languages. Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization. As of July 2017[update], the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite some counter-examples. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers and assembly programmers alike. Increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging, making raw code execution speed a non-issue for many programmers. There are still certain computer programming domains in which the use of assembly programming is more common: Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behaviour is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Modern_Hebrew] | [TOKENS: 2880] |
Contents Modern Hebrew Modern Hebrew (endonym: עִבְרִית חֲדָשָׁה, romanized: 'Ivrit ḥadasha, IPA: [ivˈʁit χadaˈʃa] or [ʕivˈrit ħadaˈʃa]), also known as Israeli Hebrew or simply Hebrew, is the standard form of the Hebrew language spoken today. It is the only extant Canaanite language, as well as one of the oldest attested languages to be spoken as a first language in the modern day, on account of Hebrew being attested since the 2nd millennium BC. It uses the Hebrew Alphabet, an abjad script written from right-to-left. The current standard was codified as part of the revival of Hebrew in the late 19th and early 20th centuries, and now serves as the official and national language of the State of Israel, where it is predominantly spoken by over 10 million people. Thus, Modern Hebrew is nearly universally regarded as the most successful instance of language revitalization in history. A Northwest Semitic language within the Afroasiatic language family, Hebrew was spoken since antiquity as the vernacular of the Israelites until around the 3rd century BCE, when it was supplanted by a western dialect of the Aramaic language, the local or dominant languages of the regions Jews migrated to, and later Judeo-Arabic, Judaeo-Spanish, Yiddish, and other Jewish languages. Although Hebrew continued to be used for Jewish liturgy, poetry and literature, and written correspondence, it became extinct as a spoken language. By the late 19th century, Russian-Jewish linguist Eliezer Ben-Yehuda (formerly Eliezer Yitzhak Perelman) had begun a popular movement to revive Hebrew as an everyday language, motivated by his desire to preserve Hebrew literature and establish a distinct Jewish nationality and nationalism in the context of Zionism. Soon after, a large number of Yiddish and Judaeo-Spanish speakers were murdered in the Holocaust or fled to Israel, and many speakers of Judeo-Arabic emigrated to Israel in the Jewish exodus from the Muslim world, where many would adapt to Modern Hebrew. Currently, Hebrew is spoken by over 10 million people, counting native, fluent, and non-fluent speakers. Over 6.5 million of these speak it as their native language, the overwhelming majority of whom are Jews who were born in Israel. The rest is split: 2 million are immigrants to Israel; 1.5 million are Israeli Arabs, whose first language is usually Arabic; and half a million are expatriate Israelis or diaspora Jews. Under Israeli law, the organization that officially directs the development of Modern Hebrew is the Academy of the Hebrew Language, headquartered at the Hebrew University of Jerusalem. Name The most common scholarly term for the language is "Modern Hebrew" (עברית חדשה). Most people refer to it simply as "Hebrew" (עברית Hebrew pronunciation: [ivˈʁit]). The term "Modern Hebrew" has been described as "somewhat problematic" as it implies unambiguous periodization from Biblical Hebrew. Haiim B. Rosén [he] (חיים רוזן) supported the now widely used term "Israeli Hebrew" on the basis that it "represented the non-chronological nature of Hebrew". In 1999, Israeli linguist Ghil'ad Zuckermann proposed the term "Israeli" to represent the multiple origins of the language.: 325 Background The history of the Hebrew language can be divided into four major periods: Jewish contemporary sources describe Hebrew flourishing as a spoken language in the kingdoms of Israel and Judah, during about 1200 to 586 BCE. 
Scholars debate the degree to which Hebrew remained a spoken vernacular following the Babylonian captivity, when Old Aramaic became the predominant international language in the region. Hebrew died out as a vernacular language somewhere between 200 and 400 CE, declining after the Bar Kokhba revolt of 132–136 CE, which devastated the population of Judea. After the exile, Hebrew became restricted to liturgical and literary use. Revival Hebrew had been spoken at various times and for many purposes throughout the Diaspora. During the Old Yishuv, it had developed into a spoken lingua franca among Palestinian Jews. Eliezer Ben-Yehuda then led a revival of the Hebrew language as a mother tongue in the late 19th century and early 20th century. Modern Hebrew used Biblical Hebrew morphemes, Mishnaic spelling and grammar, and Sephardic pronunciation. Its acceptance by the early Jewish immigrants to Ottoman Palestine was caused primarily by support from the organisations of Edmond James de Rothschild in the 1880s and the official status it received in the 1922 constitution of the British Mandate for Palestine. Ben-Yehuda codified and planned Modern Hebrew using 8,000 words from the Bible and 20,000 words from rabbinical commentaries. Many new words were borrowed from Arabic, due to the language's common Semitic roots with Hebrew, but changed to fit Hebrew phonology and grammar, for example the words gerev (sing.) and garbayim (pl.) are now applied to 'socks', a diminutive of the Arabic ğuwārib ('socks'). In addition, early Jewish immigrants, borrowing from the local Arabs, and later immigrants from Arab lands introduced many nouns as loanwords from Arabic (such as nana, zaatar, mishmish, kusbara, ḥilba, lubiya, hummus, gezer, rayḥan, etc.), as well as much of Modern Hebrew's slang. Despite Ben-Yehuda's fame as the renewer of Hebrew, the most productive renewer of Hebrew words was poet Haim Nahman Bialik.[citation needed] One of the phenomena seen with the revival of the Hebrew language is that old meanings of nouns were occasionally changed for altogether different meanings, such as bardelas (ברדלס, a loanword from Koine Greek: πάρδαλις, romanized: párdalis, lit. 'leopard, panther'), which in Mishnaic Hebrew meant 'hyena', but in Modern Hebrew it now means 'cheetah'; or shezīf (שזיף) which is now used for 'plum', but formerly meant 'jujube'. The word kishū’īm (formerly 'cucumbers') is now applied to a variety of summer squash (Cucurbita pepo var. cylindrica), a plant native to the New World. Another example is the word kǝvīsh (כביש), which now denotes a street or a road, but is actually an Aramaic adjective meaning 'trodden down' or 'blazed', rather than a common noun. It was originally used to describe a blazed trail. The flower Anemone coronaria, called in Modern Hebrew kalanit (כלנית), was formerly called in Hebrew shoshanat ha-melekh ('the king's flower'). Classification Modern Hebrew is classified as an Afroasiatic language of the Semitic family, within the Canaanite branch of the Northwest Semitic subgroup. While Modern Hebrew is largely based on Mishnaic and Biblical Hebrew as well as Sephardi and Ashkenazi liturgical and literary tradition from the Medieval and Haskalah eras and retains its Semitic character in its morphology and in much of its syntax,[page needed] some scholars posit that Modern Hebrew represents a fundamentally new linguistic system, not directly continuing any previous linguistic state, though this is not the consensus among scholars. 
Modern Hebrew is considered to be a koiné language based on historical layers of Hebrew that incorporates foreign elements, mainly those introduced during the most critical revival period between 1880 and 1920, as well as new elements created by speakers through natural linguistic evolution. A minority of scholars argue that the revived language had been so influenced by various substrate languages that it is genealogically a hybrid with Indo-European. These theories are controversial and have not been met with general acceptance, and the consensus among a majority of scholars is that Modern Hebrew, despite its non-Semitic influences, can correctly be classified as a Semitic language. Alphabet Modern Hebrew is written from right to left using the Hebrew alphabet, which is an abjad, or consonant-only script of 22 letters based on the "square" letter form, known as Ashurit (Assyrian), which was developed from the Aramaic script. A cursive script is used in handwriting. When necessary, vowels are indicated by diacritic marks above or below the letters known as Niqqud, or by use of Matres lectionis, which are consonantal letters used as vowels. Further diacritics like Dagesh and Sin and Shin dots are used to indicate variations in the pronunciation of the consonants (e.g. bet/vet, shin/sin). The letters "צ׳", "ג׳", "ז׳", each modified with a Geresh, represent the consonants [t͡ʃ], [d͡ʒ], [ʒ]. The consonant [t͡ʃ] may also be written as "תש" and "טש". [w] is represented interchangeably by a simple vav "ו", non-standard double vav "וו" and sometimes by non-standard geresh modified vav "ו׳". Phonology Modern Hebrew has fewer phonemes than Biblical Hebrew but it has developed its own phonological complexity. Israeli Hebrew has 25 to 27 consonants, depending on whether the speaker has pharyngeals. It has 5 to 10 vowels, depending on whether diphthongs and vowels are counted, varying with the speaker and the analysis. Morphology Modern Hebrew morphology (formation, structure, and interrelationship of words in a language) is essentially Biblical. Modern Hebrew showcases much of the inflectional morphology of the classical upon which it was based. In the formation of new words, all verbs and the majority of nouns and adjectives are formed by the classically Semitic devices of triconsonantal roots (shoresh) with affixed patterns (mishkal). Mishnaic attributive patterns are often used to create nouns, and Classical patterns are often used to create adjectives. Blended words are created by merging two bound stems or parts of words. Syntax The syntax of Modern Hebrew is mainly Mishnaic but also shows the influence of different contact languages to which its speakers have been exposed during the revival period and over the past century. The word order of Modern Hebrew is predominately SVO (subject–verb–object). Biblical Hebrew was originally VSO (verb–subject–object), but drifted into SVO. 
In the modern language, a sentence may correctly be arranged in any order but its meaning might be hard to understand unless אֶת is used.[clarification needed] Modern Hebrew maintains classical syntactic properties associated with VSO languages:[clarification needed] it is prepositional, rather than postpositional, in marking case and adverbial relations, auxiliary verbs precede main verbs; main verbs precede their complements, and noun modifiers (adjectives, determiners other than the definite article ה- (ha), and noun adjuncts) follow the head noun; and in genitive constructions, the possessee noun precedes the possessor. Moreover, Modern Hebrew allows and sometimes requires sentences with a predicate initial. Sample text Lexicon Modern Hebrew has expanded its vocabulary effectively to meet the needs of casual vernacular, of science and technology, of journalism and belles-lettres. According to Ghil'ad Zuckermann: The number of attested Biblical Hebrew words is 8198, of which some 2000 are hapax legomena (the number of Biblical Hebrew roots, on which many of these words are based, is 2099). The number of attested Rabbinic Hebrew words is less than 20,000, of which (i) 7879 are Rabbinic par excellence, i.e. they did not appear in the Old Testament (the number of new Rabbinic Hebrew roots is 805); (ii) around 6000 are a subset of Biblical Hebrew; and (iii) several thousand are Aramaic words which can have a Hebrew form. Medieval Hebrew added 6421 words to (Modern) Hebrew. The approximate number of new lexical items in Israeli is 17,000 (cf. 14,762 in Even-Shoshan 1970 [...]). With the inclusion of foreign and technical terms [...], the total number of Israeli words, including words of biblical, rabbinic and medieval descent, is more than 60,000.: 64–65 Modern Hebrew has loanwords from Arabic (both from the local Palestinian dialect and from the dialects of Jewish immigrants from Arab countries), Aramaic, Yiddish, Judaeo-Spanish, German, Polish, Russian, English and other languages. Simultaneously, Israeli Hebrew makes use of words that were originally loanwords from the languages of surrounding nations from ancient times: Canaanite languages as well as Akkadian. Mishnaic Hebrew borrowed many nouns from Aramaic (including Persian words borrowed by Aramaic), as well as from Greek and to a lesser extent Latin. In the Middle Ages, Hebrew made heavy semantic borrowing from Arabic, especially in the fields of science and philosophy. Here are typical examples of Hebrew loanwords: See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Argument_from_ignorance] | [TOKENS: 891] |
Contents Argument from ignorance Argument from ignorance (Latin: argumentum ad ignorantiam), or appeal to ignorance,[a] is an informal fallacy where something is claimed to be true or false because of a lack of evidence to the contrary. The fallacy is committed when one asserts that a proposition is true because it has not yet been proven false or a proposition is false because it has not yet been proven true. If a proposition has not yet been proven true, one is not entitled to conclude, solely on that basis, that it is false, and if a proposition has not yet been proven false, one is not entitled to conclude, solely on that basis, that it is true. Another way of expressing this is that a proposition is true only if proven true, and a proposition is false only if proven false. If no proof is offered (in either direction), then the proposition can be called unproven, undecided, inconclusive, an open problem or a conjecture. Use The term was likely coined by philosopher John Locke in the late 17th century. In debates, appealing to ignorance is sometimes an attempt to shift the burden of proof. There is a debate over whether the argument from ignorance is always fallacious. It is generally accepted that there are only special circumstances in which this argument may not be fallacious. For example, with the presumption of innocence in legal cases, it would make sense to argue: It has not been proven that the defendant is guilty. Therefore, the defendant is not guilty. Logic The argument has the form: P has not been proven false. Therefore, P is true. Its reverse: P has not been proven true. Therefore, P is false. Here, P is a proposition, i.e. a statement declaring that something is true, or that it is false. Examples "Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn't exist."[b] Appeal to ignorance: the claim that whatever has not been proved false must be true, and vice versa. (e.g., There is no compelling evidence that UFOs are not visiting the Earth; therefore, UFOs exist, and there is intelligent life elsewhere in the Universe. Or: There may be seventy kazillion other worlds, but not one is known to have the moral advancement of the Earth, so we're still central to the Universe.) This impatience with ambiguity can be criticized in the phrase: absence of evidence is not evidence of absence. They never called me back. I guess I didn't get the job. This would follow the second form of the argument: P (I got the job) has not been proven true (via lack of callback). Therefore, ¬P (I didn't get the job) is true. While both parts may be true (in this case, you actually didn't get the job), the reasoning is fallacious because there are cases, even if unlikely, where you could get the job but not receive a callback, for example because of administrative delays, technical issues, or some kind of oversight by the hiring team. Related terms Contraposition, also known as transposition, is a logically valid rule of inference that allows the creation of a new proposition from the negation and reordering of an existing one. The method applies to any proposition of the type "If A then B" and says that negating all the variables and switching them back to front leads to a new proposition i.e. 
"If Not-B then Not-A" that is just as true as the original one and that the first implies the second and the second implies the first. Null result is a term often used in science to indicate evidence of absence. A search for water on the ground may yield a null result (the ground is dry); therefore, it probably did not rain. Related arguments Arguments from self-knowing take the form: In practice these arguments are often unsound and rely on the truth of the supporting premise. For example, the claim that If I had just sat on a wild porcupine then I would know it is probably not fallacious and depends entirely on the truth of the first premise (the ability to know it). See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol] | [TOKENS: 2275] |
Contents Internet Message Access Protocol In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by email clients to retrieve email messages from a mail server over a TCP/IP connection. IMAP is defined by RFC 9051. IMAP was designed with the goal of permitting complete management of an email box by multiple email clients; therefore, clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL/TLS (IMAPS) is assigned the port number 993. Virtually all modern e-mail clients and servers support IMAP, which along with the earlier POP3 (Post Office Protocol) are the two most prevalent standard protocols for email retrieval. Many webmail service providers such as Gmail and Outlook.com also support both IMAP and POP3. Email protocols The Internet Message Access Protocol is an application layer Internet protocol that allows an e-mail client to access email on a remote mail server. The current version is defined by RFC 9051. An IMAP server typically listens on well-known port 143, while IMAP over SSL/TLS (IMAPS) uses 993. Incoming email messages are sent to an email server that stores messages in the recipient's email box. The user retrieves the messages with an email client that uses one of a number of email retrieval protocols. While some clients and servers preferentially use vendor-specific, proprietary protocols, almost all support POP and IMAP for retrieving email – allowing free choice between many e-mail clients such as Pegasus Mail or Mozilla Thunderbird to access these servers, and allows the clients to be used with other servers. Email clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other characteristics of IMAP operation allow multiple clients to manage the same mailbox. Most email clients support IMAP in addition to Post Office Protocol (POP) to retrieve messages. IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache. History IMAP was designed by Mark Crispin in 1986 as a remote access mailbox protocol, in contrast to the widely used POP, a protocol for simply retrieving the contents of a mailbox. It went through a number of iterations before the current VERSION 4rev2 (IMAP4), as detailed below: The original Interim Mail Access Protocol was implemented as a Xerox Lisp Machine client and a TOPS-20 server. No copies of the original interim protocol specification or its software exist. Although some of its commands and responses were similar to IMAP2, the interim protocol lacked command/response tagging and thus its syntax was incompatible with all other versions of IMAP. The interim protocol was quickly replaced by the Interactive Mail Access Protocol (IMAP2), defined in RFC 1064 (in 1988) and later updated by RFC 1176 (in 1990). IMAP2 introduced the command/response tagging and was the first publicly distributed version. IMAP3 is an extremely rare variant of IMAP. It was published as RFC 1203 in 1991. It was written specifically as a counter proposal to RFC 1176, which itself proposed modifications to IMAP2. IMAP3 was never accepted by the marketplace. The IESG reclassified RFC1203 "Interactive Mail Access Protocol – Version 3" as a Historic protocol in 1993. The IMAP Working Group used RFC 1176 (IMAP2) rather than RFC 1203 (IMAP3) as its starting point. 
With the advent of MIME, IMAP2 was extended to support MIME body structures and add mailbox management functionality (create, delete, rename, message upload) that was absent from IMAP2. This experimental revision was called IMAP2bis; its specification was never published in non-draft form. An internet draft of IMAP2bis was published by the IETF IMAP Working Group in October 1993. This draft was based upon the following earlier specifications: unpublished IMAP2bis.TXT document, RFC 1176, and RFC 1064 (IMAP2). The IMAP2bis.TXT draft documented the state of extensions to IMAP2 as of December 1992. Early versions of Pine were widely distributed with IMAP2bis support (Pine 4.00 and later supports IMAP4rev1). An IMAP Working Group formed in the IETF in the early 1990s took over responsibility for the IMAP2bis design. The IMAP WG decided to rename IMAP2bis to IMAP4 to avoid confusion. Advantages over POP When using POP, clients typically connect to the e-mail server briefly, only as long as it takes to download new messages. When using IMAP4, clients often stay connected as long as the user interface is active and download message content on demand. For users with many or large messages, this IMAP4 usage pattern can result in faster response times. After successful authentication, the POP protocol provides a completely static view of the current state of the mailbox, and does not provide a mechanism to show any external changes in state during the session (the POP client must reconnect and re-authenticate to get an updated view). In contrast, the IMAP protocol provides a dynamic view, and requires that external changes in state, including newly arrived messages, as well as changes made to the mailbox by other concurrently connected clients, are detected and appropriate responses are sent between commands as well as during an IDLE command, as described in RFC 2177. See also RFC 3501 section 5.2 which specifically cites "simultaneous access to the same mailbox by multiple agents". Usually all Internet e-mail is transmitted in MIME format, allowing messages to have a tree structure where the leaf nodes are any of a variety of single part content types and the non-leaf nodes are any of a variety of multipart types. The IMAP4 protocol allows clients to retrieve any of the individual MIME parts separately and also to retrieve portions of either individual parts or the entire message. These mechanisms allow clients to retrieve the text portion of a message without retrieving attached files or to stream content as it is being fetched. Through the use of flags defined in the IMAP4 protocol, clients can keep track of message state: for example, whether or not the message has been read, replied to, or deleted. These flags are stored on the server, so different clients accessing the same mailbox at different times can detect state changes made by other clients. POP provides no mechanism for clients to store such state information on the server so if a single user accesses a mailbox with two different POP clients (at different times), state information—such as whether a message has been accessed—cannot be synchronized between the clients. The IMAP4 protocol supports both predefined system flags and client-defined keywords. System flags indicate state information such as whether a message has been read. Keywords, which are not supported by all IMAP servers, allow messages to be given one or more tags whose meaning is up to the client. 
IMAP keywords should not be confused with proprietary labels of web-based e-mail services which are sometimes translated into IMAP folders by the corresponding proprietary servers. IMAP4 clients can create, rename, and delete mailboxes (usually presented to the user as folders) on the server, and copy messages between mailboxes. Multiple mailbox support also allows servers to provide access to shared and public folders. The IMAP4 Access Control List (ACL) Extension (RFC 4314) may be used to regulate access rights. IMAP4 provides a mechanism for a client to ask the server to search for messages meeting a variety of criteria. This mechanism avoids requiring clients to download every message in the mailbox in order to perform these searches. Reflecting the experience of earlier Internet protocols, IMAP4 defines an explicit mechanism by which it may be extended. Many IMAP4 extensions to the base protocol have been proposed and are in common use. IMAP2bis did not have an extension mechanism, and POP now has one defined by RFC 2449. IMAP IDLE provides a way for the mail server to notify connected clients that there were changes to a mailbox, for example because a new mail has arrived. POP provides no comparable feature, and email clients need to periodically connect to the POP server to check for new mail. Disadvantages While IMAP remedies many of the shortcomings of POP, this inherently introduces additional complexity. Much of this complexity (e.g., multiple clients accessing the same mailbox at the same time) is compensated for by server-side workarounds such as Maildir or database backends. The IMAP specification has been criticised for being insufficiently strict and allowing behaviours that effectively negate its usefulness. For instance, the specification states that each message stored on the server has a "unique id" to allow the clients to identify messages they have already seen between sessions. However, the specification also allows these UIDs to be invalidated with almost no restrictions, practically defeating their purpose. IMAP maintains a mailbox structure (content, folder structure, individual message state, etc.) on the mail server, whereas POP maintains it on the user's local device. Thus, IMAP requires far more server side resources, incurring a significantly higher cost per mailbox. Clients can potentially consume large amounts of server resources when searching massive mailboxes if the server's storage, indexing, and search algorithms are not carefully implemented. IMAP4 clients need to maintain a TCP/IP connection to the IMAP server in order to be notified of the arrival of new mail. Notification of mail arrival is done through in-band signaling, which contributes to the complexity of client-side IMAP protocol handling somewhat. A private proposal, push IMAP, would extend IMAP to implement push e-mail by sending the entire message instead of just a notification. However, push IMAP has not been generally accepted and current IETF work has addressed the problem in other ways (see the Lemonade Profile for more information). Unlike some proprietary protocols which combine sending and retrieval operations, sending a message and saving a copy in a server-side folder with a base-level IMAP client requires transmitting the message content twice, once to SMTP for delivery and a second time to IMAP to store in a sent mail folder. 
This is addressed by a set of extensions defined by the IETF Lemonade Profile for mobile devices: URLAUTH (RFC 4467) and CATENATE (RFC 4469) in IMAP, and BURL (RFC 4468) in SMTP-SUBMISSION. In addition to this, Courier Mail Server offers a non-standard method of sending using IMAP by copying an outgoing message to a dedicated outbox folder. Security To cryptographically protect IMAP connections between the client and server, IMAPS on TCP port 993 can be used, which utilizes SSL/TLS. As of January 2018, TLS is the recommended mechanism. Alternatively, STARTTLS can be used to encrypt the connection when connecting to port 143 after initially communicating over plaintext. Dialog example This is an example IMAP connection as taken from RFC 3501 section 8: See also References Further reading External links |
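The dialog referenced above is not reproduced in this extract. The following is an illustrative sketch only, not the verbatim RFC 3501 text: a minimal session showing tagged client commands (C:) and server responses (S:), with a hypothetical user name and mailbox.

    S: * OK IMAP4rev1 server ready
    C: a001 LOGIN alice secret
    S: a001 OK LOGIN completed
    C: a002 SELECT INBOX
    S: * 3 EXISTS
    S: * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
    S: a002 OK [READ-WRITE] SELECT completed
    C: a003 FETCH 1 (FLAGS INTERNALDATE)
    S: * 1 FETCH (FLAGS (\Seen) INTERNALDATE "17-Jul-2020 02:44:25 +0000")
    S: a003 OK FETCH completed
    C: a004 LOGOUT
    S: * BYE IMAP4rev1 server terminating connection
    S: a004 OK LOGOUT completed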
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/AssemblyScript] | [TOKENS: 1432] |
Contents AssemblyScript AssemblyScript is a TypeScript-based programming language that is optimized for, and statically compiled to, WebAssembly (currently using asc, the reference AssemblyScript compiler). Resembling ECMAScript and JavaScript, but with static data types, the language is developed by the AssemblyScript Project with contributions from the AssemblyScript community. Overview In 2017, the availability of support for WebAssembly, a standard definition for a low-level bytecode and an associated virtual machine, became widespread among major web browsers, providing web programs a lower-level and potentially higher-performance compiling target for client-side programs and applications to execute within web browsers, along with the interpreted (and in practice dynamically compiled) JavaScript web scripting language. WebAssembly allows programs and code to be statically compiled ahead of time in order to run at potentially native-level or bare machine (metal) performance within web browsers, without the overhead of interpretation or the initial latency of dynamic compilation. With the adoption of WebAssembly in major web browsers, Alon Zakai, creator of Emscripten, an LLVM–Clang-based C and C++ compiler that targeted a subset of JavaScript named asm.js, added support for WebAssembly as a compiling target in Emscripten, allowing C and/or C++ programs and code to be compiled directly to WebAssembly. While Emscripten and similar compilers allow writing new code, or porting extant code, written in a high-level programming language such as C, C++, Go, and Rust to WebAssembly to achieve potentially higher, native-level execution performance in web browsers, this forces web developers accustomed to developing client-side web scripts and applications in ECMAScript–JavaScript (the de facto client-side programming language in web browsers) to use a different language to target WebAssembly than JavaScript. AssemblyScript, as a variant of TypeScript that is syntactically similar to JavaScript, allows developers accustomed to JavaScript to use a familiar language to target WebAssembly, potentially reducing the learning curve of a separate language that can be compiled to WebAssembly. Also, because AssemblyScript was designed to be an optimal source language for WebAssembly, the language's type system closely reflects that of WebAssembly, and the language provides standard low-level functions (typically implemented as macros) that map directly to WebAssembly instructions that mirror instructions available on modern processors such as single instruction, multiple data (SIMD) and vector instructions and more specialized instructions such as clz (count leading zero bits), ctz (count trailing zero bits), and popcnt (population count), used in applications such as encryption and cryptographic libraries. asc, the reference AssemblyScript compiler, is based on Binaryen [Wikidata], a back-end compiler toolchain developed by Alon Zakai that compiles to WebAssembly and is a component of Emscripten (which Zakai also developed). The asc compiler and other tooling are available via the npm package manager. 
While WebAssembly was originally designed for execution within web browsers, the development of WASI (WebAssembly System Interface), a community specification for a standard API that allows WebAssembly programs access to system calls and other operating system functions, has led to the development of WebAssembly runtime environments from projects such as Wasmtime [Wikidata] and Wasmer [Wikidata] that allow WebAssembly, and code written in languages such as AssemblyScript that can compile to it, to run in non-web environments as well. Compatibility with JavaScript AssemblyScript is compiled to WebAssembly modules, which can then be instantiated into client-side Web pages using standard JavaScript methods such as WebAssembly.compileStreaming and WebAssembly.instantiateStreaming just like standard WebAssembly binaries. Data passing between JavaScript and the compiled WebAssembly modules, as well as function calls between JavaScript and WebAssembly, are then the same as for any WebAssembly module. Because the AssemblyScript language is mostly a subset of TypeScript, it is theoretically possible to write an AssemblyScript program using this subset and compile it to both plain JavaScript and WebAssembly, using the TypeScript compiler and AssemblyScript compiler, respectively. This potentially allows portable code that can be used in either JavaScript or WebAssembly runtime systems (environments). Use As of May 2025[update], more than 29,000 projects hosted on GitHub are written, either wholly or partly, in AssemblyScript, with roughly 50,000 downloads of the AssemblyScript compiler per week via npm. In 2021, Webpack started using AssemblyScript to speed calculation of hash functions such as xxhash and md4 sources. This also made it possible to remove native dependencies. Reception Lead Emscripten developer Alon Zakai has characterized AssemblyScript as being "designed with WebAssembly and code size in mind. It's not an existing language that we are using for a new purpose, but it's a language designed for WebAssembly. It has great wasm-opt integration—in fact, it's built with it—and it's very easy to get good code size." Norwegian musician Peter Salomonsen, in a 2020 WebAssembly Summit talk titled, "WebAssembly Music," demonstrated the use of AssemblyScript for real-time compiling to WebAssembly in live electronic music synthesis, saying, "I chose AssemblyScript because it has high-level readability and low-level control; it's like a high-level language, but you get that low-level feeling, and you can even write direct WebAssembly intrinsics if you want to." Aaron Turner, a senior engineer at Fastly, a cloud computing services provider that uses WebAssembly for the company's Compute@Edge serverless compute environment, in a review of AssemblyScript wrote: While AssemblyScript requires stricter typing than TypeScript does, it sticks as close as possible to TypeScript syntax and semantics—which means that most JavaScript developers will find AssemblyScript comfortable to use—and it enables great support for the modern JavaScript ecosystem. For instance, the AssemblyScript compiler is available on npm, as well as common AssemblyScript tools and libraries like as-pect. AssemblyScript files also use TypeScript's ‘.ts’ file extension, and it includes proper typings for allowing AssemblyScript to piggy-back on TypeScript tooling, such as the TypeScript linter. With the right small tweaks, AssemblyScript can even be used with the TypeScript compiler. 
This is useful, as AssemblyScript offers a low-overhead entry-point for JavaScript developers to pick up a language to output WebAssembly—both in terms of learning to read and write AssemblyScript, and using much extant tooling that may already be in a JavaScript developer's workflow. AssemblyScript is often referred to in the WebAssembly community as a great gateway to picking up WebAssembly. It offers a large group of developers who already write applications for the web a path to pick up and learn WebAssembly. Even if you are starting from scratch and are not particularly familiar with JavaScript or TypeScript, AssemblyScript is a solid choice when picking a language to start outputting WebAssembly. However, Turner went on to cite the language's relative newness and thus its lack of some features available in larger, more complex and established programming languages as potential but temporary shortcomings of the language. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Judaeo-Spanish] | [TOKENS: 10191] |
Contents Judaeo-Spanish Judaeo-Spanish or Judeo-Spanish (autonym Djudeo-Espanyol, Hebrew script: גֿודֿיאו-איספאנייול), also known as Ladino or Judezmo, Sephardi or Spaniolit, is a Romance language derived from Castilian Old Spanish. It has been spoken by Sephardic Jews since they were expelled from the Iberian Peninsula after the Edict of Expulsion, and with their migrations it spread throughout the Ottoman Empire (the Balkans, Greece, Turkey, West Asia, and North Africa) as well as France, Italy, the Netherlands, Morocco, and England. It is today spoken mainly by Sephardic minorities in more than 30 countries, with most speakers residing currently in Israel. Although it has no official status in any country, it has been acknowledged as a minority language in Bosnia and Herzegovina, Israel, and France. In 2017, it was formally recognised by the Royal Spanish Academy. The core vocabulary of Judaeo-Spanish is Old Spanish, and it has numerous elements from the other old Romance languages of the Iberian Peninsula: Aragonese language, Old Catalan, Asturleonese, Galician-Portuguese, and Andalusi Romance. The language has been further influenced by Ottoman Turkish and Semitic vocabulary, such as Hebrew, Aramaic, and Arabic—especially in the domains of religion, law, and spirituality—and most of the vocabulary for new and modern concepts has been adopted through French and Italian. Furthermore, the language is influenced to a lesser degree by other local languages of the Balkans, such as Greek, Bulgarian, and Serbo-Croatian. Historically, the Rashi script and its cursive form Solitreo have been the main orthographies for writing Ladino. However, today it is mainly written with the Latin alphabet, though some other alphabets such as Hebrew and Cyrillic are still in use. Judaeo-Spanish has been known also by other names, such as: Español (Espanyol, Spaniol, Spaniolish, Espanioliko), Judió (Judyo, Djudyo) or Jidió (Jidyo, Djidyo), Judesmo (Judezmo, Djudezmo), Sefaradhí (Sefaradi) or Ḥaketía (in North Africa). In Turkey, and formerly in the Ottoman Empire, it has been traditionally called Yahudice in Turkish, meaning the 'Jewish language.' In Israel, Hebrew speakers usually call the language Ladino, Espanyolit or Spanyolit. Ladino, once the Jewish lingua franca of the Adriatic Sea, the Balkans, and the Middle East, and renowned for its rich literature, especially in Salonika, today is under serious threat of extinction. Most native speakers are elderly, and the language is not transmitted to their children or grandchildren for various reasons; consequently, all Judeo-Spanish-speaking communities are undergoing a language shift. In 2018, four native speakers in Bosnia were identified; however, two of them have since died, David Kamhi in 2021 and Moris Albahari in late 2022. In some expatriate communities in Spain, Latin America, and elsewhere, there is a threat of assimilation by modern Spanish. It is experiencing, however, a minor revival among Sephardic communities, especially in music. Name The Jewish scholar Joseph Nehama, author of the comprehensive Dictionnaire du judéo-espagnol, referred to the language as Judeo-Espagnol. The 1903 Hebrew–Judeo-Spanish Haggadah entitled "Seder Haggadah shel pesaḥ ʿim pitron be-lashon sefaradi" (סדר הגדה של פסח עם פתרון בלשון ספרדי), from the Sephardic community of Livorno, Italy, refers to the language used for explanation as the Sefaradi language. 
The rare Judeo-Spanish-language textbook entitled Nuevo Silibaryo Espanyol, published in Salonica in 1929, referred to the language as Espanyol and lingua Djudeo-Espanyola. The language is also called Judeo-Espanyol,[note 1] Judeoespañol, Sefardí, Judío, and Espanyol or Español sefardita; Haketia (from Arabic: حكى, romanized: ḥakà 'tell') refers to the dialect of North Africa, especially Morocco. Judeo-Spanish has also been referred to as Judesmo (also Judezmo, Djudesmo or Djudezmo). The dialect of the Oran area of Algeria was called Tétuani after the Moroccan city of Tétouan, since many Orani Jews came from there. In Israel, the language is known as Spanyolit or Espanyolit. The names Djidio, Kasteyano Muestro, and Spanyol de mozotros have also been proposed for the language; regional names include kastiyano viejo, sepharadit, ekseris romeka, yahudije, and musevije. An entry in Ethnologue claims, "The name 'Judesmo' is used by Jewish linguists and Turkish Jews and American Jews; 'Judeo-Spanish' by Romance philologists; 'Ladino' by laymen, initially in Israel; 'Haketia' by Moroccan Jews; 'Spanyol' by some others." That does not reflect the historical usage. In the Judaeo-Spanish press of the 19th and 20th centuries, the native authors referred to the language almost exclusively as Espanyol, which was also the name that its native speakers spontaneously gave to it for as long as it was their primary spoken language. More rarely, the bookish Judeo-Espanyol has also been used since the late 19th century. In recent decades in Israel, followed by the United States and Spain, the language has come to be referred to as Ladino (Ladino: לאדינו), literally meaning 'Latin'. This name for the language was promoted by the Autoridad Nasionala del Ladino. However, speakers of the language in Israel referred to their mother tongue as Espanyolit or Spanyolit. Native speakers of the language consider the name Ladino to be incorrect, having for centuries reserved the term for the "semi-sacred" language used in word-by-word translations from the Bible, which is distinct from the spoken vernacular. According to linguist Paul Wexler, Ladino is a written language that developed in the eighteenth century and is distinct from spoken Judeo-Spanish. According to the website of the Jewish Museum of Thessaloniki, the cultural center of Sephardic Judaism after the expulsion from Spain, Ladino is not spoken; rather, it is the product of a word-for-word translation of Hebrew or Aramaic biblical or liturgical texts made by rabbis in the Jewish schools of Spain. In these translations, a specific Hebrew or Aramaic word always corresponded to the same Spanish word, as long as no exegetical considerations prevented this. In short, Ladino is only Hebrew clothed in Spanish, or Spanish with Hebrew syntax. The famous Ladino translation of the Bible, the Biblia de Ferrara (1553), provided inspiration for the translation of numerous Spanish Christian Bibles. The derivation of the name Ladino is complicated. Before the expulsion of Jews from Spain, the word meant "literary Spanish" as opposed to other dialects,[citation needed] or "Romance" as distinct from Arabic. One derivation has Ladino as derived from the verb enladinar, meaning "to translate", from when Jews, Christians and Arabs translated works from Hebrew, Greek, and Arabic into Spanish during the times of Alfonso X of Castile. (The first European-language grammar and dictionary, of Spanish, referred to it as ladino or ladina.
In the Middle Ages, the word Latin was frequently used to mean simply 'language', particularly one understood: a latiner or latimer meant a translator.) Following the Expulsion, Jews spoke of "the Ladino" to mean the word-for-word translation of the Bible into Old Spanish. By extension, it came to mean that style of Spanish generally, in the same way that (among Kurdish Jews) Targum has come to refer to the Judeo-Aramaic languages and (among Arab Jews) sharḥ has come to mean Judeo-Arabic.[citation needed] Judaeo-Spanish Ladino should not be confused with the Ladin language (Italian: ladino), spoken in part of northeastern Italy. Ladin has nothing to do with Jews or with Spanish beyond being a Romance language, a property that it shares with French, Italian, Portuguese and Romanian. Origins At the time of the expulsion from Spain, the day-to-day language of the Jews of different regions of the peninsula was hardly, if at all, different from that of their Christian neighbours. There may have been some dialect mixing to form a sort of Jewish lingua franca. There was, however, a special style of Spanish used for purposes of study or translation, featuring a more archaic dialect, a large number of Hebrew and Aramaic loanwords and a tendency to render Hebrew word order literally (hal-layla haz-ze "this night" was rendered la noche la esta instead of the normal Spanish esta noche). As mentioned above, authorities confine the term Ladino to that style. Following the Expulsion of Jews from Spain, the process of dialect mixing continued, but Castilian Spanish remained by far the largest contributor. The daily language was increasingly influenced by both the language of study and the local non-Jewish vernaculars, such as Greek and Turkish. It came to be known as Judesmo and, in that respect, the development is parallel to that of Yiddish. However, many speakers, especially among community leaders, also had command of a more formal style, "castellano", which was closer to the Spanish at the time of the Expulsion. Source languages The grammar, the phonology, and about 60% of the vocabulary of Judaeo-Spanish are essentially Spanish but, in some respects, it resembles the dialects of southern Spain and South America rather than the dialects of central Spain. For example, it has yeísmo ("she" is eya/ella [ˈeja] in Judaeo-Spanish, instead of ella [ˈeʎa]) as well as seseo. In many respects, it reproduces the Spanish of the time of the Expulsion, rather than the modern variety, as it retains a number of archaic features. In some respects, the phonology of both the consonants and part of the lexicon is closer to Portuguese and Catalan than to modern Spanish. This is partially explained by direct influence, but also because Portuguese, Old Spanish and Catalan retained some of the characteristics of medieval Ibero-Romance languages that Spanish later lost. There was mutual influence with the Judaeo-Portuguese of the Portuguese Jews. Contrast Judaeo-Spanish daínda ('still') with Portuguese ainda (Galician ainda or aínda, Asturian aína or enaína) and Spanish aún, or the initial consonants in Judaeo-Spanish fija, favla ('daughter', 'speech'), Portuguese filha, fala (Galician filha or filla, fala, Asturian fía, fala, Aragonese filla, fabla, Catalan filla), Spanish hija, habla. This sometimes varied with dialect: in Judaeo-Spanish popular songs, both fijo and hijo ('son') are found.
The Judaeo-Spanish pronunciation of s as [ʃ] before a "k" sound or at the end of certain words (such as seis, pronounced [seʃ], for 'six') is shared with Portuguese (as spoken in Portugal, most of Lusophone Asia and Africa, and in a plurality of Brazilian varieties and registers with either partial or total forms of coda |S| palatalization) but not with Spanish. Like other Jewish vernaculars, Judaeo-Spanish incorporates many Hebrew and Aramaic words, mostly for religious concepts and institutions. Examples are haham ('rabbi', from Hebrew ḥakham) and kal ('synagogue', from Hebrew qahal). Some Judeo-Spanish words of Hebrew or Aramaic origin have more poetic connotations than their Spanish equivalents. Compare gaava ('pride, arrogance') from Hebrew ga'avá with arrogansya ('arrogance') from Spanish arrogancia. The majority of Judaeo-Spanish-speaking people resided in the Ottoman Empire, although a large minority lived on the northern coast of Morocco and in Algeria. As such, words of Turkish origin were incorporated into the local dialect of the language. An example is emrenear ('rejoice'), from Turkish imrenmek. Some of these words were themselves borrowed into Turkish from Arabic or Persian. Examples include bilbiliko ('nightingale'), from Persian (via Turkish) bülbül, and gam ('sorrow, anxiety, grief'), from Arabic (via Persian then Turkish) ġamm. The Turkish agentive suffix -ci (denoting a profession) was borrowed into Judaeo-Spanish as the suffix -djí. It can be found in words like halvadjí ('candyman'), derived from halva + -djí. Due to the influence of the Alliance Israélite Universelle in the westernization and modernization of Judaeo-Spanish-speaking communities, many words of French origin were adopted. Most of these words refer to Western European innovations and introductions. Examples include abazur ('lampshade'), from French abat-jour; fardate ('apply makeup'), from French se farder; and fusil ('gun'), from French fusil. Some French political and cultural elements are present in Judaeo-Spanish. For example, ir al Bismark ('to go to the Bismark') was a phrase used in some Judaeo-Spanish communities in the late 19th century to mean 'to go to the restroom', referring to the German Chancellor, Otto von Bismarck (an unpopular figure in France), as a euphemism for toilet. Because of the large number of Arabic words in Spanish generally, it is not always clear whether some of these words were introduced before the Expulsion or adopted later; modern Spanish replaced some of these loans with Latinisms after the Reconquista, whereas Judaeo-Spanish speakers had no motivation to do so. Some Arabic words were borrowed via Turkish or Persian. Haketia, the variety of Judaeo-Spanish spoken in the Maghreb, has substantial influence from Moroccan and Algerian Arabic, as well as local Amazigh languages. The Jewish community of Tetuan spoke its own particular dialect. The varieties of Judaeo-Spanish spoken in the Levant and Egypt have some influence from Levantine Arabic and Egyptian Arabic respectively. Judaeo-Spanish-speaking communities often incorporated words or phrases from surrounding languages. Greek, South Slavic, Italian, and Romanian borrowings can be found in those respective communities. Varieties A common way of dividing Judaeo-Spanish is by first splitting off Haketia, or "Western Judaeo-Spanish", from the other varieties, collectively referred to as "Eastern Judaeo-Spanish". Within Eastern Judaeo-Spanish, further division is made based on city of origin.
Aldina Quintana split Eastern Ladino into three groups (Northwest, Southeast, and Northeast). While not assigned to any of these groups, the variety of Judeo-Spanish spoken in Italy (Venice, Trieste, Ferrara) and Budapest most closely followed the Northwest group. Egyptian Judeo-Spanish (Alexandria, Cairo) followed more closely the patterns of the Southeast group. Levantine Judeo-Spanish (Jerusalem, Jaffa, Hebron) and Rhodesli Judeo-Spanish represented intermediate states, more similar to the Northeast group, although Levantine Judeo-Spanish phonology and syntax, especially its usage of [ħ], [ʕ], [ʔ], and [h], were distinctive enough to be defined separately. Differences between varieties usually involve phonology and lexicon. The dialect spoken in the Macedonian city of Bitola (traditionally referred to as Monastir) has relatively many lexical differences compared with other varieties of Judaeo-Spanish. An example of this can be seen in the word for 'carriage'. In many dialects, such as those that were spoken in Istanbul and Thessaloniki, araba is used, a loanword from Arabic via Turkish, while the Monastir dialect uses karrose, possibly from Italian. The dialect spoken on the Greek island of Rhodes has the distinctive feature that word-final o is pronounced as u. Phonology The number of phonemes in Judaeo-Spanish varies by dialect. Its phonemic inventory consists of 24–26 consonants and 5 vowels. As exemplified in the Source languages section above, much of the phonology of Judaeo-Spanish is similar to that of standard modern Spanish, with some exceptions. Morphology Judaeo-Spanish is distinguished from other Spanish dialects by features including its regular conjugations in the present, preterite, and imperfect tenses. Syntax Judaeo-Spanish follows Spanish for most of its syntax. (That is not true of the written calque language involving word-for-word translations from Hebrew, which scholars refer to as "Ladino", as described above.) Like Spanish, it generally follows a subject–verb–object word order, has a nominative–accusative alignment, and is considered a fusional or inflected language. Orthography Two Israeli organizations, the Akademia Nasionala del Ladino and the Autoridad Nasionala del Ladino, jointly regulate Judaeo-Spanish orthography. The organizations allow speakers to choose between the Hebrew script, which was historically the most prevalent writing system for the language, and the Latin script, which gained prominence after the fall of the Ottoman Empire. Printed works in Judaeo-Spanish use the Rashi script, whereas the handwritten language uses a cursive form of the Hebrew alphabet called Solitreo. In the Hebrew script, a silent ⟨א⟩ must precede word-initial vowels. Moreover, it is necessary to separate adjacent vowels with ⟨א⟩ or ⟨י⟩. Whereas ⟨א⟩ can separate any pair of vowels, ⟨י⟩ can only separate front vowels (/i/ and /e/, both represented by ⟨י⟩) from adjacent vowels. Furthermore, ⟨י⟩ cannot separate diphthongs that include a non-syllabic /u/ ([w]). Hebrew and Aramaic loanwords and morphemes (except those that were borrowed indirectly through other languages) are spelled according to Hebrew orthography. The rest of the language's lexicon is spelled according to a fixed set of sound–letter correspondences. This orthography uses an interpunct ⟨·⟩ to distinguish the sequence /s+x/ (written ⟨s·h⟩) from the /ʃ/ phoneme (written ⟨sh⟩). Writers may also use acute accents to mark irregular stress.
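The interpunct convention can be illustrated with a short sketch. The snippet below is a hypothetical illustration only, not part of any official orthographic tooling; the helper name and the example strings are invented for demonstration and are not attested Judaeo-Spanish words.

```python
# Hypothetical sketch: how the interpunct convention described above keeps the
# two-sound sequence /s/ + /x/ (written "s·h") distinct from the single
# phoneme /ʃ/ (written "sh") in romanized Judaeo-Spanish text.

def reading_of_s_h(word: str) -> str:
    """Report which reading the Latin-script orthography signals for an s-h letter sequence."""
    if "s·h" in word:   # interpunct present: /s/ followed by /x/, two separate sounds
        return "/s/ + /x/"
    if "sh" in word:    # no interpunct: the single phoneme /ʃ/
        return "/ʃ/"
    return "no s-h sequence"

# Invented example strings, used only to show the two spellings side by side.
print(reading_of_s_h("pasha"))    # 'sh'  -> /ʃ/
print(reading_of_s_h("pas·ha"))   # 's·h' -> /s/ + /x/
```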
Stress that follows the regular pattern is left unmarked. Prior to the adoption of the official orthographies, several other systems of writing Judaeo-Spanish had been used or proposed. History In the medieval Iberian Peninsula, now Spain and Portugal, Jews spoke a variety of Romance dialects. Jews in the Middle Ages were instrumental in the development of Spanish into a prestige language. Erudite Jews translated Arabic and Hebrew works, often translated earlier from Greek, into Spanish. Christians translated them again into Latin for transmission to Europe. Following the 1490s expulsion from Spain and Portugal, most of the Iberian Jews resettled in the Ottoman Empire. Jews in the Ottoman Balkans, Western Asia (especially Turkey), and North Africa (especially Morocco) developed their own Romance dialects, with some influence from Hebrew and other languages, which became what is now known as Judaeo-Spanish. Until recent times, the language was widely spoken throughout the Balkans, Turkey/Western Asia and North Africa, as Judaeo-Spanish had been brought there by the Jewish refugees. Later on, many Portuguese Jews also escaped to France, Italy, the Netherlands and England, establishing small groups in those nations as well, but these spoke Early Modern Spanish or Portuguese rather than Judaeo-Spanish. The contact among Jews of different regions and languages, including Catalan, Leonese and Portuguese, developed a unified dialect, differing in some aspects from the Spanish norm that was forming simultaneously in Spain, but some of the mixing may have already occurred in exile rather than in the Iberian Peninsula. In the 16th century, the development of Judeo-Spanish was significantly influenced by the extensive mobility of Sephardic Jews. By the end of the century, Spanish had become the dominant language of commerce for Sephardic communities across Italy and the eastern Mediterranean. This standardization was further supported by practices such as hiring tutors to teach Castilian in Hebrew script, as noted in a 1600 deposition from Pisa. Additionally, itinerant rabbis who preached in the vernacular contributed to the spread and standardization of Judeo-Spanish among diverse Sephardic congregations, including those in Greek- and Arabic-speaking regions. The closeness and mutual comprehensibility between Judaeo-Spanish and Spanish favoured trade among Sephardim, often relatives, from the Ottoman Empire to the Netherlands, and with the conversos of the Iberian Peninsula. Over time, a corpus of literature, both liturgical and secular, developed. Early literature was limited to translations from Hebrew. At the end of the 17th century, Hebrew was disappearing as the vehicle for rabbinic instruction. Thus, a vernacular literature appeared in the 18th century, including works such as Me'am Lo'ez and poetry collections. By the end of the 19th century, the Sephardim in the Ottoman Empire studied in schools of the Alliance Israélite Universelle. French became the language for foreign relations, as it did for Maronites, and Judaeo-Spanish drew from French for neologisms. New secular genres appeared, including more than 300 journals, as well as histories, theatre, and biographies.
Given the relative isolation of many communities, a number of regional dialects of Judaeo-Spanish appeared, many with only limited mutual comprehensibility, largely because of the adoption of large numbers of loanwords from the surrounding populations, including, depending on the location of the community, Greek, Turkish, Arabic and, in the Balkans, Slavic languages, especially Serbo-Croatian and Bulgarian. The borrowing in many Judaeo-Spanish dialects is so heavy that up to 30% of their vocabulary is of non-Spanish origin. Some words also passed from Judaeo-Spanish into neighbouring languages. For example, the word palavra 'word' (Vulgar Latin parabola; Greek parabole) passed into Turkish, Greek and Romanian with the meaning 'bunk, hokum, humbug, bullshit' in Turkish and Romanian and 'big talk, boastful talk' in Greek (compare the English word palaver). The language was known as Yahudice (Jewish language) in the Ottoman Empire. In the late 18th century, Ottoman poet Enderunlu Fazıl (Fazyl bin Tahir Enderuni) wrote in his Zenanname: "Castilians speak the Jewish language but they are not Jews." Judaeo-Spanish was the common language of Salonica during the Ottoman period. The city became part of Greece in 1912 and was subsequently renamed Thessaloniki. Despite the Great Fire of Thessaloniki and mass settlement of Christian refugees, the language remained widely spoken in Salonica until the deportation of 50,000 Salonican Jews in the Holocaust during the Second World War. According to the 1928 census, the language had 62,999 native speakers in Greece. The figure had dropped to 53,094 native speakers by 1940, although only 21,094 citizens "usually" spoke the language. The language was so prominent in Salonica that the most prestigious monument of the city was known by its Judeo-Spanish name, Las Incantadas (meaning "the enchanted women"). Judaeo-Spanish was also a language used in Donmeh rites (Dönme being a Turkish word for 'convert', referring to adepts of Sabbatai Tsevi who converted to Islam in the Ottoman Empire). An example is Sabbatai Tsevi esperamos a ti. Today, the religious practices and the ritual use of Judaeo-Spanish seem confined to elderly generations. The Castilian colonisation of Northern Africa favoured the role of polyglot Sephardim, who served as intermediaries between the Spanish colonizers and Arabic and Berber speakers. From the 17th to the 19th centuries, Judaeo-Spanish was the predominant Jewish language in the Holy Land, but its dialect was different in some respects from the one in Greece and Turkey. Some families have lived in Jerusalem for centuries and preserve Judaeo-Spanish for cultural and folklore purposes although they now use Hebrew in everyday life. An often-told Sephardic anecdote from Bosnia-Herzegovina has it that when a Spanish consulate was opened in Sarajevo in the interwar period, two Sephardic women passed by and, upon hearing a Catholic priest speaking Spanish, assumed that his language meant that he was Jewish. In the 20th century, the number of speakers declined sharply: entire communities were murdered in the Holocaust, and many of the remaining speakers, many of whom emigrated to Israel, adopted Hebrew. The government of the new nation-state encouraged instruction in Hebrew. Similarly, in the US, Sephardic Jews were encouraged to speak English rather than Judaeo-Spanish; as a result, the language was not passed down to younger generations.
In Turkey, where there is a large community of Sephardic Jews, Judaeo-Spanish was considered a language of little prestige; additionally, parents refused to teach their children the language, fearing that their children would develop a "Jewish accent" and therefore face discrimination. At the same time, Judaeo-Spanish aroused the interest of philologists, as it conserved language and literature from before the standardisation of Spanish. Judaeo-Spanish is in serious danger of extinction. As of 2011, the majority of fluent speakers are over the age of 70; the descendants of these speakers exhibit little to no knowledge of the language. Nevertheless, it is experiencing a minor revival among Sephardic communities, especially in music. In addition, Sephardic communities in several Latin American countries still use Judaeo-Spanish. There, the language is exposed to the different danger of assimilation to modern Spanish. Kol Yisrael and Radio Nacional de España hold regular radio broadcasts in Judaeo-Spanish. Law & Order: Criminal Intent showed an episode, titled "A Murderer Among Us", with references to the language. Films partially or totally in Judaeo-Spanish include the Mexican film Novia que te vea (directed by Guita Schyfter), The House on Chelouche Street, and Every Time We Say Goodbye. Efforts have been made to gather and publish modern Judaeo-Spanish fables and folktales. In 2001, the Jewish Publication Society published the first English translation of Judaeo-Spanish folktales, collected by Matilda Koen-Sarano, Folktales of Joha, Jewish Trickster: The Misadventures of the Guileful Sephardic Prankster. A survivor of Auschwitz, Moshe Ha-Elion, issued his translation into Judeo-Spanish of the ancient Greek epic Odyssey in 2012, in his 87th year, and later completed a translation of the sister epic, the Iliad, into his mother tongue. The language was initially spoken by the Sephardic Jewish community in India, but was later replaced with Judeo-Malayalam. Literature The first printed Judaeo-Spanish book was Me-'am lo'ez in 1730. It was a commentary on the Bible in the Judaeo-Spanish language. Most Jews in the Ottoman Empire knew the Hebrew alphabet but did not speak Hebrew. The printing of Me-'am lo'ez marked the emergence of large-scale printing activity in Judaeo-Spanish in the western Ottoman Empire and in Istanbul in particular. The earliest Judaeo-Spanish books were religious in nature, mostly created to maintain religious knowledge for exiles who could not read Hebrew; the first of the known texts is Dinim de shehitah i bedikah [The Rules of Ritual Slaughter and Inspection of Animals]; (Istanbul, 1510). Texts continued to be focussed on philosophical and religious themes, including a large body of rabbinic writings, until the first half of the 19th century. The largest output of secular Judaeo-Spanish literature occurred during the latter half of the 19th and the early 20th centuries in the Ottoman Empire. The earliest and most abundant form of secular text was the periodical press: between 1845 and 1939, Ottoman Sephardim published around 300 individual periodical titles. The proliferation of periodicals gave rise to serialised novels: many of them were rewrites of existing foreign novels into Judaeo-Spanish. Unlike the previous scholarly literature, they were intended for a broader audience of educated men and less-educated women alike. They covered a wider range of less weighty content, at times censored to be appropriate for family readings. 
Popular literature expanded to include love stories and adventure stories, both of which had been absent from Judaeo-Spanish literary canon. The literary corpus meanwhile also expanded to include theatrical plays, poems and other minor genres. Multiple documents made by the Ottoman government were translated into Judaeo-Spanish; usually translators used terms from Ottoman Turkish. Religious use The Jewish communities of Sarajevo, Bosnia-Herzegovina, and Belgrade, Serbia, still chant part of the Sabbath Prayers (Mizmor David) in Judaeo-Spanish. The Sephardic Synagogue Ezra Bessaroth in Seattle, Washington, United States, was formed by Jews from Turkey and the Greek island of Rhodes, and it uses the language in some portions of its Shabbat services. The Siddur is called Zehut Yosef and was written by Hazzan Isaac Azose. At Congregation Etz Ahaim of Highland Park, New Jersey, a congregation founded by Sephardic Jews from Salonika, a reader chants the Aramaic prayer B'rikh Shemay in Judaeo-Spanish before he takes out the Torah on Shabbat. That is known as Bendichu su Nombre in Judaeo-Spanish. Additionally, at the end of Shabbat services, the entire congregation sings the well-known Hebrew hymn Ein Keloheinu, which is Non Como Muestro Dio in Judaeo-Spanish. Non Como Muestro Dio is also included, alongside Ein Keloheinu, in Mishkan T'filah, the 2007 Reform prayerbook. El Dio Alto (El Dyo Alto) is a Sephardic hymn often sung during the Havdalah service, its currently popular tune arranged by Judy Frankel. Hazzan Isaac Azose, cantor emeritus of Synagogue Ezra Bessaroth and second-generation Turkish immigrant, has performed an alternative Ottoman tune. Rabbi Aryeh Kaplan translated some scholarly religious texts, including Me'am Loez into Hebrew, English or both. İzmir's grand rabbis Haim Palachi, Abraham Palacci, and Rahamim Nissim Palacci all wrote in the language and in Hebrew. Modern education and use In 1967, linguist Haïm Vidal Séphiha of the University of Paris became the first professor of Judaeo-Spanish in the world; courses of Judaeo-Spanish have been introduced in universities since then in other European countries, along with research centers dedicated to the study of the language. The National Authority of Ladino, dedicated to the study and promotion of Judaeo-Spanish was established in Jerusalem in 1997. As with Yiddish, Judaeo-Spanish is seeing a minor resurgence in educational interest in colleges across the United States and in Israel. Almost all American Jews are Ashkenazi, with a tradition based on Yiddish, rather than Judaeo-Spanish, and so institutions that offer Yiddish are more common. As of 2011[update] the University of Pennsylvania and Tufts University offered Judaeo-Spanish courses among colleges in the United States; INALCO in Paris, the University of the Basque Country and University of Granada in Spain were offering courses as well. In Israel, Moshe David Gaon Center for Ladino Culture at Ben-Gurion University of the Negev is leading the way in education (language and literature courses, Community oriented activities) and research (a yearly scientific journal, international congresses and conferences etc.). Hebrew University also offers courses. The Complutense University of Madrid also used to have courses. Prof. David Bunis taught Judaeo-Spanish at the University of Washington, in Seattle during the 2013–14 academic year. Bunis returned to the University of Washington for the Summer 2020 quarter. 
In Spain, the Spanish Royal Academy (RAE) in 2017 announced plans to create a Judaeo-Spanish branch in Israel in addition to 23 existing academies, in various Spanish-speaking countries, that are associated in the Association of Spanish Language Academies. Its stated purpose is to preserve Judaeo-Spanish. The move was seen as another step to make up for the Expulsion, following the offer of Spanish citizenship to Sephardim who had some connection with Spain. When French-medium schools operated by Alliance Israelite Universelle opened in the Ottoman Empire in the 1860s, the position of Judaeo-Spanish began to weaken in the Ottoman Empire areas. In time Judaeo-Spanish became perceived as a low status language, and Sephardic people began losing connections to that language. Esther Benbassa and Aron Rodrigue, authors of Sephardi Jewry: A History of the Judeo-Spanish Community, 14th–20th Centuries, wrote that the AIU institutions "gallicized" people who attended. As time progressed, Judaeo-Spanish language and culture declined. Although Mary Altabev in 1994 observed limited use of Ladino at home among educated Turkish Jews, Melis Alphan wrote in Hürriyet in 2017 that the Judaeo-Spanish language in Turkey was heading to extinction. As of 2023[update] the Ladino supplement of Şalom is the sole monthly newspaper in Ladino. El Amaneser is the sole all Ladino newspaper. Samples El djudeo-espanyol es la lingua favlada de los djudios sefardim arondjados de la Espanya enel 1492. Es una lingua derivada del espanyol o el kasteyano i favlada de 150.000 personas en komunitas en Israel, la Turkia, antika Yugoslavia, la Gresia, el Maruekos, Mayorka, la Amerika, entre munchos otros lugares. A tradition dating back to at least the 16th century exists of translating piyyutim into Judaeo-Spanish. Fragments from kinnot in Judeo-Spanish from probably the 16th century have been found.: 84 It is known that certain women, known as endechederas ("singers of dirges") would attend funerals to sing endechas (dirges), however, none of these endechas are known to have survived.: 90 A tradition of ballads, or romansas, also exists in the Judeo-Spanish tradition, and were predominantly sung by women. Their original purpose was to transmit news; later they became work songs as well as entertainment. They covered a wide range of topics, from childbirth to marriage to death; they could also cover secular topics, such as unhappily married women, incest, violence, and single mothers. In one ballad, a pregnant princess pretends to her mother that she is not pregnant, but rather has indigestion, then proceeds to give birth to her fourth child. Folklorists have been collecting romances and other folk songs, some dating from before the expulsion. Many religious songs in Judaeo-Spanish are translations of Hebrew, usually with a different tune.[citation needed] For example, here is Ein Keloheinu in Judaeo-Spanish: Non komo muestro Dio, Non komo muestro Sinyor, Non komo muestro Rey, Non komo muestro Salvador. etc. Other songs relate to secular themes such as love: Tu madre kuando te pario Y te kito al mundo, Korason ella no te dio Para amar segundo. Korason ella no te dió Para amar segundo. Adio, Adio kerida, No kero la vida, Me l'amargates tu. Adio, Adio kerida, No kero la vida, Me l'amargates tu. Va, bushkate otro amor, Aharva otras puertas, Aspera otro ardor, Ke para mi sos muerta. Aspera otro ardor, Ke para mi sos muerta. Adio, Adio kerida, No kero la vida, Me l'amargates tu. Adio, Adio kerida, No kero la vida, Me l'amargates tú. 
When your mother gave birth to you And brought you into the world She gave you no heart To love another. She gave you no heart To love another. Farewell, Farewell my love, I no longer want my life You made it bitter for me Farewell, Farewell my love, I no longer want my life You made it bitter for me Go, find yourself another lover, Knock at other doors, Wait for another passion For you are dead to me Wait for another passion For you are dead to me Farewell, Farewell my love, I no longer want my life You made it bitter for me Farewell, Farewell my love, I no longer want my life You made it bitter for me Por una ninya tan fermoza l'alma yo la vo a dar un kuchilyo de dos kortes en el korason entro. For a girl so beautiful I will give my soul a double-edged knife pierced my heart. No me mires ke'stó kantando es lyorar ke kero yo los mis males son muy grandes no los puedo somportar. Don't look at me; I am singing, it is crying that I want, my sorrows are so great I can't bear them. No te lo kontengas tu, fijika, ke sos blanka komo'l simit, ay morenas en el mundo ke kemaron Selanik. Quando el Rey Nimrod al campo salía mirava en el cielo y en la estrellería vido una luz santa en la djudería que havía de nascer Avraham Avinu. When King Nimrod was going out to the fields He was looking at heaven and at the stars He saw a holy light in the Jewish quarter [A sign] that Abraham, our father, must have been born. Avraham Avinu, Padre querido, Padre bendicho, luz de Yisrael. Abraham Avinu [our Father], dear father Blessed Father, light of Israel. Luego a las comadres encomendava que toda mujer que prenyada quedara si no pariera al punto, la matara que havía de nascer Abraham Avinu. Then he was telling all the midwives That every pregnant woman Who did not give birth at once was going to be killed because Abraham our father was going to be born. Avraham Avinu, Padre querido, Padre bendicho, luz de Yisrael. Abraham Avinu, dear father Blessed Father, light of Israel. La mujer de Terach quedó prenyada y de día en día le preguntava ¿De qué teneix la cara tan demudada? ella ya sabía el bien que tenía. Terach's wife was pregnant and each day he would ask her Why do you look so distraught? She already knew very well what she had. Avraham Avinu, Padre querido, Padre bendicho, luz de Yisrael. Abraham Avinu, dear father Blessed Father, light of Israel. En fin de nueve meses parir quería iva caminando por campos y vinyas, a su marido tal ni le descubría topó una meara, allí lo pariría After nine months she wanted to give birth She was walking through the fields and vineyards She did not even reveal it to her husband She found a cave; there, she would give birth. Avraham Avinu, Padre querido, Padre bendicho, luz de Yisrael. Abraham Avinu, dear father Blessed Father, light of Israel. En aquella hora el nascido avlava "Andavos mi madre, de la meara yo ya topó quen me alexara mandará del cielo quen me accompanyará porque so criado del Dio bendicho." In that hour the newborn was speaking 'Go away, my mother, from the cave; I have already found one who will take me away; He will send from heaven one who will accompany me, because I am raised by the blessed God.' Avraham Avinu, Padre querido, Padre bendicho, luz de Yisrael. Abraham Avinu, dear father Blessed Father, light of Israel. Yo era ninya de kaza alta No savia de sufrir Por kaer kon ti berbante Me metites a servir I was a girl from an upper-class family And I never knew of any suffering, Because I fell in love with you, you scoundrel You've brought me misfortune.
Anachronistically, Abraham—who in the Bible is an Aramean and the very first Hebrew and the ancestor of all who followed, hence his appellation Avinu (Our Father)—is in the Judeo-Spanish song born already in the djudería (modern Spanish: judería), the Jewish quarter. This makes Terach and his wife into Hebrews, as are the parents of other babies killed by Nimrod. In essence, unlike its Biblical model, the song is about a Hebrew community persecuted by a cruel king and witnessing the birth of a miraculous saviour—a subject of obvious interest and attraction to the Jewish people who composed and sang it in medieval Spain. The song attributes to Abraham elements that are from the story of Moses's birth (the cruel king killing innocent babies, with the midwives ordered to kill them, and the 'holy light' in the Jewish area), as well as from the careers of Shadrach, Meshach, and Abednego, who emerged unscathed from the fiery furnace, and of Jesus of Nazareth. Nimrod is thus made to conflate the role and attributes of three archetypal cruel and persecuting kings: Nebuchadnezzar, Pharaoh, and Herod. Another example is the Coplas de Purim, a folk song about Purim. Judaeo-Spanish vocabulary also includes words derived from Arabic, Hebrew, Persian, Portuguese, Turkish, and Greek. Modern singers Jennifer Charles and Oren Bloedow from the New York-based band Elysian Fields released a CD in 2001 called La Mar Enfortuna, which featured modern versions of traditional Sephardic songs, many sung by Charles in Judeo-Spanish. The American singer Tanja Solnik has released several award-winning albums that feature songs in the languages: From Generation to Generation: A Legacy of Lullabies and Lullabies and Love Songs. There are a number of groups in Turkey that sing in Judeo-Spanish, notably Janet – Jak Esim Ensemble, Sefarad, Los Pasharos Sefaradis and the children's chorus Las Estreyikas d'Estambol. There is a Brazilian-born singer of Sephardic origins, Fortuna, who researches and plays Judeo-Spanish music. The Israeli folk duo Esther & Abi Ofarim recorded the song "Yo M'enamori d'un Aire" for their 1968 album Up To Date. Esther Ofarim recorded several Judaeo-Spanish songs as a solo artist, including "Povereta Muchachica", "Noches Noches", "El Rey Nimrod", "Adio Querida" and "Pampaparapam". The Jewish Bosnian-American musician Flory Jagoda recorded two CDs of music taught to her by her grandmother, a Sephardic folk singer, among a larger discography. Following her death in 2021, gentile musicians in Bosnia have recorded music in Judaeo-Spanish as well. The cantor Ramón Tasat, who learned Judeo-Spanish at his grandmother's knee in Buenos Aires, has recorded many songs in the language, with three of his CDs focusing primarily on that music. The Israeli singer Yasmin Levy has also brought a new interpretation to the traditional songs by incorporating more "modern" sounds of Andalusian flamenco. Her work revitalising Sephardic music has earned Levy the Anna Lindh Euro-Mediterranean Foundation Award for promoting cross-cultural dialogue between musicians from three cultures. In Yasmin Levy's own words: I am proud to combine the two cultures of Ladino and flamenco, while mixing in Middle Eastern influences. I am embarking on a 500-year-old musical journey, taking Ladino to Andalusia and mixing it with flamenco, the style that still bears the musical memories of the old Moorish and Jewish-Spanish world with the sound of the Arab world.
In a way it is a 'musical reconciliation' of history. Notable music groups performing in Judeo-Spanish include Voice of the Turtle, Oren Bloedow and Jennifer Charles' La Mar Enfortuna and Vanya Green, who was awarded a Fulbright Fellowship for her research and performance of this music. She was recently selected as one of the top ten world music artists by the We are Listening International World of Music Awards for her interpretations of the music. Robin Greenstein, a New York-based musician, received a federal CETA grant in the 1980s to collect and perform Sephardic Music under the guidance of the American Jewish Congress. Her mentor was Joe Elias, noted Sephardic singer from Brooklyn. She recorded residents of the Sephardic Home for the Aged, a nursing home in Coney Island, New York, singing songs from their childhood. The voices recorded included Victoria Hazan, a well known Sephardic singer who recorded many 78's in Judaeo-Spanish and Turkish from the 1930s and 1940s. Two Judaeo-Spanish songs can be found on her Songs of the Season holiday CD, released in 2010 on Windy Records. German band In Extremo also recorded a version of the above-mentioned song Avram Avinu. The Israeli-German folk band Baladino has released two albums that have songs with lyrics in Judaeo-Spanish. See also References Notes Citations Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-Organized_Family_Crime-10] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media and technology; it is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot named Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired its sister company X Corp., the developer of the social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in the debt. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus.
On November 26, 2025, Elon Musk announced his plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, which is 10% of the data center's estimated power use. The Southern Environmental Law Center has stated the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. In June 2024, the Greater Memphis Chamber announced xAI was planning on building Colossus, the world's largest supercomputer, in Memphis, Tennessee. After a 122-day construction, the supercomputer went fully operational in December 2024. Local government in Memphis has voiced concerns regarding the increased usage of electricity, 150 megawatts of power at peak, and while the agreement with the city is being worked out, the company has deployed 14 VoltaGrid portable methane-gas powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI has been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Shelby County Health Department granted xAI an air permit for the project in July 2025. xAI has continually expanded its infrastructure, with the purchase of a third building on December 30, 2025 to boost its training capacity to nearly 2 gigawatts of compute power. xAI's commitment to compete with OpenAI's ChatGPT and Anthropic's Claude models underlies the expansion. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs, the restructure reorganises xAI into four primary development teams, one for the Grok app and others for its other features such as Grok Imagine. Grokipedia, X and API features would fall under more minor teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot that is integrated with X. xAI stated that when the bot is out of beta, it will only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced.[non-primary source needed] On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. 
On October 21, 2024, xAI released an applications programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a websearch function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300/mo. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release a great AI-generated game before the end of 2026. Valuation See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-Staff-2007-10] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Nederland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
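As a quick arithmetic check on the paired land-area figures quoted in this section (the conversion factor of roughly 2.589988 square kilometers per square mile is standard and not itself drawn from the source), the square-mile and square-kilometer values for the 48 contiguous states and the District of Columbia are mutually consistent:

\[ 3{,}119{,}885\ \text{mi}^2 \times 2.589988\ \tfrac{\text{km}^2}{\text{mi}^2} \approx 8{,}080{,}465\ \text{km}^2, \]

which agrees with the stated 8,080,470 km² to within rounding.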
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also among the most vulnerable to these hazards. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. State police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. treasuries market, and its linked eurodollar. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. From 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. is among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, it had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. De facto, English is the official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
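For context on the Population Clock figures cited above, the per-second and per-day rates describe the same growth: a day has 86,400 seconds, so a net gain of one person every 16 seconds works out to roughly the 5,400 people per day quoted (a back-of-the-envelope consistency check rather than an additional Census Bureau statistic):

\[ \frac{86{,}400\ \text{s/day}}{16\ \text{s/person}} = 5{,}400\ \text{people/day}. \]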
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and it spends more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government, such as the U.S. service academies, the Naval Postgraduate School, and the military staff colleges, do not charge tuition and are limited to military personnel and government employees. Although some student loan forgiveness programs are in place, student loan debt increased by 102% between 2010 and 2020, and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study found that Manhattan's Garment District has been closely identified with American fashion since the industry's emergence there in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and best-selling movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km²) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence] | [TOKENS: 6122] |
Contents Philosophy of artificial intelligence The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. The philosophy of artificial intelligence attempts to answer questions such as whether a machine can act intelligently, whether it can have a mind, mental states, and consciousness, and whether human and machine intelligence are essentially the same. Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion. Important propositions in the philosophy of AI include some of the following: Can a machine display general intelligence? Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers, evoking the question: does it matter whether a machine is really thinking, as a person thinks, rather than just producing outcomes that appear to result from thinking? The basic position of most AI researchers is summed up in the conjecture that appeared in the proposal for the Dartmouth workshop of 1956: that every aspect of learning or any other feature of intelligence can be described so precisely that a machine can be made to simulate it. Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible. It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description as to how it would exactly work. The account of robot tacit knowledge eliminates the need for a precise description altogether. The first step to answering the question is to clearly define "intelligence". Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks".
Turing's test extends this polite convention to machines. One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'". Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world." Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes. They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence. Hubert Dreyfus describes the brain-simulation argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10¹¹ neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors. Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory. However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering. In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote that a physical symbol system has the necessary and sufficient means for general intelligent action. This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).
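To make the flavor of this claim concrete, the following minimal sketch shows the kind of word-like, rule-driven symbol manipulation that classical symbolic AI programs relied on. It is an illustrative toy written in Python; the facts, rules, and function names are invented for this example and are not taken from Newell and Simon's work or from any particular system.

# A minimal sketch of high-level symbol manipulation in the spirit of the
# physical symbol system hypothesis. Facts are tuples of word-like symbols,
# e.g. ("socrates", "is-a", "human"); all reasoning is explicit rewriting
# of these symbol structures.

facts = {
    ("socrates", "is-a", "human"),
    ("human", "is-a", "mortal"),
}

def apply_rules(known):
    # One hand-written rule: "is-a" is transitive.
    derived = set(known)
    for (a, r1, b) in known:
        for (c, r2, d) in known:
            if r1 == "is-a" and r2 == "is-a" and b == c:
                derived.add((a, "is-a", d))
    return derived

def forward_chain(known):
    # Apply the rules repeatedly until no new facts appear (a fixed point).
    while True:
        new = apply_rules(known)
        if new == known:
            return known
        known = new

if __name__ == "__main__":
    for fact in sorted(forward_chain(facts)):
        print(fact)
    # Derives ("socrates", "is-a", "mortal") purely by manipulating symbols.

The point of the sketch is only the shape of the computation: every step is an explicit operation on discrete, human-readable symbols, and whether such manipulation could ever amount to thinking, rather than merely producing its outward results, is what the arguments that follow dispute.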
Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption". The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level—symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed. These arguments show that human thinking does not consist (solely) of high-level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required. In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. Philosophers John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is probably impossible for a Turing machine to do (see Halting problem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device. However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis." Stuart Russell and Peter Norvig agree that Gödel's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems.
It is not necessary to be able to prove everything in order to be an intelligent person. Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider the statement "Lucas cannot assert the truth of this statement": the statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless. After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing computable tasks implies that they cannot be sufficient for emulating the human mind. Therefore, Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing. Hubert Dreyfus argued that human intelligence and expertise depended primarily on fast intuitive judgements rather than step-by-step symbolic manipulation, and argued that these skills would never be captured in formal rules. Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing Machinery and Intelligence, where he had classified this as the "argument from the informality of behavior." Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'" Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge.
In fact, AI research in general has moved away from high-level symbol manipulation, towards new models that are intended to capture more of our intuitive reasoning. Cognitive science and psychology eventually came to agree with Dreyfus' description of human expertise. Daniel Kahneman and others developed a similar theory, identifying two "systems" that humans use to solve problems, which Kahneman called "System 1" (fast intuitive judgements) and "System 2" (slow, deliberate, step-by-step thinking). Although Dreyfus' views have been vindicated in many ways, the work in cognitive science and in AI was in response to specific problems in those fields and was not directly influenced by Dreyfus. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier." Can a machine have a mind, consciousness, and mental states? This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI": the claim that a suitably programmed computer would not merely simulate a mind but actually have one, with genuine mental states. Searle distinguished this position from what he called "weak AI": the claim that machines could act as if they were intelligent, without any assertion that they really have minds. Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered. Neither of Searle's two positions is of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence". (See artificial consciousness.) Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness". The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience", "self-awareness" or "ghost"—as in the Ghost in the Shell manga and anime series—to describe this essential human property). For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.
For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something. "It's not hard to give a commonsense definition of consciousness," observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking? Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem". A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person? Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness? John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly are not aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind. Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words "brains cause minds." Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill.
In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program. Responses to the Chinese room emphasize several different points. Is thinking a kind of computation? The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program (software) and a computer (hardware). The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with philosophers Hilary Putnam and Jerry Fodor. This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that, as Hobbes wrote, reasoning is nothing more than reckoning. In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim, as Stevan Harnad characterizes it, that mental states are simply implementations of the right computer programs. This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad). Other related questions If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species." "Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger. Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.
He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned. In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion. The question of whether a machine can be hostile (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form. The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction. One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity". He suggests that such an event may be somewhat or even very dangerous for humans. This is discussed by a philosophy called Singularitarianism. In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction. Some have suggested a need to build "Friendly AI", a term coined by Eliezer Yudkowsky, meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. Turing said "It is customary ... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set." Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as: Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new. Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence. Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes: In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates. The discussion on the topic has been reignited as a result of recent claims made by Google's LaMDA artificial intelligence system that it was sentient and had a "soul". LaMDA (Language Model for Dialogue Applications) is an artificial intelligence system that creates chatbots—AI robots designed to communicate with humans—by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible. The transcripts of conversations between scientists and LaMDA reveal that the AI system excels at this, providing answers to challenging questions about the nature of emotions, generating Aesop-style fables on the spot, and even describing its alleged fears. Nearly all philosophers doubt LaMDA's sentience. Views on the role of philosophy Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress. Conferences and literature The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller. The main bibliography on the subject, with several sub-sections, is on PhilPapers. A recent survey of the philosophy of AI is Müller (2025). |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Argumentum_ad_populum] | [TOKENS: 877] |
Contents Argumentum ad populum In argumentation theory, an argumentum ad populum (Latin for 'appeal to the people') is a fallacious argument that asserts a claim is true, good, or correct because many people allegedly think so. Alternative names Other names for the fallacy include appeal to the majority, appeal to the masses, and the bandwagon fallacy. Description Argumentum ad populum is a type of informal fallacy, specifically a fallacy of relevance, and is similar to an argument from authority (argumentum ad verecundiam). It uses an appeal to the beliefs, tastes, or values of a group of people, stating that because a certain opinion or attitude is held by a majority, or even everyone, it is therefore correct. Appeals to popularity are common in commercial advertising that portrays products as desirable because they are used by many people or associated with popular sentiments instead of communicating the merits of the products themselves. The inverse argument, that something that is unpopular must be flawed, is also a form of this fallacy. The fallacy is similar in structure to certain other fallacies that involve a confusion between the "justification" of a belief and its "widespread acceptance" by a given group of people. When an argument uses the appeal to the beliefs of a group of experts, it takes on the form of an appeal to authority; if the appeal relates to the beliefs of a group of respected elders or the members of one's community over a long time, then it takes on the form of an appeal to tradition. The philosopher Irving Copi defined argumentum ad populum differently from an appeal to popular opinion itself, as an attempt to rouse the "emotions and enthusiasms of the multitude". Douglas N. Walton argues that appeals to popular opinion can be logically valid in some cases, such as in political dialogue within a democracy. Reversals In some circumstances, a person may argue that the fact that Y people believe X to be true implies that X is false. This line of thought is closely related to the appeal to spite fallacy given that it invokes a person's contempt for the general populace or something about the general populace to persuade them that most are wrong about X. This ad populum reversal commits the same logical flaw as the original fallacy given that the idea "X is true" is inherently separate from the idea that "Y people believe X": "Y people believe in X as true, purely because Y people believe in it, and not because of any further considerations. Therefore X must be false." While Y people can believe X to be true for fallacious reasons, X might still be true. Their motivations for believing X do not affect whether X is true or false. Here Y stands for most people, a given quantity of people, or people of a particular demographic, and X stands for a statement that can be true or false. In general, the reversal usually goes: "Most people believe A and B are both true. B is false. Thus, A is false." The similar fallacy of chronological snobbery is not to be confused with the ad populum reversal. Chronological snobbery is the claim that if belief in both X and Y was popularly held in the past and if Y was recently proved to be untrue then X must also be untrue. That line of argument is based on a belief in historical progress and not—like the ad populum reversal is—on whether or not X and/or Y is currently popular. Valid uses Appeals to public opinion are valid in situations where consensus is the determining factor for the validity of a statement, such as linguistic usage and definitions of words.
Linguistic descriptivists argue that correct grammar, spelling, and expressions are defined by the language's speakers, especially in languages which do not have a central governing body. According to this viewpoint, if an incorrect expression is commonly used, it becomes correct. In contrast, linguistic prescriptivists believe that incorrect expressions are incorrect regardless of how many people use them. Special functions are mathematical functions that have well-established names and mathematical notations due to their significance in mathematics and other scientific fields. There is no formal definition of what makes a function a special function; instead, the term special function is defined by consensus. Functions generally considered to be special functions include logarithms, trigonometric functions, and the Bessel functions. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sociology_of_immigration] | [TOKENS: 2326] |
Contents Sociology of immigration The sociology of immigration involves the sociological analysis of immigration, particularly with respect to race and ethnicity, social structure, and political policy. Important concepts include assimilation, enculturation, marginalization, multiculturalism, postcolonialism, transnationalism and social cohesion. History Nativism has a long history in many societies. Global migration during the twentieth century grew particularly during the first half of the century. Due to World War I and World War II, European immigrants came to the United States (for example) in vast numbers. Particularly following the end (1918) of World War I, some Americans labeled European immigrants as dangerous to American culture. In 1924, the United States Congress passed the Immigration Act of 1924, which placed strict quotas on immigrants entering the United States. During the 1920s and 1930s, a woman's citizenship depended on that of her father or husband, so many women used marriage as a way to immigrate. As a result, many immigrant women were effectively tied to becoming either a wife or a mother. From the 1960s to 1990s, the stigma labeling immigrants as "job takers" and "criminals" subsided, and instead Americans began to consider immigrants as benefactors to the American economy, culture, and political system. Although the negative labels that immigrants were given during the first half of the twentieth century influenced their actions in society and self-perceptions (known as labeling theory in sociology), immigrants now began to assimilate more easily into society and to form strong social networks that contributed to their acquisition of social capital—the "information, knowledge of people or things, and connections that help individuals enter, gain power in, or otherwise leverage social networks". Sociologists have studied immigration closely in the twenty-first century. In the United States, whereas the majority of immigrants during the early twentieth century came from Europe, the twenty-first century has witnessed the arrival of immigrants predominantly from Asia, the Middle East, and Latin America. Since the early 2000s, sociologists have paid particular attention to the costs and benefits of this new, more diversified immigrant population for American institutions, culture, economic functions, and national security. After the attacks on the World Trade Center and the Pentagon on September 11, 2001, sociologists closely analyzed the symbolism of the increased anti-immigration rhetoric that Americans directed at Middle Eastern immigrants. Structural functionalist theorists have also studied the effects of mass migration—resulting from wars, economic insecurity, and terrorism—on the social institutions of host nations, on international law, and on assimilation rates. Additionally, sociologists using social-conflict theory have analyzed, in particular, labor-market conflicts allegedly resulting from increased competition between immigrants and native workers for jobs and social mobility.
Because rates of global immigration continue to increase, the field of sociology has a particular interest in monitoring twenty-first century immigration as it relates to the foundational theories of symbolic interactionism, social conflict, and structural functionalism. In immigration studies, social scientists assign distinct definitions to various immigrant generations. In sociology, the word "generation" is used as a "measure of distance from the 'old country'". This means that sociologists define people who move to the United States from another society as adults as "first generation" immigrants, their American-born children as "second generation" immigrants, and their children in turn as "third generation" immigrants. During the mid-twentieth century in the United States, the first, second, and third generations of immigrants displayed distinct characteristics. Second-generation immigrants, having immigrant parents who witnessed the historical events unfolding in the mid-twentieth century, developed a distinct social identity both in themselves and in popular American culture. In the late 1930s, American historian Marcus Lee Hansen observed "distinct differences in attitudes toward ethnic identity between the second generation and their third-generation children". Whereas the second generation was anxious to assimilate, the third generation was sentimentally invested in "ethnicity", which sociologist Dalton Conley defines as "one's ethnic quality or affiliation". However, twenty-first century immigrants now assimilate more than their twentieth-century predecessors, most notably, among immigrants who move to the United States, in the transition to using English as the primary language for communication. While contemporary immigrant generations share common ethnic backgrounds and cultures, there are differences in the level of social mobility, economic achievement, educational attainment, and familial relations among the members of those generations. Three sociological perspectives Symbolic interactionism is a "micro-level theory in which shared meanings, orientations, and assumptions form the basic motivations behind people's actions". This theory, as opposed to macrosociology, is focused on how face-to-face interactions create the social world. Symbolic interactionist theory has been used to understand how perceptions of immigrants are formed and constructed. Immigration into the United States has been on the rise since 1965. Public opinion polls have demonstrated "that the percentage of Americans who wanted immigration decreased to be very low immediately prior to 1965, but had begun an upward incline from 1965 to the late 1970s at which time it thereafter increased dramatically". One reason for the negative native response to increased immigration is the often negative image of immigrants presented by the media. Moreover, immigration legislation, such as the 1996 Personal Responsibility and Work Opportunity Reconciliation Act, increased anti-immigration sentiment, nativist rhetoric, and nativist social movements in the United States. Perceived group threat has also been shown to play an important role in explaining Americans' attitudes toward immigrants. Fear of foreigners altering aspects of the established culture, such as the native language, results in nativist sentiment and further polarization.
Together, these instances illustrate the significance of immigrants' master status in shaping how others perceive them, and how they perceive themselves. For example, the racial stigma that Mexican immigrants encounter in the United States "reinforces the low status and the self perceptions of Mexican Americans". When Mexican Americans internalize this perception of their race, they begin to act accordingly and indirectly reinforce this perception. The rise in Islamophobia in the United States after the attacks on the World Trade Center is an example of symbolic interactionism in practice. After the attacks on the World Trade Center on September 11, 2001, "Arabs and Muslims (as well as Latinos, South Asians, and other individuals who were mistakenly perceived to be Arab or Muslim based on their skin color, dress, or organizational affiliations) suffered an unprecedented outbreak of backlash violence" because of assumptions by others that they were terrorists who intended to do harm to Americans. In the days and months following the 9/11 attacks, Muslims and Arabs were subject to hate crimes based on personal characteristics such as their clothing, accent, facial hair, and skin tone. From a symbolic interactionist perspective, the violent attacks against Arabs and Muslims resulted from the shared assumptions and meanings that Americans attributed to Arab and Muslim people and culture. Social conflict theory is a sociological perspective that views society as a constant struggle for power and resources. This theory holds that competition between competing interests is a central feature of society. Social conflict theorists believe that competition for power and resources results in social change. Since the early nineteenth century, advocates and opponents of immigration have analyzed the economic effects of immigration on national economies and workforces. Opponents of national increases in immigration rates have argued that restricting immigration "improves the economic well-being of native workers". Immigration, opponents argue, causes unemployment for native workers. The reasoning behind this argument is that immigrant peoples compete with the native peoples for jobs and resources. On this view, more jobs go to immigrant workers because it often costs employers less to hire a recently arrived immigrant, even a highly skilled one who speaks little English, than a low-skilled native worker. However, advocates of immigration argue that immigration improves a nation's economy, since more people enter the workforce, resulting in higher productivity and increased competition in the labor market. Additionally, proponents argue that the native population benefits from immigration since "immigrants increase the demand for goods and services produced by native workers and firms". Social conflict theorists suggest that the competition between native workers and immigrant workers, for economic achievement and social mobility, is at the crux of the immigration debate as it relates to economics. A common fear is that immigration will alter the native culture of a nation. In the discipline of sociology, "culture" is defined as a "set of beliefs, traditions, and practices". Structural functionalism is a sociological perspective "claiming that every society has certain structures that exist to fulfill some set of necessary functions". 
Drawing on the ideas of sociologist Émile Durkheim, structural functionalists think of society as a living organism—similar to the nineteenth-century theory of organicism. Regarding the economy of a society, immigrants play a prominent role in maintaining, disrupting, or otherwise contributing to social cohesion. For example, since the 1980s and 1990s, the American economy has favored workers who have valuable skills to offer. If immigrants to the United States, for example, have valuable skills to offer, they may bring attributes that "increase the chances of economic success in the United States, such as the language and culture of the American workplace". The human capital and physical resources that immigrants may have to offer can complement those that already exist in the American economy. Structural functionalists believe that, whether the effects are positive or negative, immigration significantly impacts the level of social cohesion in the workplace. This analysis of social cohesion is closely related to the work of sociologist Émile Durkheim. Sociologists utilizing structural functionalism would explain that immigration serves the function of a unifier for the immigrant population in a foreign society. Especially in the nineteenth century and early twentieth century, immigrants in the United States tended to socialize with people of similar ethnic backgrounds in order to experience group solidarity during a time of intense resocialization. This feeling of group solidarity led to increased social capital, which held people together and decreased the sense of anomie among immigrants; anomie is a "sense of aimlessness or despair that arises when we can no longer reasonably expect life to be predictable". Immigration, therefore, served as a mechanism for social networks to be built among immigrant populations during a period of intense resocialization and prevalent cases of anomic suicide. Transnationalism A more contemporary sociological analysis of immigration can be explored within the concept of transnationalism. This analysis is perhaps more concerned with the relational dimensions of immigration, particularly the ways in which families and relationships are maintained when members migrate to another country. Theorist Zlatko Skrbis argues that within a transnational network of families, the patterns of migration are intertwined with notions of 'emotion' and 'belonging'. Sociology of peace, war, and social conflict In this subfield, the forced mass exodus of refugees from one state toward a hostile state is analyzed as the use of refugees as a "weapon." See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Spheroid#Area] | [TOKENS: 1811] |
Contents Spheroid A spheroid, also known as an ellipsoid of revolution or rotational ellipsoid, is a quadric surface obtained by rotating an ellipse about one of its principal axes; in other words, an ellipsoid with two equal semi-diameters. A spheroid has circular symmetry. If the ellipse is rotated about its major axis, the result is a prolate spheroid, elongated like a rugby ball. The American football is similar but has a pointier end than a spheroid could have. If the ellipse is rotated about its minor axis, the result is an oblate spheroid, flattened like a lentil or a plain M&M. If the generating ellipse is a circle, the result is a sphere. Due to the combined effects of gravity and rotation, the figure of the Earth (and of all planets) is not quite a sphere, but instead is slightly flattened in the direction of its axis of rotation. For that reason, in cartography and geodesy the Earth is often approximated by an oblate spheroid, known as the reference ellipsoid, instead of a sphere. The current World Geodetic System model uses a spheroid whose radius is 6,378.137 km (3,963.191 mi) at the Equator and 6,356.752 km (3,949.903 mi) at the poles. The word spheroid originally meant "an approximately spherical body", admitting irregularities even beyond the bi- or tri-axial ellipsoidal shape; that is how the term is used in some older papers on geodesy (for example, referring to truncated spherical harmonic expansions of the Earth's gravity geopotential model). Equation The equation of a tri-axial ellipsoid centred at the origin with semi-axes a, b and c aligned along the coordinate axes is $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$. The equation of a spheroid with z as the symmetry axis is given by setting a = b: $\frac{x^2+y^2}{a^2}+\frac{z^2}{c^2}=1$. The semi-axis a is the equatorial radius of the spheroid, and c is the distance from centre to pole along the symmetry axis. There are two possible cases: c < a (an oblate spheroid) and c > a (a prolate spheroid). The case of a = c reduces to a sphere. Properties The equatorial circumference of a spheroid is measured around its equator and is given as: $C_\text{e} = 2\pi a$. The meridional or polar circumference of a spheroid is measured through its poles and is given as: $C_\text{p} = 4a\int_0^{\pi/2}\sqrt{1-e^2\sin^2\theta}\,d\theta$. The volumetric circumference of a spheroid is the circumference of a sphere of equal volume as the spheroid and is given as: $C_\text{v} = 2\pi\sqrt[3]{a^2 c}$. An oblate spheroid with c < a has surface area $S_\text{oblate} = 2\pi a^2\left(1+\frac{1-e_o^2}{e_o}\operatorname{artanh} e_o\right)$, where $e_o^2 = 1-\frac{c^2}{a^2}$. A prolate spheroid with c > a has surface area $S_\text{prolate} = 2\pi a^2\left(1+\frac{c}{a\,e_p}\arcsin e_p\right)$, where $e_p^2 = 1-\frac{a^2}{c^2}$. In both cases, $e_o$ and $e_p$ may be identified as the eccentricity (see ellipse). These formulas are identical in the sense that the formula for $S_\text{oblate}$ can be used to calculate the surface area of a prolate spheroid and vice versa. However, $e_o$ then becomes imaginary and can no longer directly be identified with the eccentricity. Both of these results may be cast into many other forms using standard mathematical identities and relations between parameters of the ellipse. The volume inside a spheroid (of any kind) is $V = \frac{4}{3}\pi a^2 c$. If A = 2a is the equatorial diameter, and C = 2c is the polar diameter, the volume is $V = \frac{\pi}{6}A^2 C$. Let a spheroid be parameterized as $\boldsymbol{\sigma}(\beta,\lambda) = (a\cos\beta\cos\lambda,\ a\cos\beta\sin\lambda,\ c\sin\beta)$, where β is the reduced latitude or parametric latitude and λ is the longitude, with domain −π/2 < β < +π/2 and −π < λ < +π, respectively. 
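As a rough worked example, substituting the WGS84 radii quoted above (a ≈ 6378.137 km, c ≈ 6356.752 km) into the oblate-spheroid formulas gives approximately:
$e_o^2 = 1 - c^2/a^2 \approx 0.006694, \quad e_o \approx 0.0818$
$S_\text{oblate} = 2\pi a^2\left(1 + \tfrac{1-e_o^2}{e_o}\operatorname{artanh} e_o\right) \approx 5.1 \times 10^{8}\ \text{km}^2$
$V = \tfrac{4}{3}\pi a^2 c \approx 1.083 \times 10^{12}\ \text{km}^3$
These values agree with the commonly quoted surface area and volume of the Earth.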
Then, the spheroid's Gaussian curvature is $K(\beta) = \dfrac{c^2}{\left(a^2\sin^2\beta + c^2\cos^2\beta\right)^2}$ and its mean curvature is $H(\beta) = \dfrac{c\left(a^2 + a^2\sin^2\beta + c^2\cos^2\beta\right)}{2a\left(a^2\sin^2\beta + c^2\cos^2\beta\right)^{3/2}}$. Both of these curvatures are a function of latitude only and are always positive, so that every point on a spheroid is elliptic. The aspect ratio of an oblate spheroid/ellipse, c : a, is the ratio of the polar to equatorial lengths, while the flattening (also called oblateness) f is the ratio of the equatorial-polar length difference to the equatorial length: $f = \frac{a-c}{a} = 1 - \frac{c}{a}$. The first eccentricity (usually simply eccentricity, as above) is often used instead of flattening. It is defined by: $e = \sqrt{1 - \frac{c^2}{a^2}}$. The relations between eccentricity and flattening are: $e = \sqrt{2f - f^2}$ and $f = 1 - \sqrt{1 - e^2}$. All modern geodetic ellipsoids are defined by the semi-major axis plus either the semi-minor axis (giving the aspect ratio), the flattening, or the first eccentricity. While these definitions are mathematically interchangeable, real-world calculations must lose some precision. To avoid confusion, an ellipsoidal definition considers its own values to be exact in the form it gives. Occurrence and applications The most common shapes for the density distribution of protons and neutrons in an atomic nucleus are spherical, prolate, and oblate spheroidal, where the polar axis is assumed to be the spin axis (or direction of the spin angular momentum vector). Deformed nuclear shapes occur as a result of the competition between electromagnetic repulsion between protons, surface tension and quantum shell effects. Spheroids are common in 3D cell cultures. Rotating equilibrium spheroids include the Maclaurin spheroid and the Jacobi ellipsoid. Spheroid is also a shape of archaeological artifacts. The oblate spheroid is the approximate shape of rotating planets and other celestial bodies, including Earth, Saturn, Jupiter, and the quickly spinning star Altair. Saturn is the most oblate planet in the Solar System, with a flattening of 0.09796. See planetary flattening and equatorial bulge for details. Enlightenment scientist Isaac Newton, working from Jean Richer's pendulum experiments and Christiaan Huygens's theories for their interpretation, reasoned that Jupiter and Earth are oblate spheroids owing to the centrifugal force of their rotation. Earth's diverse cartographic and geodetic systems are based on reference ellipsoids, all of which are oblate. The prolate spheroid is the approximate shape of the ball used in American football and in rugby. Several moons of the Solar System approximate prolate spheroids in shape, though they are closer to triaxial ellipsoids. Examples are Saturn's satellites Mimas, Enceladus, and Tethys and Uranus's satellite Miranda. In contrast to being distorted into oblate spheroids via rapid rotation, celestial objects distort slightly into prolate spheroids via tidal forces when they orbit a massive body in a close orbit. The most extreme example is Jupiter's moon Io, which becomes slightly more or less prolate in its orbit due to a slight eccentricity, causing intense volcanism. The major axis of the prolate spheroid does not run through the satellite's poles in this case, but through the two points on its equator directly facing toward and away from the primary. This combines with the smaller oblate distortion from the synchronous rotation to cause the body to become triaxial. The term is also used to describe the shape of some nebulae such as the Crab Nebula. Fresnel zones, used to analyze wave propagation and interference in space, are a series of concentric prolate spheroids with principal axes aligned along the direct line-of-sight between a transmitter and a receiver. 
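As a rough worked example of these definitions, the WGS84 radii quoted earlier give:
$f = \frac{a-c}{a} \approx \frac{6378.137 - 6356.752}{6378.137} \approx 0.003353 \approx \frac{1}{298.26}$
$e = \sqrt{2f - f^2} \approx 0.0818$
which is consistent with the eccentricity obtained directly from $e^2 = 1 - c^2/a^2$.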
The atomic nuclei of the actinide and lanthanide elements are shaped like prolate spheroids. In anatomy, near-spheroid organs such as the testis may be measured by their long and short axes. Many submarines have a shape which can be described as a prolate spheroid. For a spheroid having uniform density, the moment of inertia is that of an ellipsoid with an additional axis of symmetry. Given a description of a spheroid as having a major axis c and minor axes a = b, the moments of inertia along these principal axes are C, A, and B. However, in a spheroid the minor axes are symmetrical, so A = B. Therefore, the inertial terms along the principal axes are: $A = B = \tfrac{1}{5}M\left(a^2 + c^2\right)$ and $C = \tfrac{2}{5}Ma^2$ (about the symmetry axis), where M is the mass of the body defined as $M = \tfrac{4}{3}\pi a^2 c\,\rho$, with ρ the uniform density. See also References External links |
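A small worked consequence of these expressions, assuming uniform density and the oblate case a > c (with C taken about the short symmetry axis and A about an equatorial axis), is:
$C - A = \tfrac{1}{5}M\left(2a^2 - a^2 - c^2\right) = \tfrac{1}{5}M\left(a^2 - c^2\right), \qquad \frac{C - A}{C} = \frac{a^2 - c^2}{2a^2} = \frac{e^2}{2}$
so for a homogeneous spheroid the ratio (C − A)/C, sometimes called the dynamical ellipticity, is half the squared eccentricity of its meridian ellipse.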
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-19] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and up until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases, Stratum IVb with a mudbrick wall on stone foundations and rounded exterior corners. In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local pottery imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. 
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod which was referred to as "al-Ludd" in Arabic served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth century in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596 Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax-rate of 33,3 % on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" =spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH/1641/2, the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda’s principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. 
A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000-70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6% as "Arab". Education According to CBS, 38 schools and 13,188 pupils are in the city. They are spread out as 26 elementary schools and 8,325 elementary school pupils, and 13 high schools and 4,863 high school pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co" - a subsidiary of the Strauss Group and "Kashev" - the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009-2010, Dor Guez held an exhibit, Georgeopolis, at the Petach Tikva art museum that focuses on Lod. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007. Notable people Twin towns-sister cities Lod is twinned with: See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Luri_language] | [TOKENS: 1372] |
Contents Luri language Luri (Northern Luri: لری, Southern Luri: لری) is a Southwestern Iranian language continuum spoken by the Lurs, an Iranian people native to West Asia. The Luri dialects are descended from Middle Persian and are Central Luri, Bakhtiari, and Southern Luri. This language is spoken mainly by the Bakhtiari and the Northern and Southern Lurs (Lorestan, Ilam, Kohgiluyeh and Boyer-Ahmad, Mamasani, Sepidan, Bandar Ganaveh, Bandar Deylam) in Iran. History The Encyclopedia of Islam calls Luri “an aberrant form of archaic Persian.” The language descends from either Middle Persian or Old Persian. It belongs to the “Perside southern Zagros group” (as opposed to Kurdish dialects of northern Zagros), and is lexically similar to modern Persian, differing mainly in phonology. According to the Encyclopædia Iranica, "All Lori dialects closely resemble standard Persian and probably developed from a stage of Persian similar to that represented in Early New Persian texts written in Perso-Arabic script. The sole typical Lori feature not known in early New Persian or derivable from it is the inchoative marker (see below), though even this is found in Judeo-Persian texts". The Bakhtiāri dialect may be closer to Persian. There are two distinct languages, Greater Luri (Lor-e bozorg), a.k.a. Southern Luri (including Bakhtiari dialect), and Lesser Luri (Lor-e kuček), a.k.a. Northern Luri. Anonby stated that Luri was not a single language but a Southwestern Iranian language continuum consisting of the Luristani, Bakhtiari, and Southern Luri languages, and itself was a language continuum between Kurdish and Persian. Anonby stated that the differences in the Luri dialects were big enough for them to be considered different languages. MacKinnon also claimed that the Luri dialects had different origins and also claimed Shushtari and Dezfuli as languages of the Luri family despite them traditionally being considered Persian. Some linguists came to the idea that the only reason Dezfuli and Shushtari were often considered Persian dialects was that there were no Luri tribes named Dezfuli and Shushtari, and that the structure of the Luri language was based particularly on tribal divisions rather than linguistic facts. They added that since the term Lur was originally regional, "Luri" was actually a demonym, and that outsiders referred to all languages of the region as Luri, unaware of its linguistic diversity. Furthermore, there was no evidence of a common proto-Lur dialect, with the shared features of the Luri dialects probably having developed separately although along parallel lines. The first major documentation of the Luri language was carried out by the Russian scholar, V. A. Zhukovski in 1883, where he transcribed 992 Bakhtiari couplets. However, he did not say the genealogical classification of Bakhtiari. After Zhukovski, the German linguist Oskar Mann published "Die Mundarten der Lur-stämme" in 1910, where he studied the Luri language and was the first linguist to claim that Luri, which was then thought to be a dialect of Kurdish, was a distinct language. Geography Luri dialects (Northern Luri [or Central Luri], Shuhani and Hinimini) are as a group the second largest language in the Lorestan province (around 25% of the population), mainly spoken in the eastern counties of the province (Khoramabad, Dorud, Borujerd). In the Ilam province (around 14.59% of the population) it is mostly spoken in villages in the southern parts of the province. Around 21.24% of Hamadan province speak Northern Luri. 
Southern Luri is a dialect of Luri spoken by Southern Lurs mainly in Kohgiluyeh and Boyer-Ahmad province, northwestern Fars province, eastern Khuzestan province and parts of Bushehr province. The Bakhtiari dialect is the main first language in the province of Chaharmahal and Bakhtiari (around 61.82%), except around Shahrekord, Borujen, Ben and Saman counties, where Persian, Turkic and the Chaharmahali dialect predominate. Around 7.15% of Isfahan province speak Bakhtiari. Internal classification The language consists of Central Luri, Bakhtiari, and Southern Luri. Central Luri is spoken in northern parts of the Luri communities, including eastern, central and northern parts of Luristan province, southern parts of Hamadan province mainly in Malayer, Nahavand and Tuyserkan counties, southern regions of Ilam province and southeastern parts of Markazi province. Bakhtiari is used by Bakhtiari people in South Luristan, Chaharmahal and Bakhtiari province, significant regions in the north and east of Khouzestan and western regions of Isfahan province. Finally, Southern Luri is spoken throughout Kohgiluyeh and Boyer-Ahmad province, and in western and central regions of Fars province, northern and western parts of Bushehr province and southeastern regions of Khouzestan. Several Luri communities are spread sporadically across the Iranian Plateau, e.g. in Khorasan (Beyranvand and Bakhtiari Luri descendants), Kerman, Guilan and Tehran provinces. Luri is not spoken only by Lurs: ethnic Persians in the Nahavand region spoke Northern Luri as their native language, and while the dialects of Shushtar, Dezful, and Shahr-e-Kord were closer to Luri, their speakers identified as ethnic Persians. Phonology Vocabulary In comparison with other Iranian languages, Luri has been less affected by foreign languages such as Arabic and Turkic. Nowadays, many ancient Iranian language characteristics are preserved and can be observed in Luri grammar and vocabulary. Owing to diverse regional and socio-ecological conditions and to longtime social interrelations with adjacent ethnic groups, especially Kurds and Persians, the different dialects of Luri, despite largely common characteristics, show significant differences. The northern dialect tends to have more Kurdish loanwords, while the southern dialects (Bakhtiari and Southern Luri) have been more exposed to Persian loanwords. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/AutoHotkey] | [TOKENS: 933] |
Contents AutoHotkey AutoHotkey is a free and open-source custom scripting language for Microsoft Windows, primarily designed to provide easy keyboard shortcuts or hotkeys, fast macro-creation and software automation to allow users of most computer skill levels to automate repetitive tasks in any Windows application. It can easily extend or modify user interfaces (for example, overriding the default Windows control key commands with their Emacs equivalents). The installation package includes an extensive help file; web-based documentation is also available. Features AutoHotkey scripts can be used to launch programs, open documents, and emulate keystrokes or mouse clicks and movements. They can also assign, retrieve, and manipulate variables, run loops, and manipulate windows, files, and folders. They can be triggered by a hotkey, such as a script that opens an internet browser when the user presses Ctrl+Alt+I on the keyboard. Keyboard keys can also be remapped and disabled—for example, so that pressing Ctrl+M produces an em dash in the active window. AutoHotkey also allows "hotstrings" that automatically replace certain text as it is typed, such as assigning the string "btw" to produce the text "by the way", or the text "%o" to produce "percentage of". Scripts can also be set to run automatically at computer startup, with no keyboard action required—for example, for performing file management at a set interval. More complex tasks can be achieved with custom data entry forms (GUI windows), working with the system registry, or using the Windows API by calling functions from DLLs. The scripts can be compiled into standalone executable files that can be run on other computers without AutoHotkey installed. The C++ source code can be compiled with Visual Studio Express. AutoHotkey allows memory access through pointers, as in C. Some uses for AutoHotkey: History The first public beta of AutoHotkey was released on November 10, 2003, after author Chris Mallett's proposal to integrate hotkey support into AutoIt v2 failed to generate response from the AutoIt community. Mallett built a new program from scratch basing the syntax on AutoIt v2 and using AutoIt v3 for some commands and the compiler. Later, AutoIt v3 switched from GPL to closed source because of "other projects repeatedly taking AutoIt code" and "setting themselves up as competitors". In 2010, AutoHotkey v1.1 (originally called AutoHotkey_L) became the platform for ongoing development of AutoHotkey. In late 2012, it became the official branch. Another port of the program is AutoHotkey.dll. A well known fork of the program is AutoHotkey_H, which has its own subforum on the main site. In July 2021, the first AutoHotkey v2 beta was released. The first release candidate was released on November 20, 2022, with the full release of v2.0.0 planned later in the year. On December 20, 2022, version 2.0.0 was officially released. On January 22, 2023, AutoHotkey v2 became the official primary version. AutoHotkey v1.1 became legacy and no new features were implemented, but this version was still supported by the site. On March 16, 2024, the final update of AutoHotkey v1.1 was released. AutoHotkey v1.1 has now reached its end of life. Examples The following script searches for a particular word or phrase using Google. After the user copies text from any application to the clipboard, pressing the configurable hotkey ⊞ Win+G opens the user's default web browser and performs the search. 
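A minimal sketch of such a script in AutoHotkey v1 syntax might look like the following (the Google URL and the absence of URL-encoding are simplifying assumptions, not quoted from the article):

    ; Win+G: search Google for whatever text is currently on the clipboard.
    #g::
    Run, https://www.google.com/search?q=%Clipboard%    ; Clipboard is AutoHotkey's built-in clipboard variable
    return

Pressing Win+G then opens the default browser on a Google search for the copied text; a fuller version would URL-encode the clipboard contents first.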
The following script defines a hotstring that enables the user to type afaik in any program and, when it is followed by an ending character, have it automatically replaced with "as far as I know" (see the sketch at the end of this entry). User-contributed features AutoHotkey extensions, interops and inline script libraries are available for use with and from other programming languages, including: Other major plugins enable support for: Malware When AutoHotkey is used to make standalone software for distribution, that software must include the part of AutoHotkey itself that understands and executes AutoHotkey scripts, as it is an interpreted language. Inevitably, some malware has been written using AutoHotkey. When anti-malware products attempt to flag items of malware that have been programmed using AutoHotkey, they sometimes falsely identify AutoHotkey as the culprit rather than the actual malware.[citation needed] See also References External links |
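The hotstring described at the start of this entry might, in a minimal AutoHotkey v1 form (an illustrative sketch rather than the article's exact script), be written as:

    ; Typing "afaik" followed by an ending character (space, punctuation, Enter) expands it.
    ::afaik::as far as I know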
======================================== |