[SOURCE: https://en.wikipedia.org/wiki/Bc_(programming_language)] | [TOKENS: 1544]
bc (programming language)

bc is an arbitrary-precision mathematical calculator program with an input language similar to C. It supports both an interactive command-line interface and script processing.

Overview

A typical interactive usage is typing the command bc at a Unix command prompt and entering a mathematical expression, such as (1 + 3) * 2, whereupon 8 will be output. While bc can work with arbitrary precision, it actually defaults to zero digits after the decimal point, so the expression 2/3 yields 0 (results are truncated, not rounded). This can surprise new bc users unaware of this fact. The -l option to bc sets the default scale (digits after the decimal point) to 20 and adds several additional mathematical functions to the language.

History

bc first appeared in Version 6 Unix in 1975. It was written by Lorinda Cherry of Bell Labs as a front end to dc, an arbitrary-precision calculator written by Robert Morris and Cherry. dc performed arbitrary-precision computations specified in reverse Polish notation. bc provided a conventional programming-language interface to the same capability via a simple compiler (a single yacc source file comprising a few hundred lines of code), which converted a C-like syntax into dc notation and piped the results through dc.

In 1991, POSIX rigorously defined and standardized bc. Four implementations of this standard survive today. The first is the traditional Unix implementation, a front end to dc, which survives in Unix and Plan 9 systems. The second is the free software GNU bc, first released in 1991 by Philip A. Nelson; the GNU implementation has numerous extensions beyond the POSIX standard and is no longer a front end to dc (it is a bytecode interpreter). The third is a re-implementation by OpenBSD in 2003. The fourth is an independent implementation by Gavin Howard that is included in Android, FreeBSD as of 13.3-RELEASE, and macOS as of 13.0.

Etymology

The original UNIX Version 6 manual does not explain what “bc” stands for. Various sources over the years have referred to it as “basic calculator”, “bench calculator” and “binary calculator”.

Implementations

The POSIX-standardized bc language is traditionally written as a program in the dc programming language to provide a higher level of access to the features of the dc language without the complexities of dc's terse syntax. In this form, the bc language contains single-letter variable, array and function names and most standard arithmetic operators, as well as the familiar control-flow constructs (if(cond)..., while(cond)... and for(init;cond;inc)...) from C. Unlike C, an if clause may not be followed by an else. Functions are defined using the define keyword, and values are returned from them using a return followed by the return value in parentheses. The auto keyword (optional in C) is used to declare a variable as local to a function. All numbers and variable contents are arbitrary-precision numbers whose precision (in decimal places) is determined by the global scale variable. The numeric base of input (in interactive mode), output and program constants may be specified by setting the reserved ibase (input base) and obase (output base) variables. Output is generated by deliberately not assigning the result of a calculation to a variable. Comments may be added to bc code by use of the C /* and */ (start and end comment) symbols.
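As a quick illustration of these features, here is a minimal sketch (illustrative, not from the POSIX specification; the single-letter name f is arbitrary) showing define, auto locals, the parenthesised return, the scale variable and output by non-assignment:

    /* f(n) computes n! iteratively; r and i are auto (local) variables */
    define f(n) {
        auto r, i
        r = 1
        for (i = 2; i <= n; i = i + 1) r = r * i
        return (r)
    }
    scale = 0     /* integer-only results */
    f(20)         /* not assigned, so the result is printed: 2432902008176640000 */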
Most POSIX bc operators behave exactly like their C counterparts. The modulus operators, % and %=, behave exactly like their C counterparts only when the global scale variable is set to 0, i.e. when all calculations are integer-only; otherwise the computation is done with the appropriate scale, a%b being defined as a-(a/b)*b. For example, with scale=0 the expression 5%3 yields 2, but with scale=20 it yields .00000000000000000002.

The ^ and ^= operators superficially resemble the C bitwise exclusive-or operators, but are in fact the bc integer exponentiation operators. Of particular note, the use of the ^ operator with negative numbers does not follow the C operator precedence: -2^2 gives the answer 4 under bc rather than −4. The C bitwise, Boolean and conditional operators are not available in POSIX bc.

The sqrt() function for calculating square roots is POSIX bc's only built-in mathematical function; other functions are available in an external standard library. The scale() function for determining the precision (as with the scale variable) of its argument and the length() function for determining the number of significant decimal digits in its argument are also built in. bc's standard math library (loaded with the -l option) contains functions for calculating sine, cosine, arctangent, natural logarithm, the exponential function and the two-parameter Bessel function J. Most standard mathematical functions (including the other inverse trigonometric functions) can be constructed using these. See the external links for implementations of many other functions.

The -l option changes the scale to 20, so things such as modulo may work unexpectedly. For example, writing bc -l and then the command print 3%2 outputs 0, but writing scale=0 after bc -l and then the command print 3%2 outputs 1.

Plan 9 bc

Plan 9 bc is identical to POSIX bc but for an additional print statement.

GNU bc

GNU bc derives from the POSIX standard and includes many extensions. It is entirely separate from dc-based implementations of the POSIX standard and is instead written in C. Nevertheless, it is fully backwards compatible: all POSIX bc programs will run unmodified as GNU bc programs. GNU bc variable, array and function names may contain more than one character, some more operators have been included from C, and, notably, an if clause may be followed by an else. Output is achieved either by deliberately not assigning the result of a calculation to a variable (the POSIX way) or by using the added print statement. Furthermore, a read statement allows the interactive input of a number into a running calculation. In addition to C-style comments, a # character causes everything after it until the next newline to be ignored. The value of the last calculation is always stored in the additional built-in variable last. The logical operators && (and), || (or) and ! (not) are additional to those in POSIX bc and are available for use in conditional statements (such as within an if statement). Note, however, that there are still no equivalent bitwise or assignment operations. All functions available in GNU bc are inherited from POSIX; no further functions are provided as standard with the GNU distribution.

Example code

Since the bc ^ operator only allows an integer power to its right, one of the first functions a bc user might write is a power function with a floating-point exponent; a sketch follows.
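A minimal sketch of such a function, assuming the program is run as bc -l so the library's natural logarithm l() and exponential e() are available (the name p is illustrative, and x must be positive):

    /* p(x, y) computes x^y for floating-point y, using x^y = e(y*ln(x)); requires x > 0 */
    define p(x, y) {
        return (e(y * l(x)))
    }

For example, after loading the library with bc -l, entering p(2, 0.5) prints the square root of 2 to the current scale.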
Both the power function above and the pi calculation below assume the standard library has been included (bc -l). Pi can be calculated using the built-in arctangent function, a(), since 4*a(1) equals pi; see the sketch after this section. Because the syntax of bc is similar to that of C, published numerical functions written in C can often be translated into bc quite easily, which immediately provides the arbitrary precision of bc. For example, in the Journal of Statistical Software (July 2004, Volume 11, Issue 5), George Marsaglia published C code for the cumulative normal distribution; with some necessary changes to accommodate bc's different syntax, and noting that the constant "0.9189..." is actually ln(2*PI)/2, it can be translated to GNU bc code.

Using bc in shell scripts

bc can be used non-interactively, with input through a pipe, which is useful inside shell scripts. In contrast, the bash shell only performs integer arithmetic. One can also use the here-string idiom (in bash, ksh, csh). Examples of these idioms are given in the sketch below.
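A minimal sketch of these idioms, assuming GNU bc and bash; the exact trailing digits of the pi output depend on the working scale, since bc truncates intermediate results:

    $ echo "scale=10; 4*a(1)" | bc -l     # pi from the built-in arctangent a()
    3.1415926532
    $ echo $((22/7))                      # bash arithmetic is integer-only
    3
    $ bc -l <<< "22/7"                    # here-string idiom; -l sets scale to 20
    3.14285714285714285714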
========================================
[SOURCE: https://en.wikipedia.org/wiki/Punjabi_language] | [TOKENS: 8422]
Punjabi language

Punjabi, sometimes spelled Panjabi, is an Indo-Aryan language native to the Punjab region of Pakistan and India. It is one of the most widely spoken native languages in the world, with approximately 150 million native speakers. Punjabi is the most widely spoken first language in Pakistan, with 88.9 million native speakers according to the 2023 Pakistani census, and the 11th most widely spoken in India, with 31.1 million native speakers, according to the 2011 census. It is spoken among a significant overseas diaspora, particularly in Canada, the United Kingdom, the United States, Australia, and the Gulf states. In Pakistan, Punjabi is written using the Shahmukhi alphabet, based on the Perso-Arabic script; in India, it is written using the Gurmukhi alphabet, based on the Indic scripts. Punjabi is unusual among the Indo-Aryan languages and the broader Indo-European language family in its usage of lexical tone.

History

The word Punjabi (sometimes spelled Panjabi) has been derived from the word Panj-āb, Persian for 'Five Waters', referring to the five major eastern tributaries of the Indus River. The name of the region was introduced by the Turko-Persian conquerors of South Asia and was a translation of the Sanskrit name, Panchanada, which means 'Land of the Five Rivers'. Panj is cognate with other Indo-European languages' words for five, like Sanskrit pañca (पञ्च), Greek pénte (πέντε) and Lithuanian penki, but also English five; āb is cognate with Sanskrit áp (अप्) and with the Av- of Avon. The historical Punjab region, now divided between India and Pakistan, is defined physiographically by the Indus River and these five tributaries. One of the five, the Beas River, is a tributary of another, the Sutlej.

Punjabi developed from Prakrit languages and later Apabhraṃśa (Sanskrit: अपभ्रंश, 'deviated' or 'non-grammatical speech'). From 600 BC, Sanskrit developed as the standard literary and administrative language, and Prakrit languages evolved into many regional languages in different parts of India. All these languages are called Prakrit languages (Sanskrit: प्राकृत, prākṛta) collectively. Paishachi Prakrit was one of these Prakrit languages, spoken in north and north-western India, and Punjabi developed from it. Later in northern India Paishachi Prakrit gave rise to Paishachi Apabhraṃśa, a descendant of Prakrit. Punjabi emerged as an Apabhramsha, a form of Prakrit, in the 7th century AD and became stable by the 10th century. The earliest writings in Punjabi belong to the Nath Yogi era, from the 9th to the 14th century. The language of these compositions is morphologically closer to Shauraseni Apabhramsa, though vocabulary and rhythm are surcharged with extreme colloquialism and folklore.

Writing in 1317–1318, Amir Khusrau referred to the language spoken by locals around the area of Lahore as Lahauri. The precursor stage of Punjabi between the 10th and 16th centuries is termed 'Old Punjabi', whilst the stage between the 16th and 19th centuries is termed 'Medieval Punjabi'.

The Arabic and Modern Persian influence in the historical Punjab region began with the late first millennium Muslim conquests in the Indian subcontinent. Since then, many Persian words have been incorporated into Punjabi (such as zamīn, śahir etc.) and are used liberally. Through Persian, Punjabi also absorbed many Arabic-derived words like dukān, ġazal and more, as well as Turkic words like qēncī, sōġāt, etc.
After the fall of the Sikh empire, Urdu was made the official language of Punjab under the British (in Pakistani Punjab, it is still the primary official language) and influenced the language as well. In the second millennium, Punjabi was lexically influenced by Portuguese (words like almārī), Greek (words like dām), Japanese (words like rikśā), Chinese (words like cāh, līcī, lukāṭh) and English (words like jajj, apīl, māsṭar), though these influences have been minor in comparison to Persian and Arabic. In fact, the sounds /z/ (ਜ਼ / ز ژ ذ ض ظ), /ɣ/ (ਗ਼ / غ), /q/ (ਕ਼ / ق), /ʃ/ (ਸ਼ / ش), /x/ (ਖ਼ / خ) and /f/ (ਫ਼ / ف) are all borrowed from Persian, but in some instances the latter three arise natively. Later, the letters ਜ਼ / ز, ਸ਼ / ش and ਫ਼ / ف began being used in English borrowings, with ਸ਼ / ش also used in Sanskrit borrowings. Punjabi has also had minor influence from and on neighbouring languages such as Sindhi, Haryanvi, Pashto and Hindustani. Note: in more formal contexts, hypercorrect Sanskritized versions of some words (ਪ੍ਰਧਾਨ pradhān for ਪਰਧਾਨ pardhān and ਪਰਿਵਾਰ parivār for ਪਰਵਾਰ parvār) may be used.

Modern Punjabi emerged in the 19th century from the Medieval Punjabi stage. Modern Punjabi has two main varieties, Western Punjabi and Eastern Punjabi, which have many dialects and forms, altogether spoken by over 150 million people. The Majhi dialect, which is transitional between the two main varieties, has been adopted as standard Punjabi in India and Pakistan for education and mass media. The Majhi dialect originated in the Majha region of the Punjab.

In India, Punjabi is written in the Gurmukhī script in offices, schools, and media. Gurmukhi is the official standard script for Punjabi, though it is often unofficially written in the Latin script due to influence from English, one of India's two primary official languages at the Union level. In Pakistan, Punjabi is generally written using the Shahmukhī script, which in literary standards is identical to the Urdu alphabet; however, various attempts have been made to create certain distinct characters, by modifying the Persian Nastaʿlīq characters, to represent Punjabi phonology not already found in the Urdu alphabet. In Pakistan, Punjabi loans technical words from Persian and Arabic, just like Urdu does.

Geographic distribution

Punjabi is the most widely spoken language in Pakistan, the eleventh most widely spoken in India, and also present in the Punjabi diaspora in various countries.

Punjabi is the most widely spoken language in Pakistan, being the native language of 88.9 million people, or approximately 37% of the country's population. Beginning with the 1981 and 2017 censuses respectively, speakers of Western Punjabi's Saraiki and Hindko varieties were no longer included in the total numbers for Punjabi, which explains the apparent decrease. Pothwari speakers, however, are included in the total numbers for Punjabi.

Punjabi is the official language of the Indian state of Punjab, and has the status of an additional official language in Haryana and Delhi. Some of its major urban centres in northern India are Amritsar, Ludhiana, Chandigarh, Jalandhar, Ambala, Patiala, Bathinda, Hoshiarpur, Firozpur and Delhi. In the 2011 census of India, 31.14 million reported their language as Punjabi. The census publications group this with speakers of related "mother tongues" like Bagri and Bhateali to arrive at the figure of 33.12 million.
Punjabi is also spoken as a minority language in several other countries where Punjabi people have emigrated in large numbers, such as the United States, Australia, the United Kingdom, and Canada. There were 670,000 native Punjabi speakers in Canada in 2021, 300,000 in the United Kingdom in 2011, 280,000 in the United States and smaller numbers in other countries.

Major dialects

Standard Punjabi (sometimes referred to as Majhi) is the standard form of Punjabi used commonly in education and news broadcasting, and is based on the Majhi dialect. Standard Punjabi is also often the variety used in official online services that employ Punjabi, such as Google Translate. It is widely used in the TV and entertainment industry of Pakistan, which is mainly produced in Lahore. The Standard Punjabi used in India and Pakistan has slight differences: in India, it excludes many of the dialect-specific features of Majhi, while in Pakistan the standard is closer to the Majhi spoken in the urban parts of Lahore.

"Eastern Punjabi" refers to the varieties of Punjabi spoken in Pakistani Punjab (specifically Northern Punjabi), most of Indian Punjab, the far north of Rajasthan and on the northwestern border of Haryana. It includes the dialects of Majhi, Malwai, Doabi, Puadhi and the extinct Lubanki. Sometimes, Dogri and Kangri are grouped into this category.

"Western Punjabi" or "Lahnda" (لہندا, lit. 'western') is the name given to the diverse group of Punjabi varieties spoken in the majority of Pakistani Punjab, the Hazara region, most of Azad Kashmir and small parts of Indian Punjab such as Fazilka. These include groups of dialects like Saraiki, Pahari-Pothwari, Hindko and the extinct Inku; common dialects like Jhangvi, Shahpuri, Dhanni and Thali, which are usually grouped under the term Jatki Punjabi; and the mixed variety of Punjabi and Sindhi called Khetrani.

Depending on context, the terms Eastern and Western Punjabi can simply refer to all the Punjabi varieties spoken in India and Pakistan respectively, whether or not they are linguistically Eastern/Western.

Phonology

While a vowel length distinction between short and long vowels exists, reflected in modern Gurmukhi orthographical conventions, it is secondary to the vowel quality contrast between centralised vowels /ɪ ə ʊ/ and peripheral vowels /iː eː ɛː aː ɔː oː uː/ in terms of phonetic significance. The peripheral vowels have nasal analogues. There is a tendency for speakers to insert /ɪ̯/ between adjacent "a"-vowels as a separator. This usually changes to /ʊ̯/ if either vowel is nasalised. Note: for the tonal stops, refer to the section on tone below.

The three retroflex consonants /ɳ, ɽ, ɭ/ do not occur initially, and the nasals [ŋ, ɲ] most commonly occur as allophones of /n/ in clusters with velars and palatals (there are few exceptions). The well-established phoneme /ʃ/ may be realised allophonically as the voiceless retroflex fricative [ʂ] in learned clusters with retroflexes. Due to its foreign origin, it is often also realised as [s], e.g. in shalwār /salᵊ.ʋaːɾᵊ/. The phonemic status of the consonants /f, z, x, ɣ, q/ varies with familiarity with Hindustani norms, more so with the Gurmukhi script, with the pairs /f, pʰ/, /z, d͡ʒ/, /x, kʰ/, /ɣ, g/, and /q, k/ systematically distinguished in educated speech, /q/ being the most rarely pronounced. The retroflex lateral is most commonly analysed as an approximant as opposed to a flap.
Some speakers soften the voiceless aspirates /t͡ʃʰ, pʰ, kʰ/ into the fricatives /ɕ, f, x/ respectively.[citation needed] In rare cases, the /ɲ/ and /ŋ/ phonemes in Shahmukhi may be represented with letters from Sindhi.[citation needed] The /ɲ/ phoneme, which is more common than /ŋ/, is written as نی or نج depending on its phonetic preservation, e.g. نیاݨا /ɲaːɳaː/ (preserved ñ) as opposed to کنج /kiɲd͡ʒ/ (assimilated into nj). /ŋ/ is always written as نگ. As in Hindustani, the diphthongs /əɪ/ and /əʊ/ have mostly disappeared, but are still retained in some dialects. Phonotactically, the long vowels /aː, iː, uː/ are treated as doubles of their short vowel counterparts /ə, ɪ, ʊ/ rather than as separate phonemes. Hence, diphthongs like ai and au get monophthongised into /eː/ and /oː/, and āi and āu into /ɛː/ and /ɔː/ respectively.[citation needed] The phoneme /j/ is very fluid in Punjabi: it is only truly pronounced word-initially (and even then it often becomes /d͡ʒ/), being otherwise /ɪ/ or /i/.

Tone

Unusually for an Indo-Aryan language, Punjabi distinguishes lexical tones. Three tones are distinguished in Punjabi (some sources have described these as tone contours, given in parentheses): low (high-falling), high (low-rising), and level (neutral or middle). The transcriptions and tone annotations in the examples below are based on those provided in Punjabi University, Patiala's Punjabi-English Dictionary. Level tone is found in about 75% of words and is described by some as an absence of tone. There are also some words which are said to have rising tone in the first syllable and falling in the second (some writers describe this as a fourth tone). However, a recent acoustic study of six Punjabi speakers in the United States found no evidence of a separate falling tone following a medial consonant. It is considered that these tones arose when voiced aspirated consonants (gh, jh, ḍh, dh, bh) lost their aspiration.

In Punjabi, tone is induced by the loss of [h] in tonal consonants. Tonal consonants are any voiced aspirates /ʱ/ and the voiced glottal fricative /ɦ/. These include the five voiced aspirated plosives bh, dh, ḍh, jh and gh (which are represented by their own letters in Gurmukhi), the h consonant itself, and any voiced consonants appended with [h] (Gurmukhi: ੍ਹ "perī̃ hāhā", Shahmukhi: ھ "dō-caśmī hē"); usually ṛh, mh, nh, rh and lh. The five tonal plosives also become voiceless word-initially, e.g. ghar > kàr "house", ḍhōl > ṭṑl "drum" etc. Tonogenesis in Punjabi forfeits the sound of [h] for tone: the more [h] is realised, the less "tonal" a word's pronunciation, and vice versa. Tone is often reduced, or rarely deleted, when words are said with emphasis or on their own as a form of more exact identification.[citation needed]

Sequences with the consonant h have some additional quirks. The consonant h on its own is now silent or very weakly pronounced except word-initially. However, certain dialects which exert stronger tone, particularly more northern Punjabi varieties and Dogri, pronounce h as very faint (thus tonal) in all cases, e.g. hatth > àtth. The Jhangvi and Shahpuri dialects of Punjabi (as they transition into Saraiki) show comparatively less realisation of tone than other Punjabi varieties,[citation needed] and do not induce the devoicing of the main five tonal consonants (bh, dh, ḍh, jh, gh).
The Gurmukhi script, which was developed in the 16th century, has separate letters for voiced aspirated sounds, so it is thought that the change in pronunciation of the consonants and the development of tones may have taken place since that time. Some other languages in Pakistan have also been found to have tonal distinctions, including Burushaski, Gujari, Hindko, Kalami, Shina, and Torwali, though these (besides Hindko) seem to be independent of Punjabi.

Gemination of a consonant (doubling the letter) is indicated with adhak in Gurmukhi and tashdīd in Shahmukhi. Its inscription with a unique diacritic is a distinct feature of Gurmukhi compared to Brahmic scripts. All consonants except six (ṇ, ṛ, h, r, v, y) are regularly geminated; the latter four are only geminated in loanwords from other languages. There is a tendency to irregularly geminate consonants which follow long vowels, except in the final syllable of a word, e.g. menū̃ > mennū̃. It also causes the long vowels to shorten but remain peripheral, distinguishing them from the central vowels /ə, ɪ, ʊ/. This gemination is less prominent than the literarily regular gemination represented by the diacritics mentioned above. Before a non-final prenasalised consonant, long vowels undergo the same change but no gemination occurs. The true gemination of a consonant after a long vowel is unheard of, but is written in some English loanwords to indicate short /ɛ/ and /ɔ/, e.g. ਡੈੱਡ ڈَیڈّ /ɖɛɖː/ "dead".

Grammar

Punjabi has a canonical word order of SOV (subject–object–verb). Function words are largely postpositions marking grammatical case on a preceding nominal. Punjabi distinguishes two genders, two numbers, and six cases: direct, oblique, vocative, ablative, locative, and instrumental. The ablative occurs only in the singular, in free variation with the oblique case plus ablative postposition, and the locative and instrumental are usually confined to set adverbial expressions. Adjectives, when declinable, are marked for the gender, number, and case of the nouns they qualify. There is also a T-V distinction. Upon the inflectional case is built a system of particles known as postpositions, which parallel English's prepositions. It is their use with a noun or verb that necessitates the noun or verb taking the oblique case, and it is with them that the locus of grammatical function or "case-marking" then lies. The Punjabi verbal system is largely structured around a combination of aspect and tense/mood. Like the nominal system, the Punjabi verb takes a single inflectional suffix, and is often followed by successive layers of elements like auxiliary verbs and postpositions to the right of the lexical base.

Vocabulary

Being an Indo-Aryan language, the core vocabulary of Punjabi consists of tadbhav words inherited from Sanskrit. It also contains many loanwords from Persian and Arabic.

Writing systems

The Punjabi language is written in multiple scripts (a phenomenon known as synchronic digraphia). Each of the major scripts currently in use is typically associated with a particular religious group, although the association is not absolute or exclusive. In India, Punjabi Sikhs use Gurmukhi, a script of the Brahmic family, which has official status in the state of Punjab. In Pakistan, Punjabi Muslims use Shahmukhi, a variant of the Perso-Arabic script and closely related to the Urdu alphabet. Sometimes Punjabi is recorded in the Devanagari script in India, albeit rarely.
The Punjabi Hindus in India had a preference for Devanagari, another Brahmic script also used for Hindi, and in the first decades after independence raised objections to the uniform adoption of Gurmukhi in the state of Punjab, but most have now switched to Gurmukhi, so the use of Devanagari is rare. Often in literature, Pakistani Punjabi (written in Shahmukhi) is referred to as Western Punjabi (or West Punjabi) and Indian Punjabi (written in Gurmukhi) is referred to as Eastern Punjabi (or East Punjabi), although the underlying language is the same, with a very slight shift in vocabulary towards Islamic and Sikh words respectively. The written standard for Shahmukhi also differs slightly from that of Gurmukhi, as it is used for western dialects, whereas Gurmukhi is used to write eastern dialects. Historically, various local Brahmic scripts including Laṇḍā and its descendants were also in use. Punjabi Braille is used by the visually impaired. There is an altered version of IAST often used for Punjabi in which the diphthongs ai and au are written as e and o, and the long vowels e and o are written as ē and ō.

Sample text

This sample text was adapted from the Punjabi Wikipedia article on Lahore.

Gurmukhi

ਲਹੌਰ ਪਾਕਿਸਤਾਨੀ ਪੰਜਾਬ ਦੀ ਰਾਜਧਾਨੀ ਹੈ। ਲੋਕ ਗਿਣਤੀ ਦੇ ਨਾਲ਼ ਕਰਾਚੀ ਤੋਂ ਬਾਅਦ ਲਹੌਰ ਦੂਜਾ ਸਭ ਤੋਂ ਵੱਡਾ ਸ਼ਹਿਰ ਹੈ। ਲਹੌਰ ਪਾਕਿਸਤਾਨ ਦਾ ਸਿਆਸੀ, ਕਾਰੋਬਾਰੀ ਅਤੇ ਪੜ੍ਹਾਈ ਦਾ ਗੜ੍ਹ ਹੈ ਅਤੇ ਇਸੇ ਲਈ ਇਹਨੂੰ ਪਾਕਿਸਤਾਨ ਦਾ ਦਿਲ ਵੀ ਕਿਹਾ ਜਾਂਦਾ ਹੈ। ਲਹੌਰ ਰਾਵੀ ਦਰਿਆ ਦੇ ਕੰਢੇ ’ਤੇ ਵੱਸਦਾ ਹੈ। ਇਸਦੀ ਲੋਕ ਗਿਣਤੀ ਇੱਕ ਕਰੋੜ ਦੇ ਨੇੜੇ ਹੈ।

Shahmukhi

لہور پاکستانی پنجاب دی راجدھانی ہے۔ لوک گݨتی دے نالؕ کراچی توں بعد لہور دوجا سبھ توں وڈا شہر ہے۔ لہور پاکستان دا سیاسی، رہتلی کاروباری اتے پڑھائی دا گڑھ ہے اتے، ایسے لئی ایہنوں پاکستان دا دل وی کہا جاندا ہے۔ لہور راوی دریا دے کنڈھے تے وسدا ہے۔ ایسدی لوک گݨتی اک کروڑ دے نیڑے ہے۔

Transliteration

Lahaur Pākistānī Panjāb dī rājtā̀ni ài. Lok giṇtī de nāḷ Karācī tõ bāad Lahaur dūjā sáb tõ vaḍḍā šáir ài. Lahaur Pākistān dā siāsī, kārobāri ate paṛā̀ī dā gáṛ ài te ise laī ínū̃ Pākistān dā dil vī kihā jāndā ài. Lahaur Rāvī dariā de káṇḍè te vassdā ài. Isdī lok giṇtī ikk karoṛ de neṛe ài.

IPA

/lɐɔ̂ːɾᵊ paˑkˑɪ̽sᵊˈtaˑnˑi pɐɲˈd͡ʒaːbᵊ di ɾaːd͡ʒᵊ ˈd̥âˑnˑi ɛ̂ ‖ loːkᵊ ˈɡɪɳᵊti de naːɭᵊ kɐ̆ɾaˑt͡ʃˑi tõ bǎːdᵊ lɐɔ̂ːɾᵊ duˑd͡ʒˑa sɐ̌bᵊ tõ ʋɐɖːa ʃɛ̌ːɾᵊ ɛ̂ ‖ lɐɔ̂ːɾᵊ paˑkˑɪ̽sᵊˈtaːnᵊ da sɐ̆ˈjaˑsˑi | kaːɾoˈbaːɾi ˈɐte pɐ̆ˈɽâːi da ɡɐ̌ɽᵊ ɛ̂ ˈɐte ˈɪse lɐi ˈěːnˑũ paˑkˑɪ̽sᵊˈtaːnᵊ da dɪlᵊ ʋi kɛ̌ːja d͡ʒaːnda ɛ̂ ‖ lɐɔ̂ːɾᵊ ˈɾaːʋi ˈdɐɾɐ̆ja de kɐ̌ɳɖe te ʋɐsːᵊda ɛ̂ ‖ ɪsᵊ di loːkᵊ ˈɡɪɳᵊti ɪkːᵊ kɐ̆ɾoːɽᵊ de neːɽe ɛ̂ ‖/

Translation

Lahore is the capital city of Pakistani Punjab. After Karachi, Lahore is the second largest city. Lahore is Pakistan's political, cultural, and educational hub, and so it is also said to be the heart of Pakistan. Lahore lies on the bank of the Ravi River. Its population is close to ten million people.

Literature development

The Janamsakhis, stories on the life and legend of Guru Nanak (1469–1539), are early examples of Punjabi prose literature. The Victorian novel, Elizabethan drama, free verse and Modernism entered Punjabi literature through the introduction of British education during the Raj. Nanak Singh (1897–1971), Vir Singh, Ishwar Nanda, Amrita Pritam (1919–2005), Puran Singh (1881–1931), Dhani Ram Chatrik (1876–1957), Diwan Singh (1897–1944), Ustad Daman (1911–1984), Mohan Singh (1905–78) and Shareef Kunjahi are some legendary Punjabi writers of this period.
After the independence of Pakistan and India, Najm Hosain Syed, Fakhar Zaman, Afzal Ahsan Randhawa, Shafqat Tanvir Mirza, Ahmad Salim, Munir Niazi, Ali Arshad Mir and Pir Hadi Abdul Mannan enriched Punjabi literature in Pakistan, whereas Jaswant Singh Kanwal (1919–2020), Amrita Pritam (1919–2005), Jaswant Singh Rahi (1930–1996), Shiv Kumar Batalvi (1936–1973), Surjit Patar (1944–) and Pash (1950–1988) are some of the more prominent poets and writers from India.

Status

Despite Punjabi's rich literary history, it was not until 1947 that it would be recognised as an official language. Previous governments in the area of the Punjab had favoured Persian, Hindustani, or even earlier standardised versions of local registers as the language of the court or government. After the annexation of the Sikh Empire by the British East India Company following the Second Anglo-Sikh War in 1849, the British policy of establishing a uniform language for administration was expanded into the Punjab. The British Empire employed Urdu in its administration of North-Central and Northwestern India, while in the North-East of India, the Bengali language was used as the language of administration. Despite its lack of official sanction, the Punjabi language continued to flourish as an instrument of cultural production, with rich literary traditions continuing until modern times. The Sikh religion, with its Gurmukhi script, played a special role in standardising and providing education in the language via gurdwaras, while writers of all religions continued to produce poetry, prose, and literature in the language.

In India, Punjabi is one of the 22 scheduled languages of India. It is the first official language of the Indian state of Punjab. Punjabi also has second-language official status in Delhi along with Urdu, and in Haryana.

In Pakistan, no regional ethnic language has been granted official status at the national level, and as such Punjabi is not an official language at the national level, even though it is the most spoken language in Pakistan. It is widely spoken in Punjab, Pakistan, the second largest and the most populous province of Pakistan, as well as in Islamabad Capital Territory. The only two official languages in Pakistan are Urdu and English.

In 1908, Mian Shafi, a Muslim elite, opposed Prof. Mukherjee's comment that Punjabi should become the provincial language of the British province of Punjab, countering that the language of the Muslims was Urdu. Jinnah opposed Motilal Nehru's 1928 proposal that every province should have its own language alongside Hindustani as the national language, stating in a communication to Jawaharlal Nehru that Urdu was the language of Muslims. Liaqat Ali Khan opposed a 1937 proposal by Gandhi to deliver free education to children in their mother tongue. When Pakistan was created in 1947, despite Punjabi being the majority language in West Pakistan, and Bengali the majority language in East Pakistan and in Pakistan as a whole, English and Urdu were chosen as the official languages. The selection of Urdu was due to its association with South Asian Muslim nationalism and because the leaders of the new nation wanted a unifying national language instead of promoting one ethnic group's language over another. Due to this, the Punjabi elites started identifying with Urdu more than Punjabi, because they saw it as a unifying force from an ethnoreligious perspective. Broadcasting in the Punjabi language by the Pakistan Broadcasting Corporation decreased on TV and radio after 1947.
Article 251 of the Constitution of Pakistan declares that these two languages would be the only official languages at the national level, while provincial governments would be allowed to make provisions for the use of other languages. However, in the 1950s the constitution was amended to include the Bengali language.

Punjabi is not a language of instruction for primary or secondary school students in Punjab Province (unlike Sindhi and Pashto in other provinces). Pupils in secondary schools can choose the language as an elective, while Punjabi instruction or study remains rare in higher education. One notable example is the teaching of Punjabi language and literature by the University of the Punjab in Lahore, which began in 1970 with the establishment of its Punjabi Department.

In the cultural sphere, there are many books, plays, and songs being written or produced in the Punjabi language in Pakistan. Until the 1970s, a large number of Punjabi-language films were produced by the Lollywood film industry; however, since then Urdu has become a much more dominant language in film production. Additionally, television channels in Punjab Province (centred on the Lahore area) are broadcast in Urdu. The preeminence of Urdu in both broadcasting and the Lollywood film industry is seen by critics as being detrimental to the health of the language.

Zia-ul Haq banned three works promoting the Punjabi language. Until the early 1990s, members of the Punjab Assembly were forbidden to address the house in Punjabi. This ban was lifted by Hanif Ramay, yet was reinstated shortly after.

The use of Urdu and English as the near-exclusive languages of broadcasting, the public sector, and formal education has led some to fear that Punjabi in Pakistan is being relegated to a low-status language and that it is being denied an environment where it can flourish. Several prominent educational leaders, researchers, and social commentators have echoed the opinion that the intentional promotion of Urdu and the continued denial of any official sanction or recognition of the Punjabi language amount to a process of "Urdu-isation" that is detrimental to the health of the Punjabi language.

In August 2015, the Pakistan Academy of Letters, the International Writer's Council (IWC) and the World Punjabi Congress (WPC) organised the Khawaja Farid Conference and demanded that a Punjabi-language university be established in Lahore and that the Punjabi language be declared the medium of instruction at the primary level. In September 2015, a case was filed in the Supreme Court of Pakistan against the Government of Punjab, Pakistan, as it had not taken any step to implement the Punjabi language in the province. Additionally, several thousand Punjabis gather in Lahore every year on International Mother Language Day. Think tanks, political organisations, cultural projects, and individuals also demand that authorities at the national and provincial level promote the use of the language in the public and official spheres.

Despite being the most widely spoken language in Pakistan, the language is in decline. Its decline has been attributed to various causes, such as lack of official recognition, lack of prestige, the promotion of Urdu and English, a desire to dispel ethnic minorities' fears of Punjabi domination, and other factors. The language has been relegated to informal communication, and is also used as a means for crude humour and abuse, and to connect with the uneducated masses.
Parents who speak Punjabi often opt to communicate with their children only in Urdu or English, Punjabi being seen as paindu (shorthand for "backward" or "uneducated"). In June 2024, the Punjab Assembly passed a resolution to allow lawmakers to communicate in Punjabi. In October 2024, the province's assembly passed a resolution to make Punjabi a compulsory subject across more sectors of the education system. In March 2024, Maryam Nawaz announced the introduction of Punjabi as a subject in schools, and in November 2025 she again pushed for the inclusion of Punjabi language education in Pakistan's educational institutions. There is currently a Punjabi language revitalisation movement in Pakistan.

In India, at the federal level, Punjabi has official status via the Eighth Schedule to the Indian Constitution, earned after the Punjabi Suba movement of the 1950s. At the state level, Punjabi is the sole official language of the state of Punjab, while it has secondary official status in the states of Haryana and Delhi. In 2012, it was also made an additional official language of West Bengal in areas where the population exceeds 10% of a particular block, sub-division or district.

Both union and state laws specify the use of Punjabi in the field of education. The state of Punjab uses the Three Language Formula, and Punjabi is required to be either the medium of instruction or one of the three languages learnt in all schools in Punjab. Punjabi is also a compulsory language in Haryana, and other states with a significant Punjabi-speaking minority are required to offer Punjabi-medium education.

There are vibrant Punjabi-language movie and news industries in India; however, Punjabi serials have had a much smaller presence in television in the decades up to 2015 due to market forces. Although Punjabi has far greater official recognition in India, where it is admitted in all necessary social functions, while in Pakistan it is used only in a few radio and TV programs, the attitudes of the English-educated elite towards the language are as ambivalent in India as they are in neighbouring Pakistan. There are also claims of state apathy towards the language in non-Punjabi-majority areas like Haryana and Delhi.

Advocacy

The Punjabi Sahit Academy, Ludhiana, established in 1954, is supported by the Punjab state government and works exclusively for the promotion of the Punjabi language, as does the Punjabi Academy in Delhi. The Jammu and Kashmir Academy of Art, Culture and Literature in the Jammu and Kashmir union territory, India, works for Punjabi and other regional languages like Urdu, Dogri and Gojri. Institutions in neighbouring states, as well as in Lahore, Pakistan, also advocate for the language.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Alnitak] | [TOKENS: 1182]
Alnitak

Alnitak is a triple star system in the constellation of Orion. It has the designations ζ Orionis, which is Latinised to Zeta Orionis and abbreviated Zeta Ori or ζ Ori, and 50 Orionis, abbreviated 50 Ori. The system is located at a distance of several hundred parsecs from the Sun and is one of the three main stars of Orion's Belt along with Alnilam and Mintaka. The primary star, Alnitak Aa, is a hot blue supergiant with an absolute magnitude of −6.0 and is the brightest class O star in the night sky with a visual magnitude of +2.0. It has two companions, Ab and B, the latter known for the longest time and the former discovered recently, producing a combined magnitude for the trio of +1.77. The stars are members of the Orion OB1 association and the Collinder 70 association.

Observational history

Alnitak has been known since antiquity and, as a component of Orion's Belt, has been of widespread cultural significance. It was reported to be a double star by amateur German astronomer George K. Kunowsky in 1819. Much more recently, in 1998, the bright primary was found by a team from the Lowell Observatory to have a close companion; this had been suspected from observations made with the Narrabri Stellar Intensity Interferometer in the 1970s. The stellar parallax derived from observations by the Hipparcos satellite implies a distance of around 225 parsecs, but this does not take into account distortions caused by the multiple nature of the system. Larger distances, typically closer to 400 pc, have been derived by many authors based on the orbit of the pair or the assumed properties of the components. This distance is comparable to that of the Orion molecular cloud complex, including the nearby Flame and Horsehead Nebulae.

Stellar system

Alnitak is a triple star system at the eastern end of Orion's Belt, the second-magnitude primary having a 4th-magnitude companion nearly 3 arcseconds distant, in an orbit taking over 1,500 years. The part called Alnitak A is itself a close binary, comprising the stars Alnitak Aa and Alnitak Ab. Alnitak Aa is a blue supergiant of spectral type O9.5Iab with an absolute magnitude of −6.0 and an apparent magnitude of 2.0. It is estimated as being up to 33 times as massive as the Sun, with a diameter 20 times greater. It is some 21,000 times brighter than the Sun, with a surface brightness (luminance) some 500 times greater. It is the brightest star of class O in the night sky. In about a million years, it will expand into a red supergiant wider than the orbit of Jupiter before ending its life in a supernova explosion, likely leaving behind a black hole. Alnitak Ab is a blue subgiant of spectral type B1IV with an absolute magnitude of −3.9 and an apparent magnitude of 4.3, discovered in 1998. A fourth star, 9th-magnitude Alnitak C, has not been confirmed to be part of the Aa–Ab–B group, and may simply lie along the line of sight. The Alnitak system is bathed in the nebulosity of IC 434.

Etymology and cultural significance

ζ Orionis (Latinised as Zeta Orionis) is the star system's Bayer designation and 50 Orionis its Flamsteed designation. The traditional name Alnitak, alternately spelled Al Nitak or Alnitah, is taken from the Arabic النطاق an-niṭāq, "the girdle". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Alnitak for the star ζ Orionis Aa.
It is now so entered in the IAU Catalog of Star Names. The three belt stars were collectively known by many names in many cultures. Arabic terms include النجاد Al Nijād 'the Belt', النسك Al Nasak 'the Line', العلقات Al Alkāt 'the Golden Grains or Nuts' and, in modern Arabic, ميزان الحق Al Mīzān al Ḥaqq 'the Scale of Justice'. In Chinese mythology they were known as the Weighing Beam. The belt was also the Three Stars mansion (simplified Chinese: 参宿; traditional Chinese: 參宿; pinyin: Shēn Xiù), one of the twenty-eight mansions of the Chinese constellations and one of the western mansions of the White Tiger. In Chinese, 參宿 (Shēn Xiù), meaning Three Stars (asterism), refers to an asterism consisting of Alnitak, Alnilam and Mintaka (Orion's Belt), with Betelgeuse, Bellatrix, Saiph and Rigel later added. Consequently, the Chinese name for Alnitak is 參宿一 (Shēn Xiù yī, English: the First Star of Three Stars).

Namesakes

The USS Alnitah was a U.S. Navy Crater-class cargo ship named after the star.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meissa] | [TOKENS: 950]
Meissa

Meissa /ˈmaɪsə/, designated Lambda Orionis (λ Orionis, abbreviated Lambda Ori, λ Ori), is a star in the constellation of Orion. It is a multiple star approximately 1,300 ly away with a combined apparent magnitude of 3.33. The main components are an O9 giant star and a B-class main sequence star, separated by about 4″. Despite Meissa being more luminous and only slightly further away than Rigel, it appears 3 magnitudes dimmer at visual wavelengths, with much of its radiation emitted in the ultraviolet due to its high temperature.

Nomenclature

Lambda Orionis is the star's Bayer designation. The traditional name Meissa derives from the Arabic Al-Maisan, which means 'The Shining One'. Al-Maisan was originally used for Gamma Geminorum, but was mistakenly applied to Lambda Orionis and the name stuck. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Meissa for this star. It is now so entered in the IAU Catalog of Star Names. The original Arabic name for this star, Al Hakah (the source for another of its names, Heka), refers to the Arabic lunar mansion that includes this star and the two stars of Phi Orionis (Al Haḳʽah, 'a White Spot'). In Chinese, 觜宿 (Zī Sù), meaning Turtle Beak, refers to an asterism consisting of Meissa and both stars of Phi Orionis. Consequently, the Chinese name for Meissa itself is 觜宿一 (Zī Sù yī, English: the First Star of Turtle Beak).

Properties

Meissa is a giant star with a stellar classification of O9 III and an apparent visible magnitude of 3.54. It is an enormous star with about 34 times the mass of the Sun and 10 times the Sun's radius. The outer atmosphere has an effective temperature of around 35,000 K, giving it the characteristic blue glow of a hot O-type star. Meissa is a soft X-ray source with a luminosity of 10³² erg s⁻¹ and peak emission in the energy range of 0.2–0.3 keV, which suggests the X-rays are probably being generated by the stellar wind. The stellar wind of Meissa is well characterized by a mass-loss rate of 2.5×10⁻⁸ solar masses per year and a terminal velocity of 2,000 km/s. Meissa is actually a double star with a companion at an angular separation of 4.41 arcseconds along a position angle of 43.12° (as of 1937). This fainter component is of magnitude 5.61 and has a stellar classification of B0.5 V, making it a B-type main sequence star. There is an outlying component, Meissa C, which is an F-type main sequence star with a classification of F8 V. This star in turn may have a very low mass companion that is probably a brown dwarf. In 2018, a companion was detected around Meissa A, with a projected separation of 10.13 mas; however, it was not detected again.

Ring

Meissa is surrounded by a ring of nebulosity about 12 degrees across. It is thought to be the remains of a supernova explosion, now ionized by the ultraviolet radiation from Meissa itself and some of the surrounding hot stars.

Cluster

This star is the dominant member of a 5-million-year-old star-forming region known as the λ Orionis cluster, or Collinder 69. The intense ultraviolet energy being radiated by this star is creating the Sh2-264 H II region in the neighboring volume of space, which in turn is surrounded by an expanding ring of cool gas that has an age of about 2–6 million years.
The expansion of this gaseous ring may be explained by a former binary companion of Meissa that became a Type II supernova. Such an event would also explain the star's peculiar velocity with respect to the center of the expanding ring, as the explosion and resulting mass loss could have kicked Meissa out of the system. A potential candidate for the supernova remnant is the neutron star Geminga; however, this is unlikely given the distance between Geminga and the cluster.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sony_Computer_Entertainment] | [TOKENS: 7136]
Sony Interactive Entertainment

Sony Interactive Entertainment LLC (SIE) is an American video game and digital entertainment company of the Japanese conglomerate Sony Group Corporation. It primarily operates the PlayStation brand of video game consoles and products. It is also the world's largest company in the video game industry based on its equity investments and revenue.

In 1993, Sony and Sony Music Entertainment Japan jointly established Sony Computer Entertainment Inc. (SCE) in Tokyo, which released the PlayStation video game console in Japan the following year and subsequently in the United States and Europe the year after. In 2010, Sony underwent a corporate split and established Sony Network Entertainment International (SNEI) in California, which provided gaming-related services through the PlayStation Network as well as other media through the Sony Entertainment Network, including the sale of game titles and content on the PlayStation Store, as well as offering PlayStation Plus and Media Go. In 2016, SCE and SNEI were merged to establish Sony Interactive Entertainment, and it was announced the new entity would be headquartered in the United States.

History

Sony Computer Entertainment, Inc. (SCEI) was jointly established by Sony and its subsidiary Sony Music Entertainment Japan in 1993 to handle the company's ventures into the video game industry. The original PlayStation console was released on December 3, 1994, in Japan. The company's North American operations, Sony Computer Entertainment of America (SCEA), were originally established in May 1995 as a division of Sony Electronic Publishing. Located in Foster City, California, the North American office was originally headed by Steve Race. Sony Computer Entertainment Europe (SCEE) was founded in London, England, in 1994; according to Next Generation magazine, its original development staff had little to no experience in the video game industry, most of them being recent college graduates. In the months prior to the release of the PlayStation in Western markets, the operations were restructured: all video game marketing from Sony Imagesoft was folded into SCEA in July 1995, with most affected employees transferred from Santa Monica to Foster City. On August 7, 1995, Race unexpectedly resigned and was named CEO of Spectrum HoloByte three days later. He was replaced by Sony Electronics veteran Martin Homlish. This proved to be the beginning of a run of exceptional managerial turnover. The PlayStation console was released in the United States on September 9, 1995. As part of a worldwide restructuring at the beginning of 1997, the American and European divisions of SCE were both re-established as wholly owned subsidiaries of SCEI. The second PlayStation console, the PlayStation 2, was released in Japan on March 4, 2000, and in the U.S. on October 26, 2000. On July 1, 2002, the chairman of SCEI, Shigeo Maruyama, was replaced by Tamotsu Iba as chairman. Jack Tretton and Phil Harrison were also promoted to senior vice presidents of SCE. The PlayStation Portable (PSP) was SCEI's first foray into the handheld console market. Its development was first announced during SCE's E3 conference in 2003, and it was officially unveiled during their E3 conference on May 11, 2004.
The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in Europe and Australia on September 1, 2005.[citation needed] On September 1, 2005, SCEI formed SCE Worldwide Studios, a single internal entity to oversee all wholly owned development studios within SCEI. It became responsible for the creative and strategic direction of development and production of all computer entertainment software by all SCEI-owned studios, all of it produced exclusively for the PlayStation family of consoles. Shuhei Yoshida was named president of Worldwide Studios on May 16, 2008, replacing Kazuo Hirai, who had been serving in the interim since Harrison left the company in early 2008. On December 8, 2005, video game developer Guerrilla Games, developer of the Killzone series, was acquired by Sony Computer Entertainment as part of Worldwide Studios. On January 24, 2006, video game developer Zipper Interactive, developer of the Socom series, was acquired by Sony Computer Entertainment as part of Worldwide Studios. In March 2006, Sony announced the online network for its forthcoming PlayStation 3 (PS3) system at the 2006 PlayStation Business Briefing meeting in Tokyo, Japan, tentatively named "PlayStation Network Platform" and eventually called just PlayStation Network (PSN). Sony also stated that the service would always be connected, free, and include multiplayer support. The launch date for the PS3 was announced by Hirai at the pre-E3 conference held at the Sony Pictures Studios in Culver City, California, on May 8, 2006. The PS3 was released in Japan on November 11, 2006, and in the U.S. on November 17, 2006. The PSN was also launched in November 2006. On November 30, 2006, the president of SCEI, Ken Kutaragi, was appointed chairman of SCEI, while Hirai, then president of SCEA, was promoted to president of SCEI. On April 26, 2007, Ken Kutaragi resigned from his position as chairman of SCEI and group CEO, passing on his duties to the recently appointed president of SCE, Hirai. On September 20, 2007, video game developers Evolution Studios and Bigbig Studios, creators of the MotorStorm series, were acquired by Sony Computer Entertainment as part of Worldwide Studios. On April 15, 2009, David Reeves, president and CEO of SCE Europe, announced his forthcoming resignation from his post. He had joined the company in 1995, was appointed chairman of SCEE in 2003, and became president in 2005. His role of president and CEO of SCEE would be taken over by Andrew House, who had joined Sony Corporation in 1990. The PSP Go was released on October 1, 2009, in North America and Europe, and on November 1, 2009, in Japan. On April 1, 2010, SCEI was restructured to bring together Sony's mobile electronics and personal computers divisions. The main Japanese division of SCEI was temporarily renamed "SNE Platform Inc." (SNEP) on April 1, 2010, and was split into two divisions with different focuses: "Sony Computer Entertainment, Inc.", consisting of 1,300 employees focused on the console business, and the network service business, consisting of 60 to 70 employees. The network service business of SCEI was absorbed into Sony Corp's Network Products & Service Group (NPSG), which had already been headed by Hirai since April 2009. The original SCEI was then dissolved after the restructuring. The North American and European branches of SCEI were affected by the restructuring, and remained as SCEA and SCEE. Hirai, by that time SCEI CEO and Sony Corporation EVP, led both departments.
Also on the same date, Sony Network Entertainment International (SNEI) was founded, with Tim Schaaff becoming its president. SNEI would be in charge of operating the PlayStation Network and would also offer the Media Go software. On March 2, 2010, video game developer Media Molecule, developer of the PlayStation 3 game LittleBigPlanet, was acquired by SCEI as part of Worldwide Studios. On August 23, 2010, the headquarters of the company moved from Minami-Aoyama to Sony City (Sony Corporation's headquarters) in Kōnan, Minato, Tokyo. On April 20, 2011, SNEI and SCEI were both victims of an attack on the PlayStation Network system, which also affected Sony's online division, Sony Online Entertainment. On August 1, 2011, video game developer Sucker Punch Productions, developer of the Sly Cooper and Infamous series, was also acquired. In August 2011, the Sony Entertainment Network was announced, offering music and video services as well as the PlayStation Network. From February 8, 2012, PSN accounts were converted into SEN accounts to be used with all services on offer by the Sony Entertainment Network. In January 2012, Bigbig Studios was closed, and Cambridge Studio was renamed Guerrilla Cambridge, becoming a sister studio of Guerrilla Games. In March 2012, Zipper Interactive, developer of the SOCOM series, MAG and Unit 13, was closed. On June 25, 2012, Hirai retired as chairman of Sony Computer Entertainment; however, he remains on the board of directors. On July 2, 2012, Sony Computer Entertainment acquired Gaikai, a cloud-based gaming service. In August 2012, Studio Liverpool, developer of the Wipeout and Formula One series, was closed. In August 2012, Sony Computer Entertainment announced PlayStation Mobile for the Vita and PlayStation-certified devices, with developers such as THQ, Team17 and Action Button Entertainment signed up. On December 31, 2012, Tim Schaaff retired as president of Sony Network Entertainment International, but remained on the company's board. A press release was published on August 20, 2013, announcing the release date of the PlayStation 4 (PS4) console. On that date, SCEI introduced the CUH-1000A series system and announced the launch date as November 15, 2013, for North American markets and November 29, 2013, for European, Australasian and Central and South American markets. Following a January 2014 announcement by the Chinese government that the country's 14-year game console ban would be lifted, the PS4 was scheduled to be the first Sony video game console to be officially and legally released in China since the PlayStation 2. On March 6, 2014, the president and CEO of Sony Computer Entertainment of America, Tretton, announced he was resigning from his position at the end of the month, citing a mutual agreement between himself and SCEA for the cessation of his contract. Tretton had worked at SCEA since 1995 and was a founding member of the company's executive team. He was involved in the launch of all PlayStation platforms in North America, including the original PlayStation, PS2, PSP, PS3, PSN, PS Vita, and PS4. Tretton was replaced by Shawn Layden, who was the vice-president and chief operating officer (COO) of Sony Network Entertainment International, effective April 1, 2014. On April 2, 2015, it was announced that Sony Computer Entertainment had acquired the intellectual property of the cloud gaming service OnLive, and that its services would cease by the end of the month.
In March 2015, Sony Network Entertainment International and Sony Computer Entertainment launched PlayStation Vue (PSVue) in the U.S., Sony's first cloud-based television service. A beta version had been offered on an invite-only basis to PS3 and PS4 users from November 2014, prior to the official launch. Sony signed deals with major networks, including CBS, Discovery, Fox and Viacom, so that users could view live streaming video, as well as catch-up and on-demand content, from more than 75 channels, such as Comedy Central and Nickelodeon. Although pricing and release dates for other regions were not publicized, Sony confirmed that PSVue would eventually be available on iPad, followed by other Sony and non-Sony devices.

On January 26, 2016, Sony announced the reorganization and integration of Sony Computer Entertainment (SCE) and Sony Network Entertainment International (SNEI), establishing a new company, Sony Interactive Entertainment LLC (SIE), on April 1, 2016, under the umbrella of Sony Corporation of America. Unlike the former SCE, SIE is headquartered in San Mateo, California, where SNEI had been based, and oversees the entire PlayStation brand, regional subsidiaries, and content business. SIE's Japanese branch, Sony Interactive Entertainment Inc., was established as a direct subsidiary of Sony Corporation.

On March 24, 2016, Sony announced the establishment of ForwardWorks, a new studio dedicated to producing "full-fledged" games based on Sony intellectual properties for mobile platforms such as smartphones; it would later develop Disgaea RPG and support Everybody's Golf on Android and iOS. ForwardWorks was later moved out of Sony Interactive Entertainment, becoming a subsidiary of Sony Music.

It was reported in December 2016 by multiple news outlets that Sony was considering restructuring its U.S. operations by merging its TV and film business with SIE. According to the reports, such a restructuring would have placed Sony Pictures under Sony Interactive's CEO, Andrew House, though House would not have taken over day-to-day operations of the film studio. According to one report, Sony was set to make a final decision on the possible merger of the TV, film and gaming businesses by the end of its fiscal year in March 2017. Judging by Sony's activity in 2017, however, the rumored merger never materialized.

On January 8, 2019, Sony announced that it had entered into a definitive agreement for Sony Interactive Entertainment to acquire Audiokinetic. On March 20, 2019, Sony Interactive Entertainment launched the educational video game platform toio in Japan. On May 20, 2019, Sony Interactive Entertainment announced the launch of PlayStation Productions, a production studio that adapts the company's extensive catalogue of video game titles for film and television. The new venture is headed by Asad Qizilbash and overseen by Shawn Layden, chairman of Worldwide Studios. On August 19, 2019, Sony Interactive Entertainment announced that it had entered into definitive agreements to acquire Insomniac Games. The acquisition was completed on November 15, 2019, with Sony paying ¥24,895 million (US$229 million) in cash.
On November 8, 2019, Gobind Singh Deo, Malaysia's Minister of Communications and Multimedia, announced that Sony Interactive Entertainment would open a new development office in the country in 2020 to provide art and animation as part of Worldwide Studios' efforts to make exclusive games for PlayStation consoles. The studio would be Sony Interactive Entertainment's first in Southeast Asia.

SIE announced the formation of PlayStation Studios in May 2020, to be formally introduced alongside the PlayStation 5 later that year. PlayStation Studios would serve as an umbrella organization for its first-party game development studios, including Naughty Dog, Insomniac Games, Santa Monica Studio, Media Molecule and Guerrilla Games, as well as for branding on games developed by studios brought in by Sony in work-for-hire situations. Sony planned to use the "PlayStation Studios" branding on both PlayStation 5 and new PlayStation 4 games to help with consumer recognition, though the branding was not ready for some of Sony's mid-2020 releases such as The Last of Us Part II.

SIE's parent Sony bought a minority stake in Epic Games for $250 million in July 2020, giving the company about a 1.4% stake in Epic. The investment came after Sony helped with Epic's development of new technologies in its Unreal Engine 5, which Epic was positioning to power games on the upcoming PlayStation 5 and take advantage of the console's high-speed internal storage for in-game streaming. In March 2021, SIE announced that it and RTS had acquired the assets and properties of the Evolution Championship Series as a joint venture. On April 13, 2021, Epic Games announced that it had received an additional $200 million strategic investment from SIE's parent, Sony Group Corporation. On May 3, 2021, Sony Interactive Entertainment announced the acquisition of a minority stake in Discord, which would be integrated into the PlayStation Network by early 2022.

On June 29, 2021, Sony Interactive Entertainment announced the acquisition of Housemarque. On July 1, 2021, Sony Interactive Entertainment announced the acquisition of Nixxes Software; Jim Ryan said later that month that SIE planned to work with Nixxes to bring more of its PlayStation games to personal computers. On September 8, 2021, Sony Interactive Entertainment announced the acquisition of Firesprite, a Liverpool-based developer with over 250 employees. The studio had multiple projects in development, focusing on genres outside the core offerings of PlayStation Studios. On September 29, 2021, Firesprite announced that it had acquired Fabrik Games, bringing the studio's headcount to 265. On September 30, 2021, Sony Interactive Entertainment announced that Bluepoint Games had joined PlayStation Studios, with Bluepoint working on original content instead of remaking an older game. On November 4, 2021, Sony Interactive Entertainment acquired a 5% stake in the video game publisher Devolver Digital. On December 10, 2021, Sony Interactive Entertainment announced the acquisition of the Seattle-based studio Valkyrie Entertainment.

Sony Interactive Entertainment announced its intent to purchase Bungie for $3.6 billion in January 2022; the deal closed on July 15, 2022. Under its terms, Bungie remained an independent development studio and publisher, allowing it to pursue development outside Sony's platforms, and the acquisition was intended to help bolster live-service games for SIE.
Sony Interactive Entertainment acquired Jade Raymond's Haven Studios in March 2022, incorporating it into PlayStation Studios and making it Sony's first development team in Canada. On July 18, 2022, Sony Interactive Entertainment and Repeat.gg announced that SIE had acquired Repeat.gg. On August 29, 2022, Sony Interactive Entertainment announced that it had acquired Savage Game Studios, a mobile game development studio with offices in Helsinki and Berlin. Savage Game Studios joined the newly created PlayStation Studios Mobile Division, which operates independently of console development. On August 31, 2022, it was announced that Sony Interactive Entertainment had acquired a 14.09% stake in FromSoftware.

On April 20, 2023, Sony Interactive Entertainment announced that it had acquired Firewalk Studios from ProbablyMonster. On August 24, 2023, Sony Interactive Entertainment announced that it had acquired the audio company Audeze, which makes gaming headphones. On November 2, 2023, Sony Interactive Entertainment announced that it would acquire the UK-based iSize, a company specializing in AI-powered solutions to improve video delivery. In the UK in November 2023, SIE was unable to dismiss a lawsuit from consumer advocates challenging the requirement that all digital content for PlayStation systems be sold through the PlayStation Store, along with the 30% fee that SIE takes on each sale; the suit carries potential damages of up to £6.3 billion (US$7.9 billion). On November 27, 2023, SIE signed the studio Shift Up as its first Korean second-party developer. On November 28, 2023, SIE and the Korean publisher NCSoft signed a strategic global partnership.

On February 27, 2024, SIE announced that it would lay off 900 employees, approximately 8% of its workforce, as part of a restructuring operation. Additionally, president and CEO Jim Ryan announced that London Studio would close in response to changes in the industry. Having announced his retirement in September 2023, Ryan left Sony at the end of March 2024. Sony Group president Hiroki Totoki became chairman of SIE on October 1, 2023, and interim CEO from April 1, 2024, following Ryan's departure.

On May 13, 2024, Sony Interactive Entertainment unveiled a new leadership structure effective June 1, 2024, with Hermen Hulst and Hideaki Nishino becoming CEOs of separate divisions within SIE. Hulst became CEO of the Studio Business Group, overseeing PlayStation's video game development as well as adaptations into other media such as television and film, while Nishino became CEO of the Platform Business Group, overseeing hardware, technology, accessories, PlayStation Network and relationships with other developers and publishers. Both report to SIE chairman Hiroki Totoki. On January 28, 2025, it was announced that Totoki would be promoted to CEO of Sony Group Corporation and Nishino to president and CEO of Sony Interactive Entertainment, with Hulst remaining CEO of the Studio Business Group and reporting to Nishino. In addition, Lin Tao, then senior vice president of finance, corporate development and strategy, would leave SIE to become CFO of Sony Group Corporation. The management changes went into effect on April 1, 2025.

Corporate affairs

Hideaki Nishino serves as president and CEO of SIE. The first and longest-serving CEO of SIE was Ken Kutaragi, who served from 1993 to 2007.
He is also known as the "Father of the PlayStation" and was honorary chairman of SIE for another four years after resigning as CEO. Kutaragi has remained at Sony as a senior technology advisor. As of November 7, 2019, Hermen Hulst is the head of Worldwide Studios. SIE has eight main headquarters around the world, and also has smaller offices and distribution centers in Los Angeles and San Diego, California; Toronto, Ontario; Adelaide, South Australia; Melbourne, Victoria; Seoul, South Korea; Singapore; Shanghai, China; and Liverpool, England.

SIE evaluates and approves games for its consoles. The process is stricter than for the Nintendo Seal of Quality, and developers submit game concepts to Sony early in the design process. Each SIE unit has its own evaluation process; SIEE, for example, approved Billy the Wizard for its consumers but SIEA did not. The company sometimes imposes additional restrictions, such as when it prohibited PS and PS2 games from being ported to the PSP unless 30% of the content was new to the Sony console.

Hardware

SCEI produces the PlayStation line of video game hardware, consisting of consoles and handhelds. Sony's first home console release, the PlayStation (codenamed "PSX" during development), was initially designed as a CD-ROM drive add-on for Nintendo's Super NES (known as the "Super Famicom" in Japan) video game console, in response to add-ons for competing platforms such as the TurboGrafx-CD and the Sega CD (sold in Japan as the PC Engine CD-ROM² System and Mega CD, respectively). When the prospect of releasing the system as an add-on dissolved, Sony redesigned the machine into a standalone unit. The PlayStation was released in Japan on December 3, 1994, and later in North America on September 9, 1995. By the end of the console's 12-year production cycle, the PlayStation had sold 102 million units.

SCEI's second home console, the PlayStation 2 (PS2), was released in Japan on March 4, 2000, and later in North America and Europe in October and November 2000, respectively. The PS2 is powered by a proprietary central processing unit, the Emotion Engine, and was the first video game console to offer DVD playback and backwards compatibility with original PlayStation games out of the box. The PS2 included a DVD drive and retailed in the U.S. for US$299. SCEI received heavy criticism after the launch of the PS2 over its launch lineup, the difficulties the system presented for video game designers, and the trouble developers had porting Sega Dreamcast games to it. Despite these complaints, the PlayStation 2 received widespread support from third-party developers throughout its lifespan on the market. On December 28, 2012, Sony confirmed that it would cease production of the PS2 through a gradual process starting in Japan; the continuing popularity of the console in markets like Brazil and India meant that PS2 products would still be shipped, and games for the console were still being released in March 2013. The PS2 stands as the best-selling home video game console in history, with a total of 155 million consoles sold.
Writing for the ExtremeTech website at the end of 2012, James Plafke described the PS2 as revolutionary and proclaimed that the console "turn[ed] the gaming industry on its head":

Aside from being the "first" next-gen console, as well as providing many, many people with their first DVD player, the PlayStation 2 launched in something of a Golden Age of the non-PC gaming industry. Gaming tech was becoming extremely sophisticated ... Sony seemingly knew the exact route toward popularity, turning the console with the least powerful hardware of that generation into a juggernaut of success.

The PlayStation Portable (PSP) was SCEI's first foray into the handheld console market. Its development was first announced during SCE's E3 conference in 2003, and it was officially unveiled during the company's E3 conference on May 11, 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in Europe and Australia on September 1, 2005. The console has since seen two major redesigns, with new features including a smaller size, more internal memory, a better-quality LCD screen and lighter weight.

The launch date for the PS3 was announced by Hirai at the pre-E3 conference held at Sony Pictures Studios in Culver City, California, on May 8, 2006. The PS3 was released in Japan on November 11, 2006, and in the United States on November 17, 2006. Technology journalists observed that Sony had followed Microsoft's approach with the Xbox 360 in producing the PS3 in two versions: one with a 20 GB hard drive and the other with a 60 GB hard drive. The PS3 uses a unique processing architecture, the Cell microprocessor, a proprietary technology developed by Sony in conjunction with Toshiba and IBM. Its graphics processing unit, the RSX "Reality Synthesizer", was co-developed by Nvidia and Sony. Several variations of the PS3 have been released, each with slight hardware and software differences and each denoted by the size of the included hard disk drive.

The PS Vita is the successor to the PlayStation Portable. It was released in Japan and other parts of Asia on December 17, 2011, and then in Europe, Australia and North America on February 22, 2012. Internally, the Vita features a 4-core ARM Cortex-A9 MPCore processor and a 4-core SGX543MP4+ graphics processing unit, as well as the LiveArea software as its main user interface, which succeeds the XrossMediaBar. On March 1, 2019, Sony ended production of the system and its physical cartridge games.

The PS4 was announced as the successor to the PS3 and was launched in North America on November 15, 2013, in Europe on November 29, 2013, and in Japan on February 22, 2014. Described by Sony as a "next generation" console, the PS4 included features such as enhanced social capabilities, second-screen options involving devices like the handheld PlayStation Vita, a membership service, and compatibility with the Twitch live-streaming platform. Following a January 2014 announcement by the Chinese government that the country's 14-year game console ban would be lifted, the PS4 was scheduled to be the first Sony video game console to be officially and legally released in China since the PlayStation 2; the ban had been enacted in 2000 to protect the mental health of young people. Around 70 game developers, including Ubisoft and Koei, would serve Chinese PlayStation users. The Chinese release dates and price details were announced in early December, with SCEI confirming a launch of January 11, 2015.
Sony announced that both the PS4 and Vita consoles would be released in China, with the PS4 package including a controller and either a 500 GB or 1 TB hard drive. The 20th anniversary of the original PlayStation console was celebrated on December 6, 2014, with the release of a limited anniversary edition of the PlayStation 4 whose aesthetic design recalled the original 1994 PlayStation. The PS5 was announced in 2019 as the successor to the PS4, and was released in Australia, Japan, New Zealand, North America, and South Korea on November 12, 2020, with a further worldwide release on November 19, 2020.

Software and franchises

SIE has maintained several in-house studios since 2005, with the most recent move being to brand these as PlayStation Studios starting in 2020. All of these studios develop PlayStation console-exclusive games for Sony. The current and former studios associated with SIE, along with their respective franchises or games of note, span North America and Europe, as do SIE's game development support units. In addition, Bungie has operated as an independent studio and publisher under SIE since July 2022.

SIE began releasing some of its first-party studios' exclusive titles for Windows in 2020, starting with Horizon Zero Dawn in August 2020, followed by Days Gone in May 2021. Layden said in a 2021 interview that he was part of the team that came up with this concept, recognizing that "we need to go out to where these new customers are, where these new fans could be. We need to go to where they are... Because they've decided not to come to my house, so I've got to go their house now. And what's the best way to go to their house? Why not take one of our top-selling games?" Ryan said in an interview that with some of the later PlayStation 4 titles "There's an opportunity to expose those great games to a wider audience", and that Horizon Zero Dawn's release on Windows showed there was strong interest in further releases. An investor report in 2021 stated that a primary factor in SIE's desire to expand into PC gaming under Ryan was the motivation to expand the PlayStation brand into China, Russia and India, markets where console-oriented gaming is far less prevalent than in the West and Japan. In 2021, after acquiring Nixxes, which would become its go-to developer for these ports, Sony confirmed that it was dedicated to PC gaming and valued PC gamers, although the PlayStation consoles would remain the "first" and "best" places to play its games.

Subsequent Windows releases included 2018's God of War in January 2022, Marvel's Spider-Man Remastered in August 2022, the Uncharted: Legacy of Thieves collection and Sackboy: A Big Adventure in October 2022, Marvel's Spider-Man: Miles Morales in November 2022, Returnal in February 2023, The Last of Us Part I in March 2023, and Ratchet & Clank: Rift Apart in July 2023. Video Games Chronicle observed that Sony had established a label, PlayStation PC, around April 2021 to handle the publication of its games on Windows; the label was quietly renamed PlayStation Publishing in June 2024. SIE stated in a May 2022 investor report that sales of PC ports of its games had grown from $30 million in its 2020 fiscal year to $80 million in 2021, and were estimated at $300 million for 2022.
Because of this, SIE plans to continue to support PC releases of its PlayStation-exclusive games and anticipates that by 2025 a third of its games revenue will come from PC sales. SIE has also pursued the mobile games market, forming a division named ForwardWorks to develop mobile games for Japan in 2016. To expand this ambition to the West, it hired a former content manager for Apple Arcade in 2020 as a means of bringing its IPs to mobile platforms. SIE acquired Savage Game Studios, its first dedicated mobile developer within PlayStation Studios, in August 2022 for an undisclosed sum. It expects that by 2025 mobile games will make up 20% of its games revenue.

Outside Windows and mobile, Sony Interactive Entertainment has also periodically published or licensed games for distribution on other game platforms. In 2021, its annual sports franchise MLB The Show was renegotiated for release on non-PlayStation consoles for the first time in the series' history, beginning with MLB The Show 21, which launched simultaneously on Xbox One and Xbox Series X/S alongside the PlayStation versions. Because of SIE's competitive opposition to Microsoft, the Xbox versions were published by MLB Advanced Media, which also allowed the series to be carried on Microsoft's subscription service Xbox Game Pass. The next installment, MLB The Show 22, marked the series' debut on Nintendo consoles with a release on Nintendo Switch alongside PlayStation and Xbox.

In 2024, Sony released Lego Horizon Adventures through the PlayStation Publishing label that already handled its Windows titles. The game, a Lego video game spinoff of the first-party Horizon series, was released on Nintendo Switch in tandem with PlayStation 5 and Windows, making it the first SIE-published game to appear on a Nintendo system, as well as the first Sony franchise to do so since Wipeout 64 (1998) for the Nintendo 64. Sony Interactive Entertainment entered a licensing deal with the publisher Bandai Namco Entertainment to distribute remasters and new entries in first-party PlayStation franchises on multiple platforms while SIE retains final ownership of the intellectual properties, with particular attention to franchises formerly developed by Japan Studio. In 2025, Freedom Wars Remastered launched for Nintendo Switch and Windows in addition to PlayStation 4 and PlayStation 5. Bandai Namco will also publish Patapon 1+2 Replay and develop Everybody's Golf: Hot Shots for Nintendo Switch, PlayStation 5 and Windows under license from SIE. In July 2025, Sony announced that it would publish Helldivers 2 for Xbox Series X/S under PlayStation Publishing, making it the first game directly distributed by SIE on an Xbox console.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTERaskin1985103-19] | [TOKENS: 8460]
Joke

A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punchline, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition:

A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry.

It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour: the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are involuntary humour, situational humour, practical jokes, slapstick and anecdotes.

Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1]

History in print

Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e. temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.

Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period, and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three oldest known jokes in the world. This comic triple dates back to 1200 BC in Adab. It concerns three men seeking justice from a king on the matter of ownership of a newborn calf, for whose birth they all consider themselves partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story, which included the punchline, has not survived intact, though legible fragments suggest it was bawdy in nature.

Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase "tertia deducta" can be translated as "with one-third off (in price)" or "with Tertia putting out."

The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek and dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". The British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch".

During the 15th century, the printing revolution spread across Europe following the development of the movable-type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470; the popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in the more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear both to inform and to borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Just one of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded.

There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons.

Telling jokes

Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking: who is telling what jokes to whom, and why are they being told when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking.

Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience of hearing an off-colour joke: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…". Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral or ethical content of the joke.

The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings; the punchline remains the same, but it is more or less appropriate depending on the current context.

The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to make it acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience, as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting.

The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, moving from general to topical to explicitly sexual humour, signalled openness on the part of the waitress to a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better: what makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships

The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back-and-forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility, but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness; to put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa, but they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship.

Electronic

The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and is for the most part solitary. While the text of a joke is preserved, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially.

Internet joking forces a re-evaluation of social spaces and social groups: they are no longer defined only by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences.

A study by the folklorist Bill Ellis documented how one such evolving cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and the responses to them. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation.

Joke cycles

A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture.

Many joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns".

The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes, he finds that the "stupid" ethnic target of the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. Thus Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it."

A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by the widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation previously undertaken by folklorists and ethnologists.
Classification systems

As folktales and other types of oral literature became collectables throughout Europe in the 19th century (the Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Because of its focus on older tale types and obsolete actors (e.g., the numbskull), the Aarne–Thompson index does not provide much help in identifying and classifying the modern joke.

A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. While it does not provide a way to classify a text by more than one element at a time, it does make it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices: one can select an index for medieval Spanish folk narratives, another for linguistic verbal jokes, and a third for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices, as well as a how-to guide on creating one's own index.

Several difficulties have been identified with these systems of classifying oral narratives according to either tale types or story elements. The first major problem is their hierarchical organisation: one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents always have at least one actor and usually an item, most narratives can be ordered under multiple headings, which leads to confusion about both where to file an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems:

…Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour (GTVH), developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources (KRs), can be evaluated largely independently of each other and then combined into a concatenated classification label. The six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA).

As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. The system also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour.

Joke and humour research

Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6]

Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies, where everything funny tends to be gathered under the umbrella term "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A newer psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased.

A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. The laugh can also be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter are always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions.

The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study: linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools.

"The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant of the more general incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny: the text must be compatible, fully or in part, with two different scripts, and the two scripts must be opposed to each other. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation of telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index, first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of the joke's meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study built on the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published in The Expression of the Emotions in Man and Animals one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture". In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary, albeit entertaining, perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning, because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH/GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures" which "can serve any formal and/or computational treatment of humor well"; toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
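As an illustration of the template-driven punning described above, here is a minimal sketch in Python. The joke frame and word list are invented for this example; published punning programs used larger hand-built lexicons and schemas, but the principle is the same: pre-defined options are slotted into a fixed joke frame.

    # Minimal sketch of a template-based pun generator. The joke frame is
    # fixed; the "intelligence" is limited to choosing from a hand-built list
    # of punning options. Entries and template are invented for illustration.
    import random

    # Each entry pairs a setup word with a punning punchline built on it.
    PUN_OPTIONS = [
        ("lettuce", "Lettuce in, it's cold out here!"),
        ("olive", "Olive you and I miss you!"),
    ]

    TEMPLATE = "Knock knock. Who's there? {word}. {word} who? {punchline}"

    def make_pun() -> str:
        word, punchline = random.choice(PUN_OPTIONS)
        return TEMPLATE.format(word=word.capitalize(), punchline=punchline)

    print(make_pun())

Everything such a program "knows" is in its option list; it has no semantic script for lettuce or olives, which is precisely the limitation discussed above.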
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-183] | [TOKENS: 12858]
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios holds the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position even in mid-air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villager NPCs, trading emeralds for different goods and vice versa.
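For readers curious how a voxel grid like the one described above might be represented in code, the following Python sketch stores blocks in a sparse map from integer coordinates to block names. It is purely illustrative, not Minecraft's actual implementation, which uses chunked storage and far more elaborate data structures.

    # Illustrative sparse voxel store: integer (x, y, z) coordinates mapped
    # to block names, with absent coordinates treated as air. This shows the
    # general idea of a voxel grid, not Minecraft's chunk-based internals.
    world: dict[tuple[int, int, int], str] = {}

    def place_block(x: int, y: int, z: int, block: str) -> None:
        world[(x, y, z)] = block

    def mine_block(x: int, y: int, z: int) -> str:
        # Mining removes the block and leaves air; return what was mined.
        return world.pop((x, y, z), "air")

    place_block(0, 64, 0, "dirt")
    place_block(0, 65, 0, "torch")
    print(mine_block(0, 64, 0))   # -> dirt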
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given one of nine randomly selected default character skins, such as Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on how far players can travel have existed throughout development, both intentional and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, whereby players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
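The defining property of the seed-based procedural generation described above is determinism: the same seed always reproduces the same world, so terrain can be generated on demand rather than stored in full. The Python sketch below illustrates only that principle; Minecraft's real generator is far more elaborate, layering smooth noise functions, biomes, and structures.

    # Toy illustration of deterministic, seed-based terrain: the same seed
    # and coordinates always yield the same surface height, so terrain can be
    # generated on demand as players explore. Real generators use smooth
    # noise (e.g. Perlin noise) so neighbouring columns are correlated; this
    # hash-based version produces uncorrelated, jagged heights.
    import hashlib

    def column_height(seed: int, x: int, z: int) -> int:
        digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
        return 60 + digest[0] % 8   # a surface height between 60 and 67

    # Revisiting the same coordinates reproduces identical terrain.
    assert column_height(12345, 10, -4) == column_height(12345, 10, -4)
    print(column_height(12345, 10, -4))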
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem, which takes about nine minutes to scroll past, is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die, and the items in their inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as spectators after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to unlimited quantities of the game's resources and items through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a Realm, using a hosting provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers offer a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Edition Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced that support for cross-platform play between Windows 10, iOS, and Android platforms would be added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
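As a concrete illustration of the operator commands mentioned above, the lines below show a small sample of standard commands as typed into the server console (in in-game chat, each is prefixed with a slash). These are common vanilla commands, but exact syntax can vary slightly between versions, and the player names here are invented.

    time set day             # set the world time to morning
    tp Alice Bob             # teleport player Alice to player Bob
    whitelist add Alice      # allow the username Alice to join
    whitelist on             # admit only whitelisted players
    ban Mallory              # disallow the username Mallory from the server
    op Alice                 # grant Alice operator privileges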
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and, by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013 and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that Persson explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and the Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), terms which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as the various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, while "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but he later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. The move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but it received updates bringing it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, the latter was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to the Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. Bedrock Edition received a native PlayStation 5 version on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as the "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in autumn 2018; it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released on the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release brought new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other, although the two games would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for the Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Of learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate-color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining the primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, whose label has published all of the other artists' releases. Gareth Coker also composed some of the music for the mini games of the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that the material written by then was longer than the previous two albums combined, which together run over three hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, the gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 Edition was the best-received port to date, being praised for worlds 36 times larger than those of the PlayStation 3 Edition and described as nearly identical to the Xbox One Edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and was never commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories of Best Debut Game, Best Downloadable Game and the Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category, and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game of the Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2022. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first Mob Vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs (the crab, the penguin, and the armadillo), with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired.

Cultural impact

In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It has also been referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding: "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with a highest point of 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way.

Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or merely superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the event, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA notice was later withdrawn.

Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Internet] | [TOKENS: 16547]
History of the Internet

The Internet originated in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France. Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. J. C. R. Licklider articulated the idea of a universal network at the Information Processing Techniques Office (IPTO) of the United States Department of Defense (DoD) Advanced Research Projects Agency (ARPA). Independently, Paul Baran at the RAND Corporation proposed a distributed network based on data in message blocks in the early 1960s, and Donald Davies conceived of packet switching in 1965 at the National Physical Laboratory (NPL), proposing a national commercial data network in the United Kingdom. ARPA awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and sought input from Baran. The network of Interface Message Processors (IMPs) was built by a team at Bolt, Beranek, and Newman, with the design and specification led by Bob Kahn. The host-to-host protocol was specified mainly by graduate students, led by Steve Crocker at UCLA, along with Jon Postel and others. The ARPANET expanded rapidly across the United States, with connections to the United Kingdom and Norway. Several early packet-switched networks emerged in the 1970s to research and provide data networking. Louis Pouzin and Hubert Zimmermann pioneered a simplified end-to-end approach to internetworking at the IRIA. Peter Kirstein put internetworking into practice at University College London in 1973. Bob Metcalfe developed the theory and practice behind Ethernet and the PARC Universal Packet. ARPA initiatives and the International Network Working Group developed and refined ideas for internetworking, in which multiple separate networks could be joined into a network of networks. Vint Cerf, then at Stanford University, and Bob Kahn, then at DARPA, published their research on internetworking in 1974. Through the Internet Experiment Note series and later RFCs, this evolved into the Transmission Control Protocol (TCP) and Internet Protocol (IP), two protocols of the Internet protocol suite. The design reflected concepts pioneered in the French CYCLADES project directed by Louis Pouzin. The development of packet switching networks was complemented by mathematical work in the 1970s by Leonard Kleinrock at UCLA. In the late 1970s, national and international public data networks emerged based on the X.25 protocol, designed by Rémi Després and others. In the United States, the National Science Foundation (NSF) funded national supercomputing centers at several universities and provided interconnectivity in 1986 with the NSFNET project, creating network access to these supercomputer sites for research and academic organizations in the United States.
International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP on existing networks in the United States and around the world marked the beginnings of the Internet. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T in the United States. Research at CERN in Switzerland by the British computer scientist Tim Berners-Lee in 1989–90 resulted in the World Wide Web, linking hypertext documents into an information system accessible from any node on the network. The dramatic expansion of the capacity of the Internet, enabled by the advent of wavelength-division multiplexing (WDM) and the rollout of fiber optic cables in the mid-1990s, had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and, by 2019, 800 Gbit/s. The Internet's takeover of the global communication landscape was rapid in historical terms: it carried only 1% of the information flowing through two-way telecommunications networks in 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007. The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking services. However, the future of the global network may be shaped by regional differences.

Foundations

J. C. R. Licklider, while working at BBN, proposed a computer network in his March 1960 paper "Man–Computer Symbiosis": "A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper". In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication", one of the first descriptions of a networked future. In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within ARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.
The infrastructure for telephone systems at the time was based on circuit switching, which requires pre-allocation of a dedicated communication line for the duration of the call. Telegram services had developed store-and-forward telecommunication techniques. Western Union's Automatic Telegraph Switching System Plan 55-A was based on message switching. The U.S. military's AUTODIN network became operational in 1962. These systems, like SAGE and SABRE, still required rigid routing structures that were prone to a single point of failure. The technology was considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link. In the early 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information would be transmitted across a "distributed" network, divided into what he called "message blocks". Baran's design was intended for high-speed digital communication of voice messages using low-cost hardware; it was not implemented. In addition to being prone to a single point of failure, existing telegraphic techniques were inefficient and inflexible. Beginning in 1965, Donald Davies, at the National Physical Laboratory in the United Kingdom, independently developed a similar proposal, designed for high-speed data communication in computer networks, which he called packet switching, the term that would ultimately be adopted. Packet switching is a technique for transmitting computer data by splitting it into very short, standardized chunks, attaching routing information to each of these chunks, and transmitting them independently through a computer network. It provides better bandwidth utilization than the traditional circuit switching used for telephony, and enables the connection of computers with different transmission and receive rates. It is a concept distinct from message switching.

Networks that led to the Internet

Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks. Later that year, at the National Physical Laboratory (NPL) in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching. The following year, he described the use of "switching nodes" to act as routers in a digital communication network. The proposal was not taken up nationally, but he produced a design for a local network to serve the needs of the NPL and prove the feasibility of packet switching using high-speed data transmission. To deal with packet permutations (due to dynamically updated route preferences) and datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. In 1967, he and his team were the first to use the term 'protocol' in a modern data-commutation context. In 1968, Davies began building the Mark I packet-switched network to meet the needs of his multidisciplinary laboratory and prove the technology under operational conditions. The network's development was described at a 1968 conference. Elements of the network became operational in early 1969, the first implementation of packet switching, and the NPL network was the first to use high-speed links.
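To make the datagram model described above concrete, here is a minimal sketch in Python. It is a toy illustration, not any historical implementation: a message is split into short, numbered chunks with routing information attached, the chunks are delivered independently (and possibly out of order), and the receiving host, not the network, checks that everything arrived, in the spirit of the end-to-end principle. The packet format and host names are invented for the example.

import random

PAYLOAD_SIZE = 8  # bytes per packet; real networks use far larger units

def packetize(message: bytes, src: str, dst: str) -> list:
    # Split the message into short, standardized chunks and attach
    # routing and sequencing information to each one.
    chunks = [message[i:i + PAYLOAD_SIZE]
              for i in range(0, len(message), PAYLOAD_SIZE)]
    return [{"src": src, "dst": dst, "seq": n, "total": len(chunks),
             "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets: list) -> bytes:
    # End-to-end responsibility: the receiver verifies that every
    # sequence number arrived before declaring the message complete.
    expected = packets[0]["total"]
    by_seq = {p["seq"]: p["payload"] for p in packets}
    if len(by_seq) != expected:
        raise ValueError("datagram loss detected; sender must retransmit")
    return b"".join(by_seq[n] for n in range(expected))

packets = packetize(b"This message crosses the network in pieces.",
                    src="host-a", dst="host-b")
random.shuffle(packets)  # packets may take different routes and reorder
print(reassemble(packets).decode())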
Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design. The Mark II version, which operated from 1973, used a layered protocol architecture. In 1977, there were roughly 30 computers, 30 peripherals and 100 VDU terminals, all able to interact through the NPL network. The NPL team carried out simulation work on wide-area packet networks, including datagrams and congestion, and research into internetworking and secure communications. The network was replaced in 1986. Robert Taylor was promoted to the head of the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system. As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). For Taylor, the need for networking became obvious from the waste of resources apparent to him: "For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.... I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet." Bringing in Larry Roberts from MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks (WANs). Wide area networks emerged during the late 1950s and became established during the 1960s. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's idea to use Interface Message Processors (IMPs) to create a message switching network. At the conference, Roger Scantlebury presented Donald Davies' work on a hierarchical digital communications network using packet switching and referenced the work of Paul Baran at RAND. Roberts incorporated the packet switching concepts proposed by Davies into the ARPANET design, upgraded the proposed communications speed from 2.4 kbit/s to 50 kbit/s, and sought input from Baran. ARPA awarded the contract to build the network to Bolt Beranek & Newman. The "IMP guys", led by Frank Heart and Bob Kahn, developed the routing, flow control, software design and network control. The first ARPANET link was established between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science, led by Leonard Kleinrock, and the NLS system at Stanford Research Institute (SRI), led by Douglas Engelbart in Menlo Park, California, at 22:30 hours on October 29, 1969. Recalling the actions of the graduate and undergraduate students working with the IMPs, Kleinrock said in an interview: "We set up a telephone connection between us and the guys at SRI. We typed the L and we asked on the phone, 'Do you see the L?' 'Yes, we see the L,' came the response. We typed the O, and we asked, 'Do you see the O.' 'Yes, we see the O.' Then we typed the G, and the system crashed... Yet a revolution had begun."
By December 1969, a four-node network was connected by adding the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In the same year, Taylor helped fund ALOHAnet, a system designed by professor Norman Abramson and others at the University of Hawaiʻi at Mānoa that transmitted data by radio between seven computers on four islands in Hawaii. Steve Crocker, a graduate student at UCLA, formed the "Network Working Group" in 1969. Working with Jon Postel and others, he initiated and managed the Request for Comments (RFC) process, which is still used today for proposing and distributing contributions. RFC 1, entitled "Host Software", was written by Crocker and published on April 7, 1969. The protocol for establishing links between network sites in the ARPANET, the Network Control Program (NCP), was completed in 1970. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. Roberts presented the idea of packet switching to communications professionals and faced anger and hostility. Before the ARPANET was operating, they argued that router buffers would quickly run out; after it was operating, they argued that packet switching would never be economical without government subsidy. Baran had faced the same rejection and had thus failed to convince the military to construct a packet switching network. Early international collaborations via the ARPANET were sparse. Connections were made in 1973 to Norway (NORSAR), via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first international heterogeneous resource sharing network. Throughout the 1970s, Leonard Kleinrock developed the mathematical theory to model and measure the performance of packet-switching technology, building on his earlier work on the application of queueing theory to message switching systems. By 1981, the number of hosts had grown to 213. The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. CYCLADES was a French research network designed and directed by Louis Pouzin. In 1972, he began implementing his ideas to build on the work of Donald Davies and explore alternatives to the early ARPANET design. His goal was to enable internetworking, which he called a "catenet". This was the first network to implement the end-to-end principle by making the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams. Concepts implemented in this network influenced the initial proposal of the Transmission Control Program and were reflected in the later TCP/IP architecture. Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (CCITT, now the ITU Telecommunication Standardization Sector, ITU-T) in the form of X.25 and related standards. X.25 is built on the concept of virtual circuits emulating traditional telephone connections. The initial ITU standard on X.25 was approved in March 1976. Existing networks such as Telenet in the United States adopted X.25, as did new public data networks such as DATAPAC in Canada and TRANSPAC in France.
The protocol formed the basis for the SERCnet network between British academic and research sites, which in the 1980s became JANET, the United Kingdom's high-speed national research and education network (NREN). The British Post Office, Western Union International, and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure. X.25 was supplemented by the X.75 protocol, which enabled internetworking between national PTT networks in Europe and commercial networks in North America. Unlike the ARPANET and its protocols, X.25 was available for business use. In 1979, CompuServe became the first service to offer commercial electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Telenet offered its Telemail electronic mail service, which, unlike the ARPANET's network mail system, was also targeted at enterprise use. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features. Bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators. Many of these public data networks went on to adopt TCP/IP and formed the infrastructure of the early Internet. In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned by both private individuals and small companies. Sublink Network evolved into one of the first examples of Internet technology coming into use through popular diffusion.

1973–1989: Merging the networks and creating the Internet

With so many different networking methods seeking interconnection, a method was needed to unify them. Louis Pouzin initiated the CYCLADES project in 1972, building on the work of Donald Davies and the ARPANET. An International Network Working Group formed in 1972; active members included Vint Cerf from Stanford University, Alex McKenzie from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin and Hubert Zimmermann from IRIA. Pouzin coined the term catenet for concatenated network. Bob Metcalfe at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.
Bob Kahn, now at DARPA, recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetworking protocol. Instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. In May 1974, Cerf and Kahn published their ideas, which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network. The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design, using two simplex communication channels for each user session. With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund the development of prototype software, work on which was documented in the Internet Experiment Notes. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London (UCL). After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the San Francisco Bay Area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three-network demonstration was conducted, linking the ARPANET, SRI's Packet Radio Van on the Packet Radio Network, and the Atlantic Packet Satellite Network (SATNET), including a node at UCL. The software was redesigned as a modular protocol stack using full-duplex channels; between 1976 and 1977, Yogen Dalal, John Shoch and Robert Metcalfe, among others, proposed separating TCP's routing and transmission control functions into two discrete layers, which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978. Version 4 was described in RFC 791 (September 1981), along with RFC 792 and RFC 793 (a sketch of the RFC 791 header layout follows below). It was installed on SATNET in 1982 and on the ARPANET in January 1983, after the DoD made it standard for all military computer networking. This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model or DARPA model. Cerf credits several of his graduate students with important work on the design and testing (see List of Internet pioneers). DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems. After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. In July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
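The 20-byte header that RFC 791 defines for IP version 4 is simple enough to pack by hand. The following Python sketch is a toy illustration rather than a working network stack: it builds a minimal header and computes the ones'-complement checksum the RFC specifies. The addresses are drawn from reserved documentation ranges, and the identification value is arbitrary.

import struct
import socket

def ipv4_checksum(header: bytes) -> int:
    # RFC 791 header checksum: the one's complement of the
    # one's-complement sum of the header taken as 16-bit words.
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    version_ihl = (4 << 4) | 5          # IPv4, 5 x 32-bit words (20 bytes)
    fields = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                              # type of service
        20 + payload_len,               # total length
        0x1234,                         # identification (arbitrary here)
        0,                              # flags / fragment offset
        64,                             # time to live
        6,                              # protocol: 6 = TCP
        0,                              # checksum placeholder
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )
    # Splice the computed checksum into bytes 10-11 of the header.
    checksum = ipv4_checksum(fields)
    return fields[:10] + struct.pack("!H", checksum) + fields[12:]

header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=0)
print(header.hex())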
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions and to a growing number of companies, such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology. Several other branches of the U.S. government, namely the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first wide area networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET, and DOE evolved the Energy Sciences Network (ESNet). NASA's TCP/IP-based NASA Science Network (NSN), developed in the mid-1980s, connected space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center, creating the first multiprotocol wide area network, the NASA Science Internet (NSI). NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents. In 1981, NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. CSNET played a central role in popularizing the Internet outside the ARPANET. In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users, and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. In 1991, NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 speeds of 45 Mbit/s. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSINet and Sprint.
As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic. The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom. The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as a short form of internetworking, when the two terms were used interchangeably. In general, an internet was any collection of networks linked by a common protocol. In the period when the ARPANET was connected to the newly formed NSFNET project in the late 1980s, the term came to be used as the name of the network: the Internet, the large, global TCP/IP network. Opening the Internet and the fiber optic backbone to corporate and consumer use increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using "interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing". This technology became known as wavelength-division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995. To develop a mass-capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., which deployed the world's first dense WDM system on the Sprint fiber network in June 1996. This was referred to as the real start of optical networking. As interest in networking grew with the needs of collaboration, exchange of data, and access to remote computing resources, Internet technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic. Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail. Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centralized architecture that the ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, allowing route aggregation to decrease the size of routing tables (a short route-aggregation sketch appears at the end of this section). The MOS transistor underpinned the rapid growth of telecommunication bandwidth over the second half of the 20th century. To address the need for transmission capacity beyond that provided by radio, satellite and analog copper telephone lines, engineers developed optical communications systems based on fiber optic cables powered by lasers and optical amplifier techniques. The concept of lasing arose from a 1917 paper by Albert Einstein, "On the Quantum Theory of Radiation".
Einstein expanded upon a conversation with Max Planck on how atoms absorb and emit light, part of a thought process that, with input from Erwin Schrödinger, Werner Heisenberg and others, gave rise to quantum mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only by spontaneous emission, such as the light emitted by an incandescent light or the Sun, but also by stimulated emission. Forty years later, on November 13, 1957, Columbia University physics student Gordon Gould first realized how to make light by stimulated emission through a process of optical amplification. He coined the term LASER for this technology: Light Amplification by Stimulated Emission of Radiation. Using Gould's light amplification method (patented as "Optically Pumped Laser Amplifier"), Theodore Maiman made the first working laser on May 16, 1960. Gould co-founded Optelecom in 1973 to commercialize his inventions in optical fiber telecommunications, just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems, which it delivered to Chevron and the US Army Missile Defense. Three years later, in 1977, GTE deployed the first optical telephone system in Long Beach, California. By the early 1980s, optical networks powered by lasers, LED and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers. In 1982, Norway (NORSAR and NDRE) and Peter Kirstein's research group at University College London (UCL) left the ARPANET and reconnected using TCP/IP over SATNET. There were 40 British research groups using UCL's link to the ARPANET in 1975; by 1984 there was a user population of about 150 people on both sides of the Atlantic. Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established. The Computer Science Network (CSNET) began operation in 1981 to provide networking connections to institutions that could not connect directly to ARPANET. Its first international connection was to Israel in 1984. Soon after, connections were established to computer science departments in Canada, France, and Germany. In 1988, the first international connections to NSFNET were established by France's INRIA, and by Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks.
The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden. In January 1989, CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam. The United Kingdom's national research and education network (NREN), JANET, began operation in 1984 using the UK's Coloured Book protocols and connected to NSFNET in 1989. In 1991, JANET adopted Internet Protocol on the existing network. The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol. The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. Japan, which had built the UUCP-based network JUNET in 1984, connected to CSNET, and later to NSFNET in 1989, marking the spread of the Internet to Asia. South Korea set up a two-node domestic TCP/IP network in 1982, the System Development Network (SDN), adding a third node the following year. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the NSFNET in 1990. In Australia, ad hoc networks connecting to ARPA and between Australian universities formed in the late 1980s, based on various technologies such as X.25, UUCPNet, and CSNET. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia. New Zealand adopted the UK's Coloured Book protocols as an interim standard and established its first international IP connection to the U.S. in 1989. While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they built organizations for Internet resource administration and for sharing operational experience, which enabled more transmission facilities to be put into place. At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications. In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence, using a private network from NSN's leased ground station in New Jersey.
InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems. In 1996, a USAID-funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998. Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As with other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists. There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts. The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT). In South Korea, VDSL, a last-mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet. The People's Republic of China established its first TCP/IP college network, Tsinghua University's TUNET, in 1991. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to create its own digital divide by implementing a country-wide content filter. Japan hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992. As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

1989–2004: Rise of the global Internet, Web 1.0

Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government-body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet news to the public. In 1989, MCI Mail became the first commercial email provider to get an experimental gateway to the Internet. The first commercial dialup ISP in the United States was The World, which opened in 1989. In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C.
§ 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations. By 1990, ARPANET's goals had been fulfilled, new networking technologies exceeded the original scope, and the project came to a close. New network service providers, including PSINet, Alternet, CERFNet, ANS CO+RE, and many others, were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States. An event held on 11 January 1994, The Superhighway Summit at UCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications". The invention of the World Wide Web by Tim Berners-Lee at CERN, as an application on the Internet, brought many social and commercial uses to what was, at the time, a network of networks for academic and research institutions. The Web opened to the public in 1991 and began to enter general use in 1993–94, when websites for everyday use started to become available. During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. For context, mobile cellular devices, which today provide near-universal access, were used for business rather than being routine household items owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked the means to film or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (from tape to DVD, and to an extent still from floppy disc to CD). Enabling technologies used from the early 2000s, such as PHP, modern JavaScript and Java, AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks that simplified and sped up web development, largely awaited invention and eventual widespread adoption.
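The static pages of this era were served over a protocol simple enough to speak by hand. As a minimal sketch (example.com merely stands in for a host of the period), the following Python issues the kind of bare HTTP/1.0 request an early browser would send and reads the response until the server closes the connection:

import socket

HOST = "example.com"  # placeholder host for illustration

# Open a TCP connection to port 80 and send a plain HTTP/1.0 GET.
# (The Host header was only mandated by HTTP/1.1, but modern servers
# expect it, so it is included here.)
with socket.create_connection((HOST, 80), timeout=10) as sock:
    request = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    # HTTP/1.0 servers close the connection when the response is
    # complete, so reading until EOF yields the whole reply.
    chunks = []
    while data := sock.recv(4096):
        chunks.append(data)

response = b"".join(chunks)
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", errors="replace"))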
The Internet was widely used for mailing lists, emails, creating and distributing maps with tools like MapQuest, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards, the systems used were static and lacked widespread social engagement. It awaited a number of events in the early 2000s to change from a communications technology into a key part of global society's infrastructure. Typical design elements of these "Web 1.0" era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the user's email program.) During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top-level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash; the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow. The history of the World Wide Web up to around 2004 was retrospectively named and described by some as "Web 1.0". In the final stage of IPv4 address exhaustion, the last IPv4 address block was assigned in January 2011 at the level of the regional Internet registries. IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. IPv4 is in the process of replacement by IPv6, its successor, which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456, a vastly increased address space. The shift to IPv6 is expected to take a long time to complete. 2004–present: Web 2.0, global ubiquity, social media The rapid technical advances that would propel the Internet into its place as a social system, which has completely transformed the way humans interact with each other, took place during a relatively short period from around 2005 to 2010, coinciding with the point in the late 2000s at which the number of IoT devices surpassed the number of humans alive. The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote: The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] 
maybe even your microwave oven. The term resurfaced during 2002–2004, and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. "Web 2.0" does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. "Web 2.0" describes an approach in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking services, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in the third edition of New Media, described what he believed to characterize the differences between Web 1.0 and Web 2.0: [the] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy). This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples. Telephone systems have been slowly adopting voice over IP since 2003. Early experiments proved that voice can be converted to digital packets and sent over the Internet. The packets are collected and converted back to analog voice. The process of change that generally coincided with Web 2.0 was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared, or used to shop or seek information "on the move" – and used socially, as opposed to items sitting on a desk at home or used just for work. Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location-based) became common, with posts tagged by location, or websites and services becoming location-aware. Mobile-targeted websites (such as "m.example.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from only a few years before on far lower power usage, became enablers of this stage of Internet development, and the term "app" (short for "application program") became popularized, as did the "app store". This "mobile revolution" has allowed people to have a nearly unlimited amount of information at hand at all times. With the ability to access the internet from cell phones came a change in the way media was consumed. 
Media consumption statistics show that, among those aged 18 to 34, over half of media consumption took place on a smartphone. The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment. Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s, NASA and Google began working on a new network protocol, delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet Protocol does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed disruption-tolerant networking) has been ongoing since March 2009, and was scheduled to continue until March 2014. This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light-seconds. Internet governance As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System. The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. 
ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his death in 1998. As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., which subcontracted it to the small private-sector Network Solutions, Inc. The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group. Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations: registration services would be provided by Network Solutions; directory and database services would be provided by AT&T; and information services would be provided by General Atomics. Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry. In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. 
As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end-users. The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet. The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited beginning with the fourth IETF meeting, in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting in July 1987 was the first meeting with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed, and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as ca. 2,000 participants. Typically, one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States. The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues. RFCs are the main documentation for the work of the IAB, IESG, IETF, and IRTF. Originally intended as requests for comments, RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969. These technical memos documented aspects of ARPANET development. They were edited by Jon Postel, the first RFC Editor. RFCs cover a wide range of information, including proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics. 
RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original. The Internet Society (ISOC) is an international, nonprofit organization founded in 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, US, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world. ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making. Since the 1990s, the Internet's governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to remove itself from its relationship with the University of Southern California in 2000, and then, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community. The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments. In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. 
Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues. Tim Berners-Lee, inventor of the web, was becoming concerned about threats to the web's future, and in November 2009, at the IGF in Washington DC, he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all. In November 2019, at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good). Politicization of the Internet Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet. Examples include political activities such as public protest and the canvassing of support and votes. Net neutrality On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress HR discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations. 
On December 14, 2017, the FCC repealed its March 12, 2015 decision on net neutrality rules by a 3–2 vote. Use and culture Email has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT. The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail transfer began on the ARPANET shortly after its creation. In 1971, Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names. A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol. In addition, UUCP allowed the publication of text files that could be read by many others. The News software, developed by Steve Daniel and Tom Truscott in 1979, was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET, similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list). During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable to them because of the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using ftp commands written inside email messages. The file was encoded, broken into pieces, and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as the earlier Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned. Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including Archie in 1990, the Wide Area Information Server (WAIS) and Gopher in 1991, Veronica in 1992, and Jughead in 1993, alongside Internet Relay Chat (IRC, 1988) and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines. In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. 
A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000; FastTrack, Kazaa, Limewire, and BitTorrent in 2001; and Poisoned in 2003. All of these tools are general purpose and can be used to share a wide variety of content, but the sharing of music files, software, and later movies and videos has been a major use. While some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other. File hosting allowed people to expand their computers' storage and "host" their files on a server. Most file hosting services offer free storage, as well as larger storage amounts for a fee. These services have greatly expanded the internet for business and personal use. Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with themselves and other users. Not only does this application allow for file editing, hosting, and sharing; it also gives access to Google's own free-to-use office programs, such as Google Docs, Google Slides, and Google Sheets. The application has served as a useful tool for university professors and students, as well as others in need of cloud storage. Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser-based. Today, Dropbox works to keep workers and their files in sync and efficient. Mega, which has over 200 million users, is an encrypted storage and communication system that offers users free and paid storage, with an emphasis on privacy. As three of the largest file hosting services, Google Drive, Dropbox, and Mega represent the core ideas and values of such services. Online piracy The earliest form of online piracy began with a P2P (peer-to-peer) music sharing service named Napster, launched in 1999. Services like LimeWire, The Pirate Bay, and BitTorrent allowed anyone to engage in online piracy, sending ripples through the media industry. With online piracy came a change in the media industry as a whole. Mobile telephone data traffic Total global mobile data traffic reached 588 exabytes during 2020, a 150-fold increase from 3.86 exabytes/year in 2010. Most recently, smartphones accounted for 95% of this mobile data traffic, with video accounting for 66% by type of data. Mobile traffic travels by radio frequency to the closest cell phone tower and its base station, where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services, including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting; 2.1 trillion messages were logged in 2020. 
The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message of "Merry Christmas" over a commercial cell phone network to the CEO of Vodafone. The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet access on mobile phones was limited until prices came down from those of that model and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, Research in Motion (now BlackBerry Limited) launched its mobile phone email system for its BlackBerry product in America. To make efficient use of the small screen and tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user. Growth in demand Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021, when the number of active Internet users reached 4.66 billion people, representing more than half of the global population. Further demand for data, and the capacity to satisfy this demand, were forecast to increase to 717 terabits per second in 2021. This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network. These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world. Continued growth in traffic is expected for the foreseeable future, from a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube. Historiography There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. A sense of the difficulty in documenting early developments that led to the Internet can be gathered from the quote: "The Arpanet period is somewhat well documented because the corporation in charge – BBN – left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. 
The record exists in people's basements, in closets. ... So much of what happened was done verbally and on the basis of individual trust." — Doug Gale (2007) Notable works on the subject include a book by journalists Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996); and professor Janet Abbate's book Inventing the Internet (2000). Most scholarship and literature on the Internet lists ARPANET as the prior network that was iterated on and studied to create it, although other early computer networks and experiments existed alongside or before ARPANET. Such histories of the Internet have since been criticized as teleologies and Whig history; that is, they take the present to be the end point toward which history has been unfolding based on a single cause: In the case of Internet history, the epoch-making event is usually said to be the demonstration of the 4-node ARPANET network in 1969. From that single happening the global Internet developed. — Martin Campbell-Kelly, Daniel D. Garcia-Swartz In addition to these characteristics, historians have cited methodological problems arising in their work: "Internet history" ... tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories. — Andrew L. Russell (2012)
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-Julia-30] | [TOKENS: 4314]
Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL) that would be capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. 
As of January 2026, Python 3.14.3 is the latest stable release. All older supported 3.x branches have received security updates; the 3.9 series ended with Python 3.9.24 and then 3.9.25, its final release. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable executable of Python 3.14 is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do..while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use (all three appear in the sketch at the end of this paragraph). Alex Martelli is a Fellow at the Python Software Foundation and a Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." 
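A minimal sketch of two of the points above, assuming nothing beyond the standard library (the variable names are illustrative): the three common ways to format a string literal, and the functional tools alongside an equivalent list comprehension.

from functools import reduce

# Three ways to produce the same formatted string.
name, count = "world", 3
a = "hello %s, %d times" % (name, count)      # printf-style formatting
b = "hello {}, {} times".format(name, count)  # the str.format method
c = f"hello {name}, {count} times"            # f-string (Python 3.6+)
assert a == b == c

# map/filter versus the equivalent list comprehension.
nums = [1, 2, 3, 4, 5]
evens_squared = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))
assert evens_squared == [n * n for n in nums if n % 2 == 0] == [4, 16]

# reduce lives in functools, one of the two functional-tools modules.
assert reduce(lambda x, y: x + y, nums) == 15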
Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages; however, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels; a sketch follows this paragraph. 
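A sketch of the coroutine-like use of generators just described, assuming only standard Python (the function names are illustrative): send() passes a value back into a paused generator (Python 2.5+), and yield from delegates through multiple stack levels (Python 3.3+).

def running_average():
    # Receives values via send() and yields the running average.
    total, count = 0.0, 0
    average = None
    while True:
        value = yield average   # the argument of send() arrives here
        total += value
        count += 1
        average = total / count

def delegator():
    # yield from forwards send() calls through stack levels (3.3+).
    yield from running_average()

gen = running_average()
next(gen)              # advance to the first yield
print(gen.send(10))    # 10.0
print(gen.send(4))     # 7.0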
In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, to maintain the validity of the equation, the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. The sketch after this paragraph exercises these division and rounding rules. 
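The division, modulo, and rounding rules just described can be checked directly; a short sketch in which every assertion passes under Python 3:

# True division always yields a float; floor division rounds
# toward negative infinity.
assert 7 / 2 == 3.5 and 7 // 2 == 3 and -7 // 2 == -4

# The remainder takes the sign of the divisor, preserving the
# invariant b*(a//b) + a%b == a even for negative operands.
assert 4 % -3 == -2
a, b = -7, 3
assert a % b == 2 and b * (a // b) + a % b == a

# Python 3 breaks ties by rounding to the nearest even integer.
assert round(1.5) == round(2.5) == 2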
Python allows Boolean expressions that contain multiple equality relations, in a manner consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header; an example of a function that prints its inputs appears in the first sketch below. Code examples A "Hello, World!" program and a program to calculate the factorial of a non-negative integer are given in the sketches below. 
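The code samples referred to above did not survive extraction; the following are minimal reconstructions consistent with the surrounding text, with illustrative function and variable names. First, a function that prints its inputs, showing a default parameter value and optional type annotations in the function header:

def print_inputs(arg1: str, arg2: int = 2) -> None:
    # Annotations are optional and unenforced at run time; arg2
    # defaults to 2 when no second argument is supplied.
    print(arg1, arg2)

print_inputs("spam")        # prints: spam 2
print_inputs("spam", 42)    # prints: spam 42

The "Hello, World!" program is a single statement:

print("Hello, World!")

And a factorial program in the style commonly used in Python tutorials, relying on the arbitrary-precision integers described earlier:

n = int(input("Type a number, and its factorial will be printed: "))
if n < 0:
    raise ValueError("You must enter a non-negative integer")
factorial = 1
for i in range(2, n + 1):
    factorial *= i
print(factorial)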
Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, performing arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are web browser-based IDEs as well. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and Python now supports far fewer operating systems than in the past, many outdated platforms having been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative implementation may use unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binary sizes very large for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, thus allowing massively concurrent programs, and PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers, older projects, and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several strategies and tools for optimizing Python performance, despite the inherent slowness of an interpreted language. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases. 
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Tagalog_language] | [TOKENS: 9680]
Contents Tagalog language Tagalog (/təˈɡɑːlɒɡ/ tə-GAH-log, native pronunciation: [tɐˈɡaːloɡ]; Baybayin: ᜆᜄᜎᜓᜄ᜔) is an Austronesian language spoken as a first language by the ethnic Tagalog people, who make up a quarter of the population of the Philippines, and as a second language by the majority. Its de facto standardized and codified form, Filipino, is the national language of the Philippines, and is one of the nation's two official languages, alongside English. Tagalog is closely related to other Philippine languages, such as the Bikol languages, the Bisaya languages, Ilocano, Kapampangan, and Pangasinan, and more distantly to other Austronesian languages, such as the Formosan languages of Taiwan, Indonesian, Malay, Hawaiian, Māori, Malagasy, and many more. Classification Tagalog is a Central Philippine language within the Austronesian language family. Being Malayo-Polynesian, it is related to other Austronesian languages, such as Malagasy, Javanese, Indonesian, Malay, Tetum (of Timor), and Yami (of Taiwan). It is closely related to the languages spoken in the Bicol Region and the Visayas islands, such as the Bikol group and the Visayan group, including Waray, Hiligaynon and Cebuano. Tagalog differs from its Central Philippine counterparts in its treatment of the Proto-Philippine schwa vowel *ə. In most Bikol and Visayan languages, this sound merged with /u/ and [o]. In Tagalog, it has merged with /i/. For example, Proto-Philippine *dəkət (adhere, stick) is Tagalog dikít and Visayan and Bikol dukót. Proto-Philippine *r, *j, and *z merged with /d/, which is realized as /l/ between vowels. Proto-Philippine *ŋajan (name) and *hajək (kiss) became Tagalog ngalan and halík. Adjacent to an affix, however, it becomes /r/ instead: bayád (paid) → bayaran (to pay). Proto-Philippine *R merged with /ɡ/. *tubiR (water) and *zuRuʔ (blood) became Tagalog tubig and dugô. History The word Tagalog is possibly derived from the endonym taga-ilog ("river dweller"), composed of tagá- ("native of" or "from") and ilog ("river"), or alternatively, taga-alog deriving from alog ("pool of water in the lowlands"; "rice or vegetable plantation"). Linguists such as David Zorc and Robert Blust speculate that the Tagalogs and other Central Philippine ethno-linguistic groups originated in Northeastern Mindanao or the Eastern Visayas. Possible words of Old Tagalog origin are attested in the Laguna Copperplate Inscription from the tenth century, which is largely written in Old Malay. The first known complete book to be written in Tagalog is the Doctrina Christiana (Christian Doctrine), printed in 1593. The Doctrina was written in Spanish and in two transcriptions of Tagalog: one in the ancient, then-current Baybayin script and the other in an early Spanish attempt at a Latin orthography for the language. Throughout the 333 years of Spanish rule, various grammars and dictionaries were written by Spanish clergymen. In 1610, the Dominican priest Francisco Blancas de San José published the Arte y reglas de la lengua tagala (which was subsequently revised with two editions in 1752 and 1832) in Bataan. In 1613, the Franciscan priest Pedro de San Buenaventura published the first Tagalog dictionary, his Vocabulario de la lengua tagala, in Pila, Laguna. The first substantial dictionary of the Tagalog language was written by the Czech Jesuit missionary Pablo Clain at the beginning of the 18th century. Clain spoke Tagalog and used it actively in several of his books.
He prepared the dictionary, which he later passed on to Francisco Jansens and José Hernandez. Further compilation of his substantial work was prepared by P. Juan de Noceda and P. Pedro de Sanlucar and published as Vocabulario de la lengua tagala in Manila in 1754 and then repeatedly reedited, with the last edition being in 2013 in Manila. Among later works, the Arte de la lengua tagala y manual tagalog para la administración de los Santos Sacramentos (1850) appeared, in addition to early studies of the language. The indigenous poet Francisco Balagtas (1788–1862) is known as the foremost Tagalog writer, his most notable work being the 19th-century epic Florante at Laura. Tagalog was declared the official language by the first revolutionary constitution in the Philippines, the Constitution of Biak-na-Bato in 1897. In 1935, the Philippine constitution designated English and Spanish as official languages, but mandated the development and adoption of a common national language based on one of the existing native languages. After study and deliberation, the National Language Institute, a committee composed of seven members who represented various regions in the Philippines, chose Tagalog as the basis for the evolution and adoption of the national language of the Philippines. President Manuel L. Quezon then, on December 30, 1937, proclaimed the selection of the Tagalog language to be used as the basis for the evolution and adoption of the national language of the Philippines. In 1939, President Quezon renamed the proposed Tagalog-based national language as Wikang Pambansâ (national language). Quezon himself was born and raised in Baler, Aurora, which is a native Tagalog-speaking area. Under the Japanese puppet government during World War II, Tagalog as a national language was strongly promoted; the 1943 Constitution specified: "The government shall take steps toward the development and propagation of Tagalog as the national language." In 1959, the language was further renamed as "Pilipino". Along with English, the national language has had official status under the 1973 constitution (as "Pilipino") and the present 1987 constitution (as Filipino). The adoption of Tagalog in 1937 as the basis for a national language was not without controversy. Instead of specifying Tagalog, the national language was designated as Wikang Pambansâ ("National Language") in 1939. Twenty years later, in 1959, it was renamed by the then Secretary of Education, José E. Romero, as Pilipino to give it a national rather than ethnic label and connotation. The changing of the name did not, however, result in acceptance among non-Tagalogs, especially Cebuanos, who had not accepted the selection. The national language issue was revived once more during the 1971 Constitutional Convention. The majority of the delegates were in favor of scrapping the idea of a "national language" altogether. A compromise solution was worked out: a "universalist" approach to the national language, to be called Filipino rather than Pilipino. The 1973 constitution makes no mention of Tagalog. When a new constitution was drawn up in 1987, it named Filipino as the national language. The constitution specified that as the Filipino language evolves, it shall be further developed and enriched on the basis of existing Philippine and other languages. Filipino and Tagalog are varieties of the same language, sharing the bulk of their vocabulary and having very similar grammatical structures. Upon the issuance of Executive Order No.
134, Tagalog was declared the basis of the national language. On April 12, 1940, Executive Order No. 263 was issued, ordering the teaching of the national language in all public and private schools in the country. Article XIV, Section 6 of the 1987 Constitution of the Philippines specifies, in part: Subject to provisions of law and as the Congress may deem appropriate, the Government shall take steps to initiate and sustain the use of Filipino as a medium of official communication and as language of instruction in the educational system. Under Section 7, however: The regional languages are the auxiliary official languages in the regions and shall serve as auxiliary media of instruction therein. In 2009, the Department of Education promulgated an order institutionalizing a system of mother-tongue based multilingual education ("MLE"), wherein instruction is conducted primarily in a student's mother tongue (one of the various regional Philippine languages) until at least grade three, with additional languages such as Filipino and English being introduced as separate subjects no earlier than grade two. In secondary school, Filipino and English become the primary languages of instruction, with the learner's first language taking on an auxiliary role. After pilot tests in selected schools, the MLE program was implemented nationwide from School Year (SY) 2012–2013. Tagalog is the first language of a quarter of the population of the Philippines (particularly in Central and Southern Luzon) and the second language for the majority. Geographic distribution According to the 2020 census, 109 million people were living in the Philippines. The vast majority have some basic level of understanding of Filipino. The Tagalog homeland, Katagalugan, covers much of the central to southern parts of the island of Luzon, particularly in Aurora, Bataan, Batangas, Bulacan, Cavite, Laguna, Metro Manila, Nueva Ecija, Quezon, and Rizal. Tagalog is also spoken natively by inhabitants living on the islands of Marinduque and Mindoro, as well as Palawan to a lesser extent. Significant minorities of Filipino speakers are found in the other Central Luzon provinces of Pampanga and Tarlac, Camarines Norte and Camarines Sur in Bicol Region, the Cordillera city of Baguio, southeast Pangasinan in Ilocos Region, and various parts of Mindanao, especially in the island's urban areas. Filipino is also the predominant language of Cotabato City in Mindanao, making it the only place outside of Luzon with a Filipino-speaking majority. It is also the main lingua franca in Bangsamoro Autonomous Region in Muslim Mindanao. According to the 2000 Philippine Census, approximately 96% of the household population who were able to attend school could speak Filipino; and about 28% of the total population spoke it natively. The following regions and provinces of the Philippines are majority Tagalog-speaking, overlapping with Filipino-speaking (from north to south): Tagalog serves as the common language among Overseas Filipinos, though its use overseas is usually limited to communication among Filipino ethnic groups. The largest concentration of Tagalog speakers outside the Philippines is found in the United States, where the 2020 census reported (based on data collected in 2018) that it was the fourth most-spoken non-English language at home, with over 1.7 million speakers, behind Spanish, French, and Chinese (with figures for Cantonese and Mandarin combined).
A study based on data from the United States Census Bureau's 2015 American Community Survey shows that Tagalog is the most commonly spoken non-English language after Spanish in California, Nevada, and Washington states. Tagalog is one of three recognized languages in San Francisco, California, along with Spanish and Chinese, requiring all essential city services to be communicated in these languages along with English. In Hawaii, state-funded entities are required to provide oral and written translations for everything in Tagalog and Ilocano. Nevada provides election ballots in Tagalog. Other countries with significant concentrations of overseas Filipinos and Tagalog speakers include Saudi Arabia with 938,490, Canada with 676,775, Japan with 313,588, the United Arab Emirates with 541,593, Kuwait with 187,067, and Malaysia with 620,043. Dialects At present, no comprehensive dialectology has been done in the Tagalog-speaking regions, though there have been descriptions in the form of dictionaries and grammars of various Tagalog dialects. Ethnologue lists Manila, Lubang, Marinduque, Bataan (Western Central Luzon), Batangas, Bulacan (Eastern Central Luzon), Tanay-Paete (Rizal-Laguna), and Tayabas (Quezon) as dialects of Tagalog; however, there appear to be four main dialects, of which the aforementioned are a part: Northern (exemplified by the Bulacan dialect), Central (including Manila), Southern (exemplified by Batangas), and Marinduque. Some examples of dialectal differences are: Perhaps the most divergent Tagalog dialects are those spoken in Marinduque. Linguist Rosa Soberano identifies two dialects, western and eastern, with the former being closer to the Tagalog dialects spoken in the provinces of Batangas and Quezon. One example is the verb conjugation paradigms. While some of the affixes are different, Marinduque also preserves the imperative affixes, also found in Visayan and Bikol languages, that had mostly disappeared from most Tagalog dialects by the early 20th century; they have since merged with the infinitive. The Manila dialect is the basis for the national language. Outside of Luzon, a variety of Tagalog called Soccsksargen Tagalog (Sox-Tagalog, also called Kabacan Tagalog) is spoken in Soccsksargen, a southwestern region in Mindanao, as well as Cotabato City. This "hybrid" Tagalog dialect is a blend of Tagalog (including its dialects) with other languages widely spoken and variously heard in the area, such as Hiligaynon (a regional lingua franca), Ilocano, Cebuano, as well as Maguindanaon and other indigenous languages native to the region, as a result of migration from Panay, Negros, Cebu, Bohol, Siquijor, Ilocandia, Cagayan Valley, Cordillera Administrative Region, Central Luzon, Calabarzon, Mindoro and Marinduque since the turn of the 20th century, making the region a melting pot of cultures and languages. Phonology Tagalog has 21 phonemes: 16 are consonants and 5 are vowels. Native Tagalog words follow CV(C) syllable structure, though more complex consonant clusters are permitted in loanwords. Tagalog has five vowels and four diphthongs. Tagalog originally had three vowel phonemes, /a/, /i/, and /u/. Tagalog is now considered to have five vowel phonemes following the introduction of two marginal phonemes from Spanish, /o/ and /e/. Nevertheless, simplification of the pairs [o ~ u] and [ɛ ~ i] is likely to take place, especially among second-language speakers and in remote-location and working-class registers. The four diphthongs are /aj/, /uj/, /aw/, and /iw/.
Long vowels are not written, apart from in pedagogical texts, where an acute accent is used: á é í ó ú. The table above shows all the possible realizations for each of the five vowel sounds depending on the speaker's origin or proficiency. The five general vowels are in bold. Below is a chart of Tagalog consonants. All the stops are unaspirated. The velar nasal occurs in all positions, including at the beginning of a word. Loanword variants using these phonemes are italicized inside the angle brackets. Glottal stop is not indicated. Glottal stops are most likely to occur when: Stress is a distinctive feature in Tagalog. Primary stress occurs on either the final or the penultimate syllable of a word. Vowel lengthening accompanies primary or secondary stress, except when stress occurs at the end of a word. Tagalog words are often distinguished from one another by the position of the stress or the presence of a final glottal stop. In formal or academic settings, stress placement and the glottal stop are indicated by a diacritic (tuldík) above the final vowel. The penultimate primary stress position (malumay) is the default stress type and so is left unwritten except in dictionaries. Grammar The grammar of Tagalog is agglutinative, predicate-initial, and organized around the Austronesian alignment system, in which intricate verbal morphology indicates which semantic role is associated with the topic ("ang"-marked) argument. Tagalog verbs combine a wide array of prefixes, infixes, suffixes, circumfixes, and clitic particles to express voice/"trigger", aspect, mood, and valency changes, resulting in morphologically complex predicate structures. Tagalog noun morphology is relatively simple compared to its verbal system, though nouns are also productively derived with a range of affixes. Grammatical roles are expressed not by case endings but by a three-way article system (ang, ng, sa) placed directly before the noun phrase, distinguishing topic, non-topic, and oblique arguments. Pronouns reflect distinctions in person, number, clusivity, and case. Word order is typically verb-initial, though SVO may be used in formal contexts. Because the voice/trigger system and article markers indicate grammatical roles, arguments can be freely rearranged to shift focus or emphasize different participants without changing the core meaning. A defining feature of the language is its productive reduplication system, which includes partial and full reduplication. These patterns perform both grammatical and derivational functions, marking imperfective aspect, intensity, plurality, and distributive or repeated action, among other functions. Another important feature is phonemic stress, wherein the placement of stress is lexically contrastive: identical sequences of sounds can represent distinct words depending on stress (e.g., arálin "to study" vs. aralín "lesson") and the presence or absence of a glottal stop. Stress interacts with affixation and reduplication in systematic but sometimes nontransparent ways. Writing system Tagalog, like other Philippine languages today, is written using the Latin alphabet. Prior to the arrival of the Spanish in 1521 and the beginning of their colonization in 1565, Tagalog was written in an abugida, or alphasyllabary, called Baybayin. This system of writing gradually gave way to the use and propagation of the Latin alphabet as introduced by the Spanish.
As the Spanish began to record and create grammars and dictionaries for the various languages of the Philippine archipelago, they adopted systems of writing that closely followed the orthographic customs of the Spanish language and that were refined over the years. Until the first half of the 20th century, most Philippine languages were widely written in a variety of ways based on Spanish orthography. In the late 19th century, a number of educated Filipinos began proposing revisions to the spelling system used for Tagalog at the time. In 1884, Filipino doctor and student of languages Trinidad Pardo de Tavera published his study on the ancient Tagalog script, Contribucion para el Estudio de los Antiguos Alfabetos Filipinos, and in 1887 published his essay El Sanscrito en la lengua Tagalog, which made use of a new writing system developed by him. Meanwhile, Jose Rizal, inspired by Pardo de Tavera's 1884 work, also began developing a new system of orthography (unaware at first of Pardo de Tavera's own orthography). A major noticeable change in these proposed orthographies was the use of the letter ⟨k⟩ rather than ⟨c⟩ and ⟨q⟩ to represent the phoneme /k/. In 1889, the new bilingual Spanish-Tagalog newspaper La España Oriental, of which Isabelo de los Reyes was an editor, began publishing using the new orthography, stating in a footnote that it would "use the orthography recently introduced by ... learned Orientalis". This new orthography, while having its supporters, was not initially accepted by several writers. Soon after the first issue of La España, Pascual H. Poblete's Revista Católica de Filipina began a series of articles attacking the new orthography and its proponents. A fellow writer, Pablo Tecson, was also critical. Among the attacks was the use of the letters "k" and "w", which were deemed to be of German origin, and the orthography's proponents were thus denounced as "unpatriotic". The publishers of these two papers would eventually merge as La Lectura Popular in January 1890 and would eventually make use of both spelling systems in its articles. Pedro Laktaw, a schoolteacher, published the first Spanish-Tagalog dictionary using the new orthography in 1890. In April 1890, Jose Rizal authored an article Sobre la Nueva Ortografia de la Lengua Tagalog in the Madrid-based periodical La Solidaridad. In it, he addressed the criticisms of the new writing system by writers like Poblete and Tecson, and defended what he saw as the simplicity of the new orthography. Rizal described the orthography promoted by Pardo de Tavera as "more perfect" than what he himself had developed. The new orthography was, however, not broadly adopted initially and was used inconsistently in the bilingual periodicals of Manila until the early 20th century. The revolutionary society Kataás-taasan, Kagalang-galang Katipunan ng̃ mg̃á Anak ng̃ Bayan, or Katipunan, made use of the k-orthography, and the letter k featured prominently on many of its flags and insignias. In 1937, Tagalog was selected to serve as basis for the country's national language. In 1940, the Balarilâ ng Wikang Pambansâ (English: Grammar of the National Language) of grammarian Lope K. Santos introduced the Abakada alphabet. This alphabet consists of 20 letters and became the standard alphabet of the national language. The orthography as used by Tagalog would eventually influence and spread to the systems of writing used by other Philippine languages (which had been using variants of the Spanish-based system of writing).
In 1987, the Abakada was dropped and replaced by the expanded Filipino alphabet. Tagalog was written in an abugida (alphasyllabary) called Baybayin prior to the Spanish colonial period in the Philippines, in the 16th century. This particular writing system was composed of symbols representing three vowels and 14 consonants. Belonging to the Brahmic family of scripts, it shares similarities with the Old Kawi script of Java and is believed to be descended from the script used by the Bugis in Sulawesi. Although it enjoyed a relatively high level of literacy, Baybayin gradually fell into disuse in favor of the Latin alphabet taught by the Spaniards during their rule. There has been confusion about how to use Baybayin, which is actually an abugida, or an alphasyllabary, rather than an alphabet. Not every letter in the Latin alphabet is represented with one of those in the Baybayin alphasyllabary. Rather than letters being put together to make sounds as in Western languages, Baybayin uses symbols to represent syllables. A "kudlít" resembling an apostrophe is used above or below a symbol to change the vowel sound after its consonant. If the kudlit is used above, the vowel is an "E" or "I" sound. If the kudlit is used below, the vowel is an "O" or "U" sound. A special kudlit was later added by Spanish missionaries, in which a cross placed below the symbol removes the vowel sound altogether, leaving a lone consonant. Previously, the consonant without a following vowel was simply left out (for example, bundók being rendered as budo), forcing the reader to use context when reading such words. Example: Until the first half of the 20th century, Tagalog was widely written in a variety of ways based on Spanish orthography consisting of 32 letters called 'ABECEDARIO' (Spanish for "alphabet"). The additional letters beyond the 26-letter English alphabet are: ch, ll, ng, ñ, n͠g / ñg, and rr. When the national language was based on Tagalog, grammarian Lope K. Santos introduced a new alphabet consisting of 20 letters, called Abakada, in school grammar books called balarilâ. The only letter not in the English alphabet is ng. In 1987, the Department of Education, Culture and Sports issued a memo stating that the Philippine alphabet had changed from the Pilipino-Tagalog Abakada version to a new 28-letter alphabet to make room for loans, especially family names from Spanish and English. The additional letters beyond the 26-letter English alphabet are: ñ, ng. The genitive marker ng and the plural marker mga (e.g. Iyan ang mga damít ko. (Those are my clothes)) are abbreviations that are pronounced nang [naŋ] and mangá [mɐˈŋa]. Ng, in most cases, roughly translates to "of" (e.g. Siyá ay kapatíd ng nanay ko. She is the sibling of my mother), while nang usually means "when" or can describe how something is done or to what extent (equivalent to the suffix -ly in English adverbs), among other uses. In the first example, nang is used in lieu of the word noong (when; Noong si Hudas ay madulás). In the second, nang describes that the person woke up (gumising) early (maaga); gumising nang maaga. In the third, nang describes to what extent Juan improved (gumalíng), which is "greatly" (nang todo). In the latter two examples, the ligature na and its variants -ng and -g may also be used (Gumising na maaga/Maagang gumising; Gumalíng na todo/Todong gumalíng). The longer nang may also have other uses, such as a ligature that joins a repeated word: The words pô/hô originated from the word "Panginoon."
and "Poon." ("Lord."). When combined with the basic affirmative Oo "yes" (from Proto-Malayo-Polynesian *heqe), the resulting forms are opò and ohò. "Pô" and "opò" are specifically used to denote a high level of respect when addressing older persons of close affinity like parents, relatives, teachers and family friends. "Hô" and "ohò" are generally used to politely address older neighbours, strangers, public officials, bosses and nannies, and may suggest a distance in societal relationship and respect determined by the addressee's social rank and not their age. However, "pô" and "opò" can be used in any case in order to express an elevation of respect. Used in the affirmative: Pô/Hô may also be used in negation. Vocabulary and borrowed words Tagalog vocabulary is mostly of native Austronesian or Tagalog origin, such as most of the words that end with the diphthong -iw, (e.g. giliw) and words that exhibit reduplication (e.g. halo-halo, patpat, etc.). Besides inherited cognates, this also accounts for innovations in Tagalog vocabulary, especially traditional ones within its dialects. Tagalog has also incorporated many Spanish and English loanwords; the necessity of which increases in more technical parlance. In precolonial times, Trade Malay was widely known and spoken throughout Maritime Southeast Asia, contributing a significant number of Malay vocabulary into the Tagalog language. Malay loanwords, identifiable or not, may often already be considered native as these have existed in the language before colonisation. Tagalog also includes loanwords from Indian languages (Sanskrit and Tamil, mostly through Malay), Chinese languages (mostly Hokkien, followed by Cantonese, Mandarin, etc.), Japanese, Arabic and Persian. English has borrowed some words from Tagalog, such as abaca, barong, balisong, boondocks, jeepney, Manila hemp, pancit, ylang-ylang, and yaya. Some of these loanwords are more often used in Philippine English. Tagalog has contributed several words to Philippine Spanish, like barangay (from balan͠gay, meaning barrio), the abacá, cogon, palay, dalaga etc. Taglish (Englog) Taglish and Englog are names given to a mix of English and Tagalog. The amount of English vs. Tagalog varies from the occasional use of English loan words to changing language in mid-sentence. Such code-switching is prevalent throughout the Philippines and in various languages of the Philippines other than Tagalog. Code-mixing also entails the use of foreign words that are "Filipinized" by reforming them using Filipino rules, such as verb conjugations. Users typically use Filipino or English words, whichever comes to mind first or whichever is easier to use. Magshoshopping kamí sa mall. Sino ba ang magdadrive sa shopping center? We will go shopping at the mall. Who will drive to the shopping center? Urbanites are the most likely to speak like this. The practice is common in television, radio, and print media as well. Advertisements from companies like Wells Fargo, Wal-Mart, Albertsons, McDonald's and Western Union have contained Taglish. Comparisons with Austronesian languages Below is a chart of Tagalog and a number of other Austronesian languages comparing thirteen words. Religious literature Religious literature remains one of the most dynamic components to Tagalog literature. The first Bible in Tagalog, then called Ang Biblia ("the Bible") and now called Ang Dating Biblia ("the Old Bible"), was published in 1905. In 1970, the Philippine Bible Society translated the Bible into modern Tagalog. 
Even before the Second Vatican Council, devotional materials in Tagalog had been in circulation. There are at least four circulating Tagalog translations of the Bible. When the Second Vatican Council (specifically the Sacrosanctum Concilium) permitted the universal prayers to be translated into vernacular languages, the Catholic Bishops' Conference of the Philippines was one of the first to translate the Roman Missal into Tagalog. The Roman Missal in Tagalog was published as early as 1982. In 2012, the Catholic Bishops' Conference of the Philippines revised the 41-year-old liturgy with an English version of the Roman Missal, and later translated it into several native languages of the Philippines. For instance, as of 2024, the Roman Catholic Diocese of Malolos uses the Tagalog translation of the Roman Missal entitled "Ang Aklat ng Mabuting Balita." Jehovah's Witnesses were printing Tagalog literature at least as early as 1941, and The Watchtower (the primary magazine of Jehovah's Witnesses) has been published in Tagalog since at least the 1950s. New releases are now regularly released simultaneously in a number of languages, including Tagalog. The official website of Jehovah's Witnesses also has some publications available online in Tagalog. The revised Bible edition, the New World Translation of the Holy Scriptures, was released in Tagalog in 2019, and it is distributed without charge in both printed and online versions. Tagalog is quite a stable language, and very few revisions have been made to Catholic Bible translations. Also, as Protestantism in the Philippines is relatively young, liturgical prayers tend to be more ecumenical. Example texts In Tagalog, the Lord's Prayer is known by its incipit, Amá Namin (literally, "Our Father"). Amá namin, sumasalangit Ka, Sambahín ang ngalan Mo. Mapasaamin ang kaharián Mo. Sundín ang loób Mo, Dito sa lupà, gaya nang sa langit. Bigyán Mo kamí ngayón ng aming kakanin sa araw-araw, At patawarin Mo kamí sa aming mga salà, Para nang pagpápatawad namin, Sa nagkakasalà sa amin; At huwág Mo kamíng ipahintulot sa tuksô, At iadyâ Mo kamí sa lahát ng masamâ. [Sapagkát sa Inyó ang kaharián, at ang kapangyarihan, At ang kaluwálhatian, ngayón, at magpakailanman.] Amen. The same text, in Baybayin script, is as follows. ᜀᜋ ᜈᜋᜒᜈ᜔᜵ ᜐᜓᜋᜐᜎᜅᜒᜆ᜔ ᜃ᜵ ᜐᜋ᜔ᜊᜑᜒᜈ᜔ ᜀᜅ᜔ ᜅᜎᜈ᜔ ᜋᜓ᜶ ᜋᜉᜐᜀᜋᜒᜈ᜔ ᜀᜅ᜔ ᜃᜑᜇᜒᜀᜈ᜔ ᜋᜓ᜶ ᜐᜓᜈ᜔ᜇᜒᜈ᜔ ᜀᜅ᜔ ᜎᜓᜂᜊ᜔ ᜋᜓ᜶ ᜇᜒᜆᜓ ᜐ ᜎᜓᜉ᜵ ᜄᜌ ᜈᜅ᜔ ᜐ ᜎᜅᜒᜆ᜔᜶ ᜊᜒᜄ᜔ᜌᜈ᜔ ᜋᜓ ᜃᜋᜒ ᜈᜅ᜔ ᜀᜋᜒᜅ᜔ ᜃᜃᜈᜒᜈ᜔ ᜐ ᜀᜇᜏ᜔ᜀᜇᜏ᜔᜵ ᜀᜆ᜔ ᜉᜆᜏᜇᜒᜈ᜔ ᜋᜓ ᜃᜋᜒ ᜐ ᜀᜋᜒᜅ᜔ ᜋᜅ ᜐᜎ᜵ ᜉᜇ ᜈᜅ᜔ ᜉᜄ᜔ᜉᜉᜆᜏᜇ᜔ ᜈᜋᜒᜈ᜔᜵ ᜐ ᜈᜄ᜔ᜃᜃᜐᜎ ᜐ ᜀᜋᜒᜈ᜔; ᜀᜆ᜔ ᜑᜓᜏᜄ᜔ ᜋᜓ ᜃᜋᜒᜅ᜔ ᜁᜉᜑᜒᜈ᜔ᜆᜓᜎᜓᜆ᜔ ᜐ ᜆᜓᜃ᜔ᜐᜓ᜵ ᜀᜆ᜔ ᜁᜀᜇ᜔ᜌ ᜋᜓ ᜃᜋᜒ ᜐ ᜎᜑᜆ᜔ ᜈᜅ᜔ ᜋᜐᜋ᜶ [ᜐᜉᜄ᜔ᜃᜆ᜔ ᜐ ᜁᜈ᜔ᜌᜓ ᜀᜅ᜔ ᜃᜑᜇᜒᜀᜈ᜔᜵ ᜀᜆ᜔ ᜀᜅ᜔ ᜃᜉᜅ᜔ᜌᜇᜒᜑᜈ᜔᜵ ᜀᜆ᜔ ᜀᜅ᜔ ᜃᜎᜓᜏᜎ᜔ᜑᜆᜒᜀᜈ᜔᜵ ᜅᜌᜓᜈ᜔᜵ ᜀᜆ᜔ ᜋᜄ᜔ᜉᜃᜁᜎᜈ᜔ᜋᜈ᜔᜶] ᜀᜋᜒᜈ᜔᜶ This is Article 1 of the Universal Declaration of Human Rights (Pangkalahatáng Pagpapahayág ng Karapatáng Pantao) Bawat tao'y isinilang na may layà at magkakapantáy ang tagláy na dangál at karapatán. Silá'y pinagkalooban ng pangangatwiran at budhî, at dapat magpálagayan ang isá't-isá sa diwà ng pagkákapatiran. ᜊᜏᜆ᜔ ᜆᜂᜌ᜔ ᜁᜐᜒᜈᜒᜎᜅ᜔ ᜈ ᜋᜌ᜔ ᜎᜌ ᜀᜆ᜔ ᜋᜄ᜔ᜃᜃᜉᜈ᜔ᜆᜌ᜔ ᜀᜅ᜔ ᜆᜄ᜔ᜎᜌ᜔ ᜈ ᜇᜅᜎ᜔ ᜀᜆ᜔ ᜃᜇᜉᜆᜈ᜔᜶ ᜐᜒᜎᜌ᜔ ᜉᜒᜈᜄ᜔ᜃᜎᜓᜂᜊᜈ᜔ ᜈᜅ᜔ ᜃᜆ᜔ᜏᜒᜇᜈ᜔ ᜀᜆ᜔ ᜊᜓᜇᜑᜒ᜵ ᜀᜆ᜔ ᜇᜉᜆ᜔ ᜋᜄ᜔ᜉᜎᜄᜌᜈ᜔ ᜀᜅ᜔ ᜁᜐᜆ᜔ ᜁᜐ ᜐ ᜇᜒᜏ ᜈᜅ᜔ ᜉᜄ᜔ᜃᜃᜉᜆᜒᜇᜈ᜔᜶ All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. Numbers (mga bilang/mga numero) in Tagalog follow two systems. The first consists of native Tagalog words and the other is Spanish-derived.
(This may be compared to other East Asian languages, except with the second set of numbers borrowed from Spanish instead of Chinese.) For example, when a person refers to the number "seven", it can be translated into Tagalog as "pitó" or "siyete" (Spanish: siete). Months and days in Tagalog are also localised forms of Spanish months and days. "Month" in Tagalog is buwán (also the word for moon) and "day" is araw (the word also means sun). Unlike Spanish, however, months and days in Tagalog are always capitalised. Time expressions in Tagalog are also Tagalized forms of the corresponding Spanish. "Time" in Tagalog is panahón or oras. Common phrases Opò [ˈʔopoʔ] or ohò [ˈʔohoʔ] (formal/polite form) Hindî pô [hɪnˈdiː poʔ] (formal/polite form) Very informal: Ewan [ˈʔɛwɐn], archaic aywan [ʔaɪ̯ˈwan] (closest English equivalent: colloquial dismissive 'Whatever' or 'Dunno') Hindî ko naúunawáan [hɪnˈdiː ko nɐˌʔuʔʊnɐˈwaʔan] Marunong pô ba kayóng magsalitâ ng Inglés? [mɐˈɾunoŋ poː ba kɐˈjoŋ mɐɡsɐlɪˈtaː nɐŋ ʔɪŋˈɡlɛs] (polite version for elders and strangers) Marunong ka bang mag-Inglés? [mɐˈɾunoŋ kɐ baŋ mɐɡʔɪŋˈɡlɛs] (short form) Marunong pô ba kayóng mag-Inglés? [mɐˈɾunoŋ poː ba kɐˈjoŋ mɐɡʔɪŋˈɡlɛs] (short form, polite version for elders and strangers) *Pronouns such as niyó (2nd person plural) and nilá (3rd person plural) are used on a single 2nd person in polite or formal language. See Tagalog grammar. Ang hindî marunong lumingón sa pinánggalingan ay hindî makaráratíng sa paroroonan. One who knows not how to look back to whence he came will never get to where he is going. Unang kagát, tinapay pa rin. First bite, still bread. All fluff, no substance. Tao ka nang humaráp, bilang tao kitáng haharapin. You reach me as a human, I will treat you as a human and never act as a traitor. (A proverb in Southern Tagalog that has made people aware of the significance of sincerity in Tagalog communities.) Hulí man daw (raw) at magalíng, nakáhahábol pa rin. If one is behind but capable, one will still be able to catch up. Magbirô ka na sa lasíng, huwág lang sa bagong gising. Make fun of someone drunk, if you must, but never one who has just awakened. Aanhín pa ang damó kung patáy na ang kabayò? What use is the grass if the horse is already dead? Ang sakít ng kalingkingan, damdám ng buóng katawán. The pain in the pinkie is felt by the whole body. In a group, if one goes down, the rest follow. Nasa hulí ang pagsisisi. Regret is always in the end. Pagkáhabà-habà man ng prusisyón, sa simbahan pa rin ang tulóy. The procession may stretch on and on, but it still ends up at the church. (In romance: refers to how certain people are destined to be married. In general: refers to how some things are inevitable, no matter how long you try to postpone it.) Kung 'dî mádaán sa santóng dasalan, daanin sa santóng paspasan. If it cannot be got through holy prayer, get it through blessed force. (In romance and courting: santóng paspasan literally means 'holy speeding' and is a euphemism for sexual intercourse. It refers to the two styles of courting by Filipino boys: one is the traditional, protracted, restrained manner favored by older generations, which often featured serenades and manual labor for the girl's family; the other is upfront seduction, which may lead to a slap on the face or a pregnancy out of wedlock. The second conclusion is known as pikot or what Western cultures would call a 'shotgun marriage'. This proverb is also applied in terms of diplomacy and negotiation.) See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Political_sociology] | [TOKENS: 4711]
Contents Political sociology Political sociology is an interdisciplinary field of study concerned with exploring how governance and society interact and influence one another at the micro to macro levels of analysis. Interested in the social causes and consequences of how power is distributed and changes throughout and amongst societies, political sociology's focus ranges from individual families to the state as sites of social and political conflict and power contestation. Introduction Political sociology was conceived as an interdisciplinary sub-field of sociology and politics in the early 1930s, amid the social and political disruptions brought about by the rise of communism, fascism, and World War II. This new area drew upon works by Alexis de Tocqueville, James Bryce, Robert Michels, Max Weber, Émile Durkheim, and Karl Marx to understand an integral theme of political sociology: power. The definition of power for political sociologists varies across the approaches and conceptual frameworks utilised within this interdisciplinary study. At its most basic, power can be seen as the ability to influence or control other people or processes around you. This creates a variety of research focuses and methodologies, as different scholars' understandings of power differ. Alongside this, scholars' academic disciplinary departments and institutions can also flavour their research as they develop from their baseline of inquiry (e.g. political or sociological studies) into this interdisciplinary field (see § Political sociology vs sociology of politics). Although it varies in how it is carried out, political sociology has an overall focus on understanding why power structures are the way they are in any given societal context. Political sociologists, across the field's broad manifestations, propose that in order to understand power, society and politics must be studied with one another and neither treated as an assumed variable. In the words of political scientist Michael Rush, "For any society to be understood, so must its politics; and if the politics of any society is to be understood, so must that society." Origins The development of political sociology from the 1930s onwards took place as the separating disciplines of sociology and politics explored their overlapping areas of interest. Sociology can be viewed as the broad analysis of human society and the interrelationship of these societies, predominantly focused on the relationship of human behaviour with society. Political science, or politics as a study, largely situates itself within this definition of sociology and is sometimes regarded as a well-developed sub-field of sociology, but is seen as a stand-alone disciplinary area of research due to the volume of scholarly work undertaken within it. Politics is complex to define, and it is important to note that what 'politics' means is subjective to the author and context. From the study of governmental institutions and public policy to power relations, politics has a rich disciplinary outlook. The importance of studying sociology within politics, and vice versa, has been recognised by figures from Mosca to Pareto, who saw that politicians and politics do not operate in a societal vacuum, and society does not operate outside of politics.
Here, political sociology sets out to study the relationship between society and politics. Numerous works point towards a political sociology, from the work of Comte and Spencer to other figures such as Durkheim. Although these feed into this interdisciplinary area, the bodies of work by Karl Marx and Max Weber are considered foundational to its inception as a sub-field of research. Scope The scope of political sociology is broad, reflecting the wide interest in how power and oppression operate over and within social and political areas in society. Although diverse, some major themes of interest for political sociology include: In other words, political sociology is concerned with how social trends, dynamics, and structures of domination affect formal political processes, alongside social forces working together to create change. From this perspective, we can identify three major theoretical frameworks: pluralism, elite or managerial theory, and class analysis, which overlaps with Marxist analysis. Pluralism sees politics primarily as a contest among competing interest groups. Elite or managerial theory is sometimes called a state-centered approach. It explains what the state does by looking at constraints from organizational structure, semi-autonomous state managers, and interests that arise from the state as a unique, power-concentrating organization. A leading representative is Theda Skocpol. Social class theory analysis emphasizes the political power of capitalist elites. It can be split into two parts: one is the "power structure" or "instrumentalist" approach, whereas the other is the structuralist approach. The power structure approach focuses on the question of who rules, and its most well-known representative is G. William Domhoff. The structuralist approach emphasizes the way a capitalist economy operates, allowing and encouraging the state to do only some things and not others (Nicos Poulantzas, Bob Jessop). Where a typical research question in political sociology might have been, "Why do so few American or European citizens choose to vote?" or even, "What difference does it make if women get elected?", political sociologists also now ask, "How is the body a site of power?", "How are emotions relevant to global poverty?", and "What difference does knowledge make to democracy?" When addressing political sociology, there is noted overlap with the use of sociology of politics as a synonym. Sartori outlines that sociology of politics refers specifically to a sociological analysis of politics and not the interdisciplinary area of research that political sociology works towards. While both are valid lines of enquiry, sociology of politics is a sociologically reductionist account of politics (e.g. exploring political areas through a sociological lens), whereas political sociology is a collaborative socio-political exploration of society and its power contestation. This difference is made by the variables of interest that the two perspectives focus upon. Sociology of politics centres on the non-political causes of oppression and power contestation in political life, whereas political sociology considers the political causes of these actions alongside non-political ones. People Marx's ideas about the state can be divided into three subject areas: pre-capitalist states, states in the capitalist (i.e. present) era, and the state (or absence of one) in post-capitalist society.
Overlaying this is the fact that his own ideas about the state changed as he grew older, differing across his early pre-communist phase, the young-Marx phase that predates the unsuccessful 1848 uprisings in Europe, and his mature, more nuanced work. In Marx's 1843 Critique of Hegel's Philosophy of Right, his basic conception is that the state and civil society are separate. However, he already saw some limitations to that model, arguing: "The political state everywhere needs the guarantee of spheres lying outside it." At this stage he was as yet saying nothing about the abolition of private property, did not express a developed theory of class, and "the solution [he offers] to the problem of the state/civil society separation is a purely political solution, namely universal suffrage". By the time he wrote The German Ideology (1846), Marx viewed the state as a creature of the bourgeois economic interest. Two years later, that idea was expounded in The Communist Manifesto: "The executive of the modern state is nothing but a committee for managing the common affairs of the whole bourgeoisie." This represents the high point of conformance of the state theory to an economic interpretation of history, in which the forces of production determine people's production relations and their production relations determine all other relations, including the political. Although "determines" is the strong form of the claim, Marx also uses "conditions". Even "determination" is not causality, and some reciprocity of action is admitted. The bourgeoisie control the economy, therefore they control the state. In this theory, the state is an instrument of class rule. Antonio Gramsci's theory of hegemony is tied to his conception of the capitalist state. Gramsci does not understand the state in the narrow sense of the government. Instead, he divides it between political society (the police, the army, legal system, etc.) – the arena of political institutions and legal constitutional control – and civil society (the family, the education system, trade unions, etc.) – commonly seen as the private or non-state sphere, which mediates between the state and the economy. However, he stresses that the division is purely conceptual and that the two often overlap in reality. Gramsci claims the capitalist state rules through force plus consent: political society is the realm of force and civil society is the realm of consent. Gramsci proffers that under modern capitalism the bourgeoisie can maintain its economic control by allowing certain demands made by trade unions and mass political parties within civil society to be met by the political sphere. Thus, the bourgeoisie engages in passive revolution by going beyond its immediate economic interests and allowing the forms of its hegemony to change. Gramsci posits that movements such as reformism and fascism, as well as the scientific management and assembly line methods of Frederick Taylor and Henry Ford respectively, are examples of this. English Marxist sociologist Ralph Miliband was influenced by American sociologist C. Wright Mills, whose friend he had been. He published The State in Capitalist Society in 1969, a study in Marxist political sociology, rejecting the idea that pluralism spread political power, and maintaining that power in Western democracies was concentrated in the hands of a dominant class. Nicos Poulantzas' theory of the state reacted to what he saw as simplistic understandings within Marxism.
For him, instrumentalist Marxist accounts such as that of Miliband held that the state was simply an instrument in the hands of a particular class. Poulantzas disagreed with this because he saw the capitalist class as too focused on its individual short-term profit, rather than on maintaining the class's power as a whole, to simply exercise the whole of state power in its own interest. Poulantzas argued that the state, though relatively autonomous from the capitalist class, nonetheless functions to ensure the smooth operation of capitalist society, and therefore benefits the capitalist class. In particular, he focused on how an inherently divisive system such as capitalism could coexist with the social stability necessary for it to reproduce itself, looking in particular to nationalism as a means to overcome the class divisions within capitalism. Borrowing from Gramsci's notion of cultural hegemony, Poulantzas argued that repressing movements of the oppressed is not the sole function of the state. Rather, state power must also obtain the consent of the oppressed. It does this through class alliances, where the dominant group makes an "alliance" with subordinate groups as a means to obtain the consent of the subordinate group. Bob Jessop was influenced by Gramsci, Miliband and Poulantzas to propose that the state is not an entity but a social relation with differential strategic effects. This means that the state is not something with an essential, fixed property, such as a neutral coordinator of different social interests, an autonomous corporate actor with its own bureaucratic goals and interests, or the 'executive committee of the bourgeoisie', as often described by pluralists, elitists/statists and conventional Marxists respectively. Rather, what the state is, is essentially determined by the nature of the wider social relations in which it is situated, especially the balance of social forces. In political sociology, one of Weber's most influential contributions is his "Politics as a Vocation" (Politik als Beruf) essay. Therein, Weber advances the definition of the state as that entity which possesses a monopoly on the legitimate use of physical force. Weber wrote that politics is the sharing of the state's power between various groups, and political leaders are those who wield this power. Weber distinguished three ideal types of political leadership (alternatively referred to as three types of domination, legitimisation or authority): In his view, every historical relation between rulers and ruled contained such elements, and they can be analysed on the basis of this tripartite distinction. He notes that the instability of charismatic authority forces it to "routinise" into a more structured form of authority. In a pure type of traditional rule, sufficient resistance to a ruler can lead to a "traditional revolution". The move towards a rational-legal structure of authority, utilising a bureaucratic structure, is inevitable in the end. Thus this theory can sometimes be viewed as part of social evolutionism theory. This ties to his broader concept of rationalisation by suggesting the inevitability of a move in this direction, in which "Bureaucratic administration means fundamentally domination through knowledge." Weber described many ideal types of public administration and government in Economy and Society (1922). His critical study of the bureaucratisation of society became one of the most enduring parts of his work.
It was Weber who began the studies of bureaucracy and whose works led to the popularisation of this term. Many aspects of modern public administration go back to him, and a classic, hierarchically organised civil service of the Continental type is called a "Weberian civil service". As the most efficient and rational way of organising, bureaucratisation for Weber was the key part of rational-legal authority and, furthermore, he saw it as the key process in the ongoing rationalisation of Western society. Weber's ideal bureaucracy is characterised by hierarchical organisation, by delineated lines of authority in a fixed area of activity, by action taken (and recorded) on the basis of written rules, by bureaucratic officials needing expert training, by rules being implemented neutrally and by career advancement depending on technical qualifications judged by organisations, not by individuals. Approaches Vilfredo Pareto (1848–1923), Gaetano Mosca (1858–1941), and Robert Michels (1876–1936) were cofounders of the Italian school of elitism, which influenced subsequent elite theory in the Western tradition. The outlook of the Italian school of elitism is based on two ideas: Power lies in positions of authority in key economic and political institutions. The psychological difference that sets elites apart is that they have personal resources, for instance intelligence and skills, and a vested interest in the government; while the rest are incompetent and do not have the capabilities of governing themselves, the elite are resourceful and strive to make the government work. For, in reality, the elite would have the most to lose in a failed state. Pareto emphasized the psychological and intellectual superiority of elites, believing that they were the highest achievers in any field. He discussed the existence of two types of elites: governing elites and non-governing elites. He also extended the idea that a whole elite can be replaced by a new one, and how one can circulate from being elite to non-elite. Mosca emphasized the sociological and personal characteristics of elites. He said elites are an organized minority and that the masses are an unorganized majority. The ruling class is composed of the ruling elite and the sub-elites. He divides the world into two groups: the political class and the non-political class. Mosca asserts that elites have an intellectual, moral, and material superiority that is highly esteemed and influential. Sociologist Michels developed the iron law of oligarchy, whereby, he asserts, social and political organizations are run by a few individuals, and social organization and labor division are key. He believed that all organizations were elitist and that elites have three basic principles that help in the bureaucratic structure of political organization: Contemporary political sociology takes these questions seriously, but it is concerned with the play of power and politics across societies, which includes, but is not restricted to, relations between the state and society. In part, this is a product of the growing complexity of social relations, the impact of social movement organizing, and the relative weakening of the state as a result of globalization. To a significant extent, however, it is due to the radical rethinking of social theory.
This is as much focused now on micro questions (such as the formation of identity through social interaction, the politics of knowledge, and the effects of the contestation of meaning on structures), as it is on macro questions (such as how to capture and use state power). Chief influences here include cultural studies (Stuart Hall), post-structuralism (Michel Foucault, Judith Butler), pragmatism (Luc Boltanski), structuration theory (Anthony Giddens), and cultural sociology (Jeffrey C. Alexander). Political sociology attempts to explore the dynamics between the two institutional systems introduced by the advent of the Western capitalist system: the democratic constitutional liberal state and the capitalist economy. While democracy promises impartiality and legal equality for all citizens, the capitalist system results in unequal economic power and thus possible political inequality as well. For pluralists, the distribution of political power is not determined by economic interests but by multiple social divisions and political agendas. The diverse political interests and beliefs of different factions work together through collective organizations to create a flexible and fair representation that in turn influences political parties, which make the decisions. The distribution of power is then achieved through the interplay of contending interest groups. The government in this model functions just as a mediating broker and is free from control by any economic power. This pluralistic democracy, however, requires the existence of an underlying framework that offers mechanisms for citizenship and expression and the opportunity to organize representation through social and industrial organizations, such as trade unions. Ultimately, decisions are reached through the complex process of bargaining and compromise between various groups pushing for their interests. Many factors, pluralists believe, have ended the domination of the political sphere by an economic elite. The power of organized labour and the increasingly interventionist state have placed restrictions on the power of capital to manipulate and control the state. Additionally, capital is no longer owned by a dominant class, but by an expanding managerial sector and diversified shareholders, none of whom can exert their will upon another. The pluralist emphasis on fair representation, however, overshadows the constraints imposed on the extent of choice offered. Bachrach and Baratz (1963) examined the deliberate withdrawal of certain policies from the political arena. For example, organized movements that express what might seem like radical change in a society can often be portrayed as illegitimate. A main rival to pluralist theory in the United States was the theory of the "power elite" by sociologist C. Wright Mills. According to Mills, the eponymous "power elite" are those that occupy the dominant positions in the dominant institutions (military, economic and political) of a dominant country, and their decisions (or lack of decisions) have enormous consequences, not only for the U.S. population but for "the underlying populations of the world."
The institutions which they head, Mills posits, are a triumvirate of groups that have succeeded weaker predecessors: (1) "two or three hundred giant corporations" which have replaced the traditional agrarian and craft economy, (2) a strong federal political order that has inherited power from "a decentralized set of several dozen states" and "now enters into each and every cranny of the social structure", and (3) the military establishment, formerly an object of "distrust fed by state militia," but now an entity with "all the grim and clumsy efficiency of a sprawling bureaucratic domain." Importantly, and in distinction from modern American conspiracy theory, Mills explains that the elite themselves may not be aware of their status as an elite, noting that "often they are uncertain about their roles" and "without conscious effort, they absorb the aspiration to be ... The One Who Decides." Nonetheless, he sees them as a quasi-hereditary caste. The members of the power elite, according to Mills, often enter into positions of societal prominence through educations obtained at establishment universities. The resulting elites, who control the three dominant institutions (military, economy and political system), can be generally grouped into one of six types, according to Mills: Mills formulated a very short summary of his book: "Who, after all, runs America? No one runs it altogether, but in so far as any group does, the power elite." Who Rules America? is a book by research psychologist and sociologist G. William Domhoff, first published in 1967 as a best-seller (#12), with six subsequent editions. Domhoff argues in the book that a power elite wields power in America through its support of think tanks, foundations, commissions, and academic departments. Additionally, he argues that the elite control institutions through overt authority, not through covert influence. In his introduction, Domhoff writes that the book was inspired by the work of four men: sociologists E. Digby Baltzell and C. Wright Mills, economist Paul Sweezy, and political scientist Robert A. Dahl. Concepts T. H. Marshall's social citizenship is a political concept first highlighted in his essay Citizenship and Social Class in 1949. Marshall's concept defines the social responsibilities the state has to its citizens or, as Marshall puts it, "from [granting] the right to a modicum of economic welfare and security to the right to share to the full in the social heritage and to live the life of a civilized being according to the standards prevailing in the society". One of the key points made by Marshall is his belief in an evolution of rights in England acquired via citizenship, from "civil rights in the eighteenth [century], political in the nineteenth, and social in the twentieth". This evolution, however, has been criticized by many for only being from the perspective of the white working man. Marshall concludes his essay with three major factors for the evolution of social rights and for their further evolution, listed below: Many of the social responsibilities of a state have since become a major part of many states' policies (see United States Social Security). However, these have also become controversial issues, as there is debate over whether a citizen truly has the right to education and, even more so, to social welfare. In Political Man: The Social Bases of Politics, political sociologist Seymour Martin Lipset provided a very influential analysis of the bases of democracy across the world.
Larry Diamond and Gary Marks argue that "Lipset's assertion of a direct relationship between economic development and democracy has been subjected to extensive empirical examination, both quantitative and qualitative, in the past 30 years. And the evidence shows, with striking clarity and consistency, a strong causal relationship between economic development and democracy." The book sold more than 400,000 copies and was translated into 20 languages, including Vietnamese, Bengali, and Serbo-Croatian. Lipset was one of the first proponents of modernization theory, which states that democracy is the direct result of economic growth, and that "[t]he more well-to-do a nation, the greater the chances that it will sustain democracy." Lipset's modernization theory has continued to be a significant factor in academic discussions and research relating to democratic transitions. It has been referred to as the "Lipset hypothesis", as well as the "Lipset thesis".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Synodic_rotation_period] | [TOKENS: 661]
Contents Synodic day A synodic day (or synodic rotation period or solar day) is the period for a celestial object to rotate once in relation to the star it is orbiting, and is the basis of solar time. The synodic day is distinguished from the sidereal day, which is one complete rotation in relation to distant stars and is the basis of sidereal time. In the case of a tidally locked planet, the same side always faces its parent star, and its synodic day is infinite. Its sidereal day, however, is equal to its orbital period. Earth Earth's synodic day is the time it takes for the Sun to pass over the same meridian (a line of longitude) on consecutive days, whereas a sidereal day is the time it takes for a given distant star to pass over a meridian on consecutive days. For example, in the Northern Hemisphere, a synodic day could be measured as the time taken for the Sun to move from exactly true south (i.e. its highest point in the sky) on one day to exactly south again on the next day (or exactly true north in the Southern Hemisphere). For Earth, the synodic day is not constant; it changes over the course of the year due to the eccentricity of Earth's orbit around the Sun and the axial tilt of the Earth. The longest and shortest synodic days differ in duration by about 51 seconds. The mean length, however, is 24 hours (with fluctuations on the order of milliseconds), and is the basis of solar time. The difference between mean and apparent solar time is the equation of time, which can also be seen in Earth's analemma. Because of the variation in the length of the synodic day, the days with the longest and shortest periods of daylight do not coincide with the solstices near the equator. As viewed from Earth during the year, the Sun appears to slowly drift along an imaginary path coplanar with Earth's orbit, known as the ecliptic, on a spherical background of seemingly fixed stars. Each synodic day, this gradual motion is a little less than 1° eastward (360° per 365.25 days), in a manner known as prograde motion. Certain spacecraft orbits, Sun-synchronous orbits, have orbital periods that are a fraction of a synodic day. Combined with nodal precession, this allows them to always pass over a location on Earth's surface at the same mean solar time. Moon Due to tidal locking with Earth, the Moon's synodic day (the lunar day or synodic rotation period) is the same as its synodic period with Earth and the Sun (the period of the lunar phases, the synodic lunar month, which is the month of the lunar calendar). Venus Due to the slow retrograde rotation of Venus, its synodic rotation period of 117 Earth days is about half the length of both its sidereal rotational period (sidereal day) and its orbital period. Mercury Due to Mercury's slow rotation and fast orbit around the Sun, its synodic rotation period of 176 Earth days is about three times as long as its sidereal rotational period (sidereal day) and twice as long as its orbital period.
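The Mercury and Venus figures above follow from a standard relation between rotation rate and orbital rate: the synodic rate is the sidereal rotation rate minus the orbital rate, with the sign of the spin term flipped for retrograde rotators. The sketch below is a minimal illustration of that relation, not text from this article: the function name is invented, circular orbits and a constant rotation rate are assumed, and the planetary values are rounded reference figures.

# Minimal sketch of the relation between sidereal rotation, orbital
# period and the synodic day, assuming circular orbits and constant
# rotation. Prograde rotation:  1/T_synodic = 1/T_sidereal - 1/T_orbit
# Retrograde rotation: the spin term changes sign, so the rates add.

def synodic_day(sidereal_day: float, orbital_period: float,
                retrograde: bool = False) -> float:
    """Return the synodic day, in the same units as the inputs."""
    spin_rate = (-1.0 if retrograde else 1.0) / sidereal_day
    net_rate = spin_rate - 1.0 / orbital_period  # rotation as seen from the star
    return abs(1.0 / net_rate)

print(f"Earth:   {synodic_day(0.99727, 365.256):7.2f} days")  # ~1.00 (24 h)
print(f"Mercury: {synodic_day(58.646, 87.969):7.2f} days")    # ~176, twice its orbit
print(f"Venus:   {synodic_day(243.025, 224.701, retrograde=True):7.2f} days")  # ~117

Running it reproduces the article's figures: about 176 days for Mercury, about 117 days for Venus, and one mean solar day for Earth.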
========================================
[SOURCE: https://en.wikipedia.org/wiki/Timeline_of_the_history_of_the_Internet] | [TOKENS: 149]
Contents Timeline of the history of the Internet A timeline of the history of the Internet can stretch back as far as the 19th century. This timeline begins in 1960 and lists key events, including the emergence of novel ideas, the first implementations of new technologies, and the introduction of new products and services that were significant at the time. These events led to the Internet as we know it today. Early research and development (1960-1981) Merging the networks and creating the Internet (1981-1994) Commercialization, privatization, broader access leads to the modern Internet (1995-present)
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEAttardoChabanne1992-20] | [TOKENS: 8460]
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? 
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple from Adab dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". The British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both the lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be gauged from the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical to explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh? What do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously and spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Numerous joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example, the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not provide a system to classify the text by more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour (a toy illustration of such a label appears in the sketch following the discussion of computational humour below). Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be prone to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested over the last few decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index, first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
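The template-driven approach described above is simple enough to sketch, and doing so also makes the GTVH's concatenated label concrete. The toy program below is a hypothetical illustration, not any published system: the lexicon entries, function names and label values are all invented for the example, and the six Knowledge Resource fields simply follow the GTVH ordering discussed earlier (with TA left empty, as the theory permits).

# A toy, template-based pun generator in the spirit of the early
# programs described above: no intelligence, just one fixed riddle
# template filled from a small hand-built lexicon. Everything here
# is invented for illustration.

# Each entry: (punning answer, trait of the thing described) --
# the "finite set of pre-defined punning options".
LEXICON = [
    ("a pun-kin", "a gourd that tells jokes"),
    ("a lawn moo-er", "a cow that cuts grass"),
]

def riddle(answer: str, trait: str) -> str:
    """Fill the single template the 'system' knows."""
    return f"Q: What do you call {trait}? A: {answer.capitalize()}."

def gtvh_label() -> dict:
    """A GTVH-style descriptive label: the six Knowledge Resources,
    highest to lowest; TA (and LM) may legitimately be left empty."""
    return {
        "SO": "actual/non-actual (one word heard as another)",
        "LM": "sound similarity taken as meaning",
        "SI": "naming an animal or object",
        "TA": None,  # these riddles have no butt
        "NS": "riddle (question-and-answer form)",
        "LA": "pun on a near-homophone",
    }

for answer, trait in LEXICON:
    print(riddle(answer, trait))
    print("  label:", gtvh_label())

The sketch makes the field's limitation plain: the program can only recombine what a human already encoded, which is exactly the "toy system" inadequacy the text describes.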
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-27] | [TOKENS: 4733]
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9½ mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. From the 5th century BCE until the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued during the Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa.
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the later phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall with no stone foundations, along with imported Egyptian pottery and local pottery imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to the British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother Simon Maccabaeus enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BC, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, the emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya in the Mamluk empire during the fourteenth and fifteenth centuries. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 Akçe. All of the revenue went to the Waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was later renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic, and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed; other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the expulsion of 50,000–70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10.5 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organisations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but that the "crackdown came for one side" only. Demographics From the 19th century until the Lydda Death March, Lod was an entirely Muslim and Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500. According to the 2019 census, the population of Lod was 77,223, of whom 53,581 (69.4% of the city's population) were classified as "Jews and Others" and 23,642 (30.6%) as "Arab". Education According to CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils, and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important employers in the city are the communication-equipment company "Talard"; "Cafe-Co", a subsidiary of the Strauss Group; and "Kashev", the computer centre of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibition focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger in the mosaic and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. It is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Biological_classification] | [TOKENS: 4792]
Contents Taxonomy (biology) In biology, taxonomy (from Ancient Greek τάξις (taxis) 'arrangement' and -νομία (-nomia) 'method') is the scientific study of naming, defining (circumscribing) and classifying groups of biological organisms based on shared characteristics. Organisms are grouped into taxa (singular: taxon), and these groups are given a taxonomic rank; groups of a given rank can be aggregated to form a more inclusive group of higher rank, thus creating a taxonomic hierarchy. The principal ranks in modern use are domain, kingdom, phylum (division is sometimes used in botany in place of phylum), class, order, family, genus, and species. The Swedish botanist Carl Linnaeus is regarded as the founder of the current system of taxonomy, having developed a ranked system known as Linnaean taxonomy for categorizing organisms. With advances in the theory, data and analytical technology of biological systematics, the Linnaean system has transformed into a system of modern biological classification intended to reflect the evolutionary relationships among organisms, both living and extinct. Definition The exact definition of taxonomy varies from source to source, but the core of the discipline remains: the conception, naming, and classification of groups of organisms. As points of reference, recent definitions of taxonomy are presented below. The varied definitions either place taxonomy as a sub-area of systematics (definition 2), invert that relationship (definition 6), or appear to consider the two terms synonymous. There is some disagreement as to whether biological nomenclature is considered a part of taxonomy (definitions 1 and 2), or a part of systematics outside taxonomy. For example, definition 6 is paired with a definition of systematics that places nomenclature outside taxonomy. In 1970, Michener et al. defined "systematic biology" and "taxonomy" in relation to one another as follows: "Systematic biology (hereafter called simply systematics) is the field that ..." This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above. A whole set of terms including taxonomy, systematic biology, systematics, scientific classification, biological classification, and phylogenetics have at times had overlapping meanings – sometimes the same, sometimes slightly different, but always related and intersecting. The broadest meaning of "taxonomy" is used here. The term itself was introduced in 1813 by de Candolle, in his Théorie élémentaire de la botanique. John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics". Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e., nomenclature) of organisms, while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms.
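In programming terms, the ranked hierarchy described above behaves like a small ordered data structure. The following Python sketch is illustrative only: the rank names come from the passage above, while the example classification (the gray wolf) and the function names are my own additions, not part of the article.

RANKS = ["domain", "kingdom", "phylum", "class", "order", "family", "genus", "species"]

# Example classification (gray wolf); the data is common knowledge,
# included only to exercise the check below.
wolf = {
    "domain": "Eukarya",
    "kingdom": "Animalia",
    "phylum": "Chordata",
    "class": "Mammalia",
    "order": "Carnivora",
    "family": "Canidae",
    "genus": "Canis",
    "species": "Canis lupus",
}

def is_valid_hierarchy(classification):
    # Well-formed if every rank used is a known rank and the ranks run
    # from most inclusive to least inclusive (dict order, Python 3.7+).
    used = [r for r in RANKS if r in classification]
    return list(classification) == used

print(is_valid_hierarchy(wolf))  # True

The point of the sketch is only that ranks are ordered and nested; a real taxonomic database would also model synonymy, authorities, and intermediate ranks.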
A taxonomic revision or taxonomic review is a novel analysis of the variation patterns in a particular taxon. This analysis may be executed on the basis of any combination of the various available kinds of characters, such as morphological, anatomical, palynological, biochemical and genetic. A monograph or complete revision is a revision that is comprehensive for a taxon for the information given at a particular time, and for the entire world. Other (partial) revisions may be restricted in the sense that they may only use some of the available character sets or have a limited spatial scope. A revision results in a confirmation of, or new insights into, the relationships between the subtaxa within the taxon under study, which may lead to a change in the classification of these subtaxa, the identification of new subtaxa, or the merger of previous subtaxa. Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred; kinds of taxonomic characters include morphological, physiological, molecular, behavioural, ecological, and geographical characters. The term "alpha taxonomy" is primarily used to refer to the discipline of finding, describing, and naming taxa, particularly species. In earlier literature, the term had a different meaning, referring to morphological taxonomy, and the products of research through the end of the 19th century. William Bertram Turrill introduced the term "alpha taxonomy" in a series of papers published in 1935 and 1937 in which he discussed the philosophy and possible future directions of the discipline of taxonomy. ... there is an increasing desire amongst taxonomists to consider their problems from wider viewpoints, to investigate the possibilities of closer co-operation with their cytological, ecological and genetics colleagues and to acknowledge that some revision or expansion, perhaps of a drastic nature, of their aims and methods, may be desirable ... Turrill (1935) has suggested that while accepting the older invaluable taxonomy, based on structure, and conveniently designated "alpha", it is possible to glimpse a far-distant taxonomy built upon as wide a basis of morphological and physiological facts as possible, and one in which "place is found for all observational and experimental data relating, even if indirectly, to the constitution, subdivision, origin, and behaviour of species and other taxonomic groups". Ideals can, it may be said, never be completely realized. They have, however, a great value of acting as permanent stimulants, and if we have some, even vague, ideal of an "omega" taxonomy we may progress a little way down the Greek alphabet. Some of us please ourselves by thinking we are now groping in a "beta" taxonomy. Turrill thus explicitly excludes from alpha taxonomy various areas of study that he includes within taxonomy as a whole, such as ecology, physiology, genetics, and cytology. He further excludes phylogenetic reconstruction from alpha taxonomy. Later authors have used the term in a different sense, to mean the delimitation of species (not subspecies or taxa of other ranks), using whatever investigative techniques are available, and including sophisticated computational or laboratory techniques. Thus, Ernst Mayr in 1968 defined "beta taxonomy" as the classification of ranks higher than species. An understanding of the biological meaning of variation and of the evolutionary origin of groups of related species is even more important for the second stage of taxonomic activity, the sorting of species into groups of relatives ("taxa") and their arrangement in a hierarchy of higher categories.
This activity is what the term classification denotes; it is also referred to as "beta taxonomy". How species should be defined in a particular group of organisms gives rise to practical and theoretical problems that are referred to as the species problem. The scientific work of deciding how to define species has been called microtaxonomy. By extension, macrotaxonomy is the study of groups at the higher taxonomic ranks, subgenus and above, or simply of clades that include more than one taxon considered a species, expressed in terms of phylogenetic nomenclature. History While some descriptions of taxonomic history attempt to date taxonomy to ancient civilizations, a truly scientific attempt to classify organisms did not occur until the 18th century, with the possible exception of Aristotle, whose works hint at a taxonomy. Earlier works were primarily descriptive and focused on plants that were useful in agriculture or medicine. There are a number of stages in this scientific thinking. Early taxonomy was based on arbitrary criteria, the so-called "artificial systems", including Linnaeus's system of sexual classification for plants (Linnaeus's 1735 classification of animals was entitled "Systema Naturae" ("the System of Nature"), implying that he, at least, believed that it was more than an "artificial system"). Later came systems based on a more complete consideration of the characteristics of taxa, referred to as "natural systems", such as those of de Jussieu (1789), de Candolle (1813) and Bentham and Hooker (1862–1863). These classifications described empirical patterns and were pre-evolutionary in thinking. The publication of Charles Darwin's On the Origin of Species (1859) led to a new explanation for classifications, based on evolutionary relationships. This was the concept of phyletic systems, from 1883 onwards. This approach was typified by those of Eichler (1883) and Engler (1886–1892). The advent of cladistic methodology in the 1970s led to classifications based on the sole criterion of monophyly, supported by the presence of synapomorphies. Since then, the evidentiary basis has been expanded with data from molecular genetics that for the most part complements traditional morphology. Naming and classifying human surroundings likely began with the onset of language. Distinguishing poisonous plants from edible plants is integral to the survival of human communities. Medicinal plant illustrations appear in Egyptian wall paintings from c. 1500 BC, indicating that the uses of different species were understood and that a basic taxonomy was in place. Organisms were first classified by Aristotle (Greece, 384–322 BC) during his stay on the island of Lesbos. He classified beings by their parts, or, in modern terms, attributes, such as having live birth, having four legs, laying eggs, having blood, or being warm-bodied. He divided all living things into two groups: plants and animals. Some of his groups of animals, such as Anhaima (animals without blood, translated as invertebrates) and Enhaima (animals with blood, roughly the vertebrates), as well as groups like the sharks and cetaceans, are still commonly used today. His student Theophrastus (Greece, 370–285 BC) carried on this tradition, mentioning some 500 plants and their uses in his Historia Plantarum. Several plant genera can be traced back to Theophrastus, such as Cornus, Crocus, and Narcissus.
Taxonomy in the Middle Ages was largely based on the Aristotelian system, with additions concerning the philosophical and existential order of creatures. This included concepts such as the great chain of being in the Western scholastic tradition, again deriving ultimately from Aristotle. The Aristotelian system did not classify plants or fungi, due to the lack of microscopes at the time, as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder). This, as well, was taken into consideration in the great chain of being. Advances were made by scholars such as Procopius, Timotheus of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy. In the Muslim world, Al-Damiri (d. 1405) wrote an influential work called Life of Animals (Ḥayāt al-ḥayawān al-kubrā, c. 1371), which treats, in alphabetical order, the 931 animals mentioned in the Quran, the traditions, and the poetic and proverbial literature of the Arabs. During the Renaissance and the Age of Enlightenment, categorizing organisms became more prevalent, and taxonomic works became ambitious enough to replace the ancient texts. This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist". His magnum opus De Plantis came out in 1583, and described more than 1,500 plant species. Two large plant families that he first recognized are still in use: the Asteraceae and Brassicaceae. In the 17th century, John Ray (England, 1627–1705) wrote many important taxonomic works. Arguably his greatest accomplishment was Methodus Plantarum Nova (1682), in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708). His work from 1700, Institutiones Rei Herbariae, included more than 9,000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student. The Swedish botanist Carl Linnaeus (1707–1778) ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735, Species Plantarum in 1753, and Systema Naturae 10th Edition in 1758, he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species, which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower (known as the Linnaean system). Plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively). Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.
Modern system of classification A pattern of groups nested within groups was specified by Linnaeus' classifications of plants and animals, and these patterns began to be represented as dendrograms of the animal and plant kingdoms toward the end of the 18th century, well before Charles Darwin's On the Origin of Species was published. The pattern of the "Natural System" did not entail a generating process, such as evolution, but may have implied it, inspiring early transmutationist thinkers. Among early works exploring the idea of a transmutation of species were Zoonomia in 1796 by Erasmus Darwin (Charles Darwin's grandfather), and Jean-Baptiste Lamarck's Philosophie zoologique of 1809. The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844. With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent. Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was the birds. Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842. The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups. With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use. The cladistic method has emerged since the 1960s. In 1958, Julian Huxley used the term clade; later, in 1960, Cain and Harrison introduced the term cladistic. The salient feature is arranging taxa in a hierarchical evolutionary tree, with the desired objective of all named taxa being monophyletic. A taxon is called monophyletic if it includes all the descendants of an ancestral form. Groups that have descendant groups removed from them are termed paraphyletic, while groups representing more than one branch from the tree of life are called polyphyletic. Monophyletic groups are recognized and diagnosed on the basis of synapomorphies, shared derived character states. Cladistic classifications are compatible with traditional Linnaean taxonomy and the Codes of Zoological and Botanical Nomenclature, to a certain extent. An alternative system of nomenclature, the International Code of Phylogenetic Nomenclature or PhyloCode, has been proposed, which regulates the formal naming of clades. Linnaean ranks are optional and have no formal standing under the PhyloCode, which is intended to coexist with the current, rank-based codes. While the popularity of phylogenetic nomenclature has grown steadily in the last few decades, it remains to be seen whether a majority of systematists will eventually adopt the PhyloCode or continue using the current systems of nomenclature that have been employed (and modified, but arguably not as much as some systematists wish) for over 250 years.
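The monophyly definitions above lend themselves to a concrete test: a group of leaf taxa is monophyletic exactly when it contains every leaf descended from the group's most recent common ancestor (MRCA). The Python sketch below is a toy illustration; the tree, the taxon names, and the helper functions are all invented for the example.

children = {
    "root": ["reptiles+birds", "mammals"],
    "reptiles+birds": ["crocodiles", "dinosaurs+birds", "lizards"],
    "dinosaurs+birds": ["birds", "other_dinosaurs"],
}
parent = {c: p for p, kids in children.items() for c in kids}

def leaves(node):
    # all leaf taxa descended from (or equal to) this node
    kids = children.get(node, [])
    return {node} if not kids else set().union(*(leaves(k) for k in kids))

def path_to_root(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def mrca(taxa):
    paths = [path_to_root(t) for t in taxa]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    return next(n for n in paths[0] if n in common)  # first shared ancestor

def is_monophyletic(taxa):
    return set(taxa) == leaves(mrca(taxa))

print(is_monophyletic({"birds", "other_dinosaurs"}))                  # True: a clade
print(is_monophyletic({"crocodiles", "lizards", "other_dinosaurs"}))  # False

The second call shows the classic case: "reptiles" without birds is paraphyletic, because the MRCA of crocodiles, lizards, and the other dinosaurs also has birds among its descendants.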
Kingdoms and domains Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later. One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera), with the Eukaryota for all organisms whose cells contain a nucleus. A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method. Thomas Cavalier-Smith, who published extensively on the classification of protists, in 2002 proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinomycetota. His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely. Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains. Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive, published treatments of most or all life are rarer. Recent examples are those of Adl et al. (2012, 2019), which cover eukaryotes only with an emphasis on protists, and Ruggiero et al. (2015), covering both eukaryotes and prokaryotes to the rank of order, although both exclude fossil representatives. A separate compilation (Ruggiero, 2014) covers extant taxa to the rank of family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils. Application Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", although enthusiastic naturalists are also frequently involved in the publication of new taxa. Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology. Biological classification is a critical component of the taxonomic process; it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): domain, kingdom, phylum, class, order, family, genus, species, and strain. The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules. In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies) is regulated by the International Code of Zoological Nomenclature (ICZN Code). In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN). The initial description of a taxon involves five main requirements, though often much more information is included, such as the geographic range of the taxon, ecological notes, chemistry, behavior, etc.
How researchers arrive at their taxa varies: depending on the available data and resources, methods range from simple quantitative or qualitative comparisons of striking features to elaborate computer analyses of large amounts of DNA sequence data. An "authority" may be placed after a scientific name. The authority is the name of the scientist or scientists who first validly published the name. For example, in 1758, Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758". The names of authors are often abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation). The system for assigning authorities differs slightly between botany and zoology. However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses. Phenetics In phenetics, also known as taximetrics or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships. It results in a measure of "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish shared ancestral (or plesiomorphic) traits from shared derived (or apomorphic) traits. However, certain phenetic methods, such as neighbor joining, have persisted as rapid estimators of relationships when more advanced methods (such as Bayesian inference) are too computationally expensive. Databases Modern taxonomy uses database technologies to search and catalogue classifications and their documentation. While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species. The catalogue listed 1.64 million species for all kingdoms as of April 2016, claiming coverage of more than three-quarters of the estimated species known to modern science.
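To make the phenetics passage above concrete, here is a minimal sketch of UPGMA-style clustering, a classic distance-based (phenetic) method; neighbor joining, mentioned above, is a related but more involved distance method. The four taxa and their pairwise distances are invented for the illustration.

import itertools

dist = {
    frozenset({"A", "B"}): 2.0, frozenset({"A", "C"}): 6.0,
    frozenset({"B", "C"}): 6.0, frozenset({"A", "D"}): 10.0,
    frozenset({"B", "D"}): 10.0, frozenset({"C", "D"}): 8.0,
}
clusters = {("A",), ("B",), ("C",), ("D",)}

def d(x, y):
    # average linkage: mean distance over all cross-pairs of members
    pairs = [dist[frozenset({i, j})] for i in x for j in y]
    return sum(pairs) / len(pairs)

while len(clusters) > 1:
    x, y = min(itertools.combinations(clusters, 2), key=lambda p: d(*p))
    print(f"merge {x} + {y} at distance {d(x, y):.2f}")
    clusters -= {x, y}
    clusters.add(x + y)  # concatenating tuples keeps the member list flat

Run in order, the merges are (A, B), then (A, B, C), then all four taxa. The grouping reflects overall similarity alone, with no reference to ancestry, which is precisely the property cladists object to.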
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nintendo_64] | [TOKENS: 8572]
Contents Nintendo 64 The Nintendo 64 (N64) is a home video game console developed and marketed by Nintendo. It was released in Japan on June 23, 1996, in North America on September 29, 1996, and in Europe and Australia on March 1, 1997. It was Nintendo's third major home console, following the Super Nintendo Entertainment System, and competed with Sony's PlayStation and the Sega Saturn in the fifth generation of consoles. Nintendo began developing the N64 in 1993 in collaboration with Silicon Graphics. Named for its 64-bit CPU, the N64 features a coprocessor that processes graphics and sound separately, allowing for 3D graphics. The N64 controller was the first to include a thumbstick as a standard feature, and the console includes four controller ports for multiplayer games. Accessories include the Expansion Pak to boost system RAM from 4 to 8 MB, the Rumble Pak for force feedback, and the Controller Pak, a memory card. In Japan, Nintendo released the 64DD, a peripheral that adds internet connectivity and enables expanded, rewritable data storage via proprietary magnetic disks. It was a commercial failure and was never released internationally. The N64 received acclaim for its hardware and technical innovations. The American news magazine Time declared it 1996's "Machine of the Year". Nintendo sold 32.93 million N64s; while it was a major success in North America, it underperformed in Japan and Europe. Nintendo's decision to use ROM cartridges rather than optical discs, citing reduced loading times and software-piracy concerns, alienated third-party developers due to cost and storage constraints; many opted to prioritize PlayStation development. This contributed to a relatively small library of 388 games and required Nintendo to rely on its major first-party franchises such as Mario and The Legend of Zelda, as well as games by second-party developers such as Rare. The N64 outsold the Saturn, but sold far fewer units than the PlayStation. Nintendo discontinued the N64 in 2002, following the release of its successor, the GameCube. In retrospect, video game journalists regard the N64 as one of the most iconic game consoles. Several N64 games, such as Super Mario 64 (1996), GoldenEye 007 (1997), and The Legend of Zelda: Ocarina of Time (1998), have been listed among the greatest and most influential video games. Nintendo has rereleased many N64 games for its subsequent consoles via the Virtual Console and Nintendo Classics services. The N64 was the last major cartridge-based home console until the Nintendo Switch in 2017. History Following the video game crash of 1983, Nintendo revitalized the industry with the release of its second home console, the Family Computer (Famicom), launched in Japan in 1983 and later introduced internationally as the Nintendo Entertainment System (NES) in 1985. Both the NES and its successor, the Super Nintendo Entertainment System (SNES), released in Japan in 1990 and internationally in 1991, achieved significant commercial success. However, SNES sales declined during the Japanese economic recession. At the same time, competition intensified with the arrival of the Sega Saturn, a 32-bit console, which outpaced the aging 16-bit SNES and highlighted the urgency for Nintendo to upgrade its hardware or risk losing market share. Additional competition came from Atari's Jaguar system and the 3DO system. In an effort to extend the SNES's lifespan, Nintendo explored the development of a CD-ROM peripheral through partnerships with CD-ROM technology pioneers Philips and Sony.
Despite the creation of early hardware prototypes, both collaborations ultimately collapsed, and no games were released by Nintendo or its third-party partners. Philips retained limited licensing rights and used them to release original Mario and Legend of Zelda games on its competing CD-i device. Meanwhile, Sony leveraged its progress to develop what would become the PlayStation console. During this period, third-party developers also expressed growing dissatisfaction with Nintendo's strict licensing policies. Silicon Graphics, Inc. (SGI), a long-established leader in high-performance computing, sought to expand by adapting its supercomputing technology into the higher-volume consumer market, starting with the video game industry. To support this shift, SGI redesigned its MIPS R4000 CPU family, reducing power consumption, and aimed to lower the unit cost from up to US$200 (equivalent to $446 in 2025) to approximately $40 (equivalent to $89 in 2025). SGI developed a video game chipset prototype and sought an established industry partner. SGI founder Jim Clark first pitched the concept to Tom Kalinske, CEO of Sega of America, who said they were "quite impressed." However, Sega's Japanese engineers rejected the design, citing technical issues, which SGI later resolved. Nintendo disputes this account, claiming SGI ultimately favored Nintendo because Sega had demanded exclusive rights to the technology, while Nintendo was open to a non-exclusive licensing agreement. In early 1993, Clark met with Nintendo president Hiroshi Yamauchi. By August 23, during Nintendo's annual Shoshinkai trade show, the companies announced a joint development and licensing agreement for what they called "Project Reality." They projected an arcade debut in 1994 and a home release by late 1995, targeting a retail price under $250 (equivalent to $528 in 2025). Michael Slater, publisher of Microprocessor Report, highlighted the significance of the partnership, saying, "The mere fact of a business relationship there is significant because of Nintendo's phenomenal ability to drive volume. If it works at all, it could bring MIPS to levels of volume [SGI] never dreamed of." SGI named the console's core chipset "Reality Immersion Technology", featuring the MIPS R4300i CPU and the Reality Coprocessor for graphics, audio, and memory management. NEC, Toshiba, and Sharp would provide manufacturing support. The chipset was a collaborative effort between SGI and its subsidiary, MIPS Technologies. SGI and Nintendo also partnered with Rambus, designing a bus architecture to transfer data at 500 MB/s using its proprietary RDRAM. Rambus hoped the partnership would encourage RDRAM adoption in PCs. To enable game creation before the hardware was finalized, SGI offered a development platform based on the Onyx supercomputer to simulate expected console performance. The Onyx was priced at up to $250,000 (equivalent to $540,000 in 2025). It included a $50,000 (equivalent to $110,000 in 2025) RealityEngine2 graphics board and four 150 MHz R4400 CPUs. Once the chipset was finalized, the supercomputing setup was replaced by a simulation board integrated into the low-end SGI Indy workstation in July 1995. SGI's early performance estimates proved largely accurate; LucasArts, for instance, ported a prototype Star Wars game to the final hardware in just three days. On June 23, 1994, at the Consumer Electronics Show, Nintendo announced that the upcoming console would be named the "Ultra 64".
The console design was shown, but its controller remained under wraps. The most controversial detail was Nintendo's decision to use limited-capacity ROM cartridges rather than the increasingly popular CD-ROM format, despite previous development work for a CD-based SNES. Nintendo defended the decision, citing the performance advantages of cartridges. The Ultra 64 was marketed as the world's first 64-bit console. Though Atari had previously advertised the Jaguar as a 64-bit system, its architecture used two 32-bit coprocessors and a 16/32-bit Motorola 68000 CPU, falling short of Nintendo's full 64-bit implementation. Later in 1994, Nintendo signed a licensing agreement with arcade giant Williams. The company's Midway studio would develop Ultra 64-branded arcade titles, including Killer Instinct and Cruis'n USA. However, these arcade machines used hardware distinct from the home console: they lacked the Reality Coprocessor, used different MIPS CPUs, and relied on hard drives instead of cartridges to store game data. The expanded storage enabled games like Killer Instinct to incorporate pre-rendered 3D character sprites and full-motion video backgrounds. In April 1995, Nintendo introduced its "Dream Team" of developers. Graphics development tools were provided by Alias Research and MultiGen, while Software Creations provided audio tools. Game development studios included Acclaim, Angel Studios, DMA Design, GameTek, Midway, Paradigm, Rare, Sierra On-Line, and Spectrum HoloByte. Despite the initial hype, the Dream Team did not live up to expectations: some studios, like GameTek, failed to deliver games, while only a few, including Rare, Acclaim, and Midway, made a significant impact. Nintendo originally planned to launch the console as the "Ultra Famicom" in Japan and "Nintendo Ultra 64" internationally. While rumors claimed trademark conflicts with Konami's Ultra Games prompted a name change, Nintendo denied this, citing a desire for a unified global brand. The final name "Nintendo 64" was proposed by EarthBound creator Shigesato Itoi. Still, the original name lived on in the console's model numbering prefix "NUS-", widely believed to stand for "Nintendo Ultra Sixty-four." The newly renamed Nintendo 64 console was unveiled to the public in playable form on November 24 at Nintendo's Shoshinkai 1995 trade show. Eager for a preview, "hordes of Japanese schoolkids huddled in the cold outside ... the electricity of anticipation clearly rippling through their ranks". Game Zero magazine disseminated photos of the event two days later. Official coverage by Nintendo followed via the Nintendo Power website and print magazine. The console was originally slated for release by Christmas of 1995. In May 1995, Nintendo delayed the release to April 21, 1996. Consumer anticipation of a Nintendo release the following year at a lower price than the competition reportedly reduced sales of competing Sega and Sony consoles during the important Christmas shopping season. Electronic Gaming Monthly editor Ed Semrad even suggested that Nintendo may have announced the April 21, 1996, release date with this end in mind, knowing in advance that the system would not be ready by that date. In its explanation of the delay, Nintendo claimed it needed more time for Nintendo 64 software to mature, and for third-party developers to produce games. Adrian Sfarti, a former engineer for SGI, attributed the delay to hardware problems; he claimed that the chips underperformed in testing and were being redesigned.
In 1996, the Nintendo 64's software development kit was completely redesigned as the Windows-based Partner-N64 system by Kyoto Microcomputer Co., Ltd. of Japan. The Nintendo 64's release date was later delayed again, to June 23, 1996. Nintendo said the reason for this delay, and in particular the cancellation of plans to release the console in all markets worldwide simultaneously, was that the company's marketing studies now indicated that it would not be able to manufacture enough units to meet demand by April 21, 1996, potentially angering retailers in the same way Sega had done with its surprise early launch of the Saturn in North America and Europe. To counteract the possibility that gamers would grow impatient with the wait for the Nintendo 64 and purchase one of the several competing consoles already on the market, Nintendo ran ads for the system well in advance of its announced release dates, with slogans like "Wait for it..." and "Is it worth the wait? Only if you want the best!" Popular Electronics called the launch a "much hyped, long-anticipated moment". Several months before the launch, GamePro reported that many gamers, including a large percentage of their own editorial staff, were already saying they favored the Nintendo 64 over the Saturn and PlayStation. The console was first released in Japan on June 23, 1996. Though the initial shipment of 300,000 units sold out on the first day, Nintendo successfully avoided a repeat of the Super Famicom launch-day pandemonium, in part by using a wider retail network which included convenience stores. The remaining 200,000 units of the first production run shipped on June 26 and 30, with almost all of them reserved ahead of time. In the months between the Japanese and North American launches, the Nintendo 64 saw brisk sales on the American gray market, with import stores charging as much as $699 plus shipping for the system. The Nintendo 64 was first sold in North America on September 26, 1996, though it had been advertised for the 29th. It was launched with just two games in the United States, Pilotwings 64 and Super Mario 64; Cruis'n USA was pulled from the line-up less than a month before launch because it did not meet Nintendo's quality standards. In 1994, prior to the launch, Nintendo of America chairman Howard Lincoln emphasized the quality of first-party games, saying "... we're convinced that a few great games at launch are more important than great games mixed in with a lot of dogs". The American launch was wildly successful, breaking records: first-day sales were significantly higher than those of the PlayStation and Saturn launches the year before. The PAL version of the console was released in Europe on March 1, 1997, except in France, where it was released on September 1 of the same year. According to Nintendo of America representatives, Nintendo had been planning a simultaneous launch in Japan, North America, and Europe, but market studies indicated that worldwide demand for the system far exceeded the number of units that could be ready by launch, potentially leading to consumer and retailer frustration. Originally intended to be priced at US$250, the console was ultimately launched at US$199.99 to make it competitive with Sony's and Sega's offerings, as both the Saturn and PlayStation had been lowered to $199.99 earlier that summer. Nintendo priced the console as an impulse purchase, a strategy from the toy industry. The price of the console in the United States was further reduced in August 1998.
The Nintendo 64's North American launch was backed with a $54 million marketing campaign by Leo Burnett Worldwide (meaning over $100 in marketing per North American unit that had been manufactured up to this point). While the competing Saturn and PlayStation both set teenagers and adults as their target audience, the Nintendo 64's target audience was pre-teens. To boost sales during the slow post-Christmas season, Nintendo and General Mills worked together on a promotional campaign that appeared in early 1999. The advertisement by Saatchi & Saatchi, New York, began on January 25 and encouraged children to buy Fruit by the Foot snacks for tips to help them with their Nintendo 64 games. Ninety different tips were available, with three variations of thirty tips each. Nintendo advertised its Funtastic Series of peripherals with a $10 million print and television campaign from February 28 to April 30, 2000. Leo Burnett Worldwide was in charge again. Hardware The Nintendo 64's architecture is built around the Reality Coprocessor (RCP), which serves as the system's central hub for processing graphics, audio, and memory management. It works in tandem with the VR4300, a 93.75 MHz 64-bit CPU fabricated by NEC, with a performance of 125 million instructions per second. Popular Electronics compared its processing power to that of contemporary Pentium desktop processors. Though constrained by a narrower 32-bit system bus, the VR4300 retained the computational capabilities of the more powerful 64-bit MIPS R4300i on which it was based. However, software rarely utilized 64-bit precision, as Nintendo 64 games primarily relied on faster and more compact 32-bit operations. The RCP operates at 62.5 MHz and contains two critical components: the "Signal Processor", responsible for sound and graphics processing, and the "Display Processor", which manages pixel drawing. The RCP renders visual data into the graphics frame buffer and controls direct memory access (DMA), transferring video and audio data from memory to a digital-to-analog converter (DAC) for final output. A key advantage of the Nintendo 64's architecture is that the CPU and RCP operate in parallel, dividing tasks for better efficiency. While the VR4300 executes the main game logic, the RCP processes graphics and sound independently. This design enables 3D rendering and complex audio effects but also requires careful coordination to avoid performance bottlenecks. The Nintendo 64 was among the first consoles to implement a unified memory architecture, eliminating separate banks of random-access memory (RAM) for CPU, audio, and video operations. It features 4 MB of RDRAM (Rambus DRAM), expandable to 8 MB with the Expansion Pak. At the time, RDRAM was a relatively new technology that provided high bandwidth at a lower cost. Audio processing is handled by both the CPU and the RCP and is output through a DAC at a sample rate of up to 44.1 kHz with 16-bit depth, matching CD quality. However, this level of fidelity was rarely used due to the high CPU demand and the storage limitations of the ROM cartridges. Most games featured stereo sound, with some supporting Dolby Pro Logic surround sound. For video output, the system supports composite and S-Video output, using the same cables as the Super NES and GameCube. It can display up to 16.8 million colors and resolutions ranging from 256×224 to 640×480 pixels. While most games run at 320×240, some support higher resolutions, often requiring the Expansion Pak. The console also accommodates widescreen formats, with games offering either anamorphic 16:9 or letterboxed display modes.
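A back-of-the-envelope calculation shows how quickly CD-quality audio collides with the storage figures given above. The Python sketch below is mine: the sample rate, bit depth, cartridge ceiling, and frame-buffer resolution come from the text, while the stereo assumption and the arithmetic itself are my own.

sample_rate = 44_100            # samples per second, as stated above
bytes_per_sample = 2            # 16-bit depth
channels = 2                    # stereo (assumed)

bytes_per_second = sample_rate * bytes_per_sample * channels  # 176,400 B/s
cartridge = 64 * 1024 * 1024    # the largest cartridge size, 64 MB

print(cartridge / bytes_per_second / 60)  # ~6.3 minutes fills the cartridge

# For comparison, one 320x240 frame buffer at 16 bits per pixel:
print(320 * 240 * 2)            # 153,600 bytes, roughly 150 KB per frame

Uncompressed CD-quality stereo would consume an entire 64 MB cartridge in a little over six minutes, leaving nothing for code or graphics, which is consistent with the note above that such fidelity was rarely used.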
The Nintendo 64 controller features a distinctive "M"-shaped design, with a "control stick", making Nintendo the first manufacturer to include a thumbstick as a standard feature in its primary controller. While functionally similar to an analog stick, the control stick is digital, operating on the same principles as a ball mouse. The controller includes a D-pad and ten buttons: large A and B buttons, a Start button, four C-buttons (Up, Down, Left, and Right), two shoulder buttons (L and R), and a Z trigger positioned on the back. Popular Electronics described its shape as "evocative of some alien spaceship." While noting that the three-handle design could be confusing, the magazine praised its versatility, stating "the separate grips allow different hand positions for various game types". A port on the bottom of the controller allows users to connect various accessories, including the Controller Pak for saving game data, the Rumble Pak for force feedback, and the Transfer Pak, which enabled data transfer between supported Nintendo 64 and Game Boy games. The Nintendo 64 was also one of the first consoles to feature four controller ports. According to Shigeru Miyamoto, Nintendo included four ports because it was the first console powerful enough to handle four-player split-screen gameplay without significant slowdown. After multiple attempts to develop a compact disc-based add-on for the Super NES, many in the industry expected Nintendo's next console to follow Sony's PlayStation in adopting the CD format. However, when the first Nintendo 64 prototypes debuted in November 1995, observers were surprised to find that the system once again used ROM cartridges. Nintendo 64 cartridges range in size from 4 to 64 MB and often include built-in save functionality. Nintendo's selection of the cartridge medium was highly controversial and is frequently cited as a key factor in the company losing its dominant position in the gaming market. While cartridges offered advantages such as faster load times and durability, their limitations (higher production costs, lower storage capacity, and longer manufacturing lead times) posed challenges for developers. Many of the format's benefits required innovative solutions, which only emerged later in the console's lifecycle. As one developer later recalled: "The big strength was the N64 cartridge. We use the cartridge almost like normal RAM and are streaming all level data, textures, animations, music, sound and even program code while the game is running. With the final size of the levels and the amount of textures, the RAM of the N64 never would have been even remotely enough to fit any individual level. So the cartridge technology really saved the day." Nintendo cited several reasons for choosing cartridges. The biggest advantage was their fast load times: unlike CDs, which required lengthy loading screens, cartridges provided near-instant gameplay. This advantage had previously helped Nintendo compete against home computers like the Commodore 64 in the 1980s. Although cartridges are susceptible to long-term environmental damage, they are significantly more durable than compact discs. Another key factor was copyright protection: cartridges were harder to pirate than CDs, reducing widespread software piracy. While unauthorized N64-to-PC devices eventually emerged, they were far less common than the more easily copied PlayStation CDs.
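The developer quote above describes streaming from the cartridge as if it were extra RAM. The Python sketch below is a loose, invented illustration of that idea: a small resident budget holds only the chunks of a much larger data store that are currently needed. All sizes and names are assumptions for the example, not N64 specifics.

CARTRIDGE = bytes(32 * 1024 * 1024)   # stand-in for 32 MB of level data
RAM_BUDGET = 1 * 1024 * 1024          # pretend only 1 MB is free for level data
CHUNK = 64 * 1024                     # stream in 64 KB pieces

resident = {}                         # chunk index -> bytes currently "in RAM"

def touch(offset):
    # Ensure the chunk containing `offset` is resident, evicting the
    # oldest chunk first once the RAM budget is full (FIFO policy).
    index = offset // CHUNK
    if index not in resident:
        while len(resident) * CHUNK >= RAM_BUDGET:
            resident.pop(next(iter(resident)))
        resident[index] = CARTRIDGE[index * CHUNK:(index + 1) * CHUNK]
    return resident[index][offset % CHUNK]

touch(5 * 1024 * 1024)                # e.g. the player enters a new area
print(len(resident), "chunk(s) resident")

A real engine would stream by asset and prefetch what is needed next rather than faulting on demand, but the budget-and-evict structure is the core of the technique the quote describes.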
Cartridges also had notable drawbacks. They took longer to manufacture than CDs, requiring at least two weeks per production run. This forced publishers to predict demand ahead of time, risking either overproduction of costly cartridges or weeks-long shortages if demand was underestimated. Additionally, cartridges were significantly more expensive to produce than CDs, leading to higher game prices, typically US$10 (equivalent to $21 in 2025) more than PlayStation titles. Third-party developers also complained that they were at an unfair disadvantage: since Nintendo controlled cartridge manufacturing, it could sell its own first-party games at a lower price and prioritize their production over those of other companies. Storage limitations were another key issue. While Nintendo 64 cartridges maxed out at 64 MB, CDs could hold 650 MB. As games became more complex, this restriction forced compromises, including compressed textures, shorter music tracks, and fewer cutscenes. Full-motion video was rarely feasible, and many multiplatform games had to be scaled down for the N64. These cost and storage constraints pushed many third-party developers toward the PlayStation. Square and Enix, which had originally planned to release Final Fantasy VII and Dragon Warrior VII on the Nintendo 64, switched to Sony's console due to storage constraints. Other developers, like Konami, released far fewer N64 titles than PlayStation games. As a result, new N64 releases were less frequent than those of its competitors. Despite these challenges, the Nintendo 64 remained competitive, bolstered by strong first-party titles and exclusive hits like GoldenEye 007. Nintendo's flagship franchises, including Mario and Zelda, retained strong brand appeal, and deals with second-party developers like Rare further strengthened the console's game library. Programming for the Nintendo 64 presented unique challenges alongside notable advantages. The Economist described development for the system as "horrendously complex". Like many game consoles and embedded systems, the Nintendo 64 featured highly specialized hardware optimizations, which were further complicated by design oversights, limitations in 3D technology, and manufacturing constraints. As the console neared the end of its lifecycle, Nintendo's hardware chief, Genyo Takeda, repeatedly reflected on these difficulties, using the Japanese term hansei (反省), meaning "reflective regret". Looking back, he admitted, "When we made Nintendo 64, we thought it was logical that if you want to make advanced games, it becomes technically more difficult. We were wrong. We now understand it's the cruising speed that matters, not the momentary flash of peak power." Unlike the NES and Super NES, which employed region-specific branding and hardware variations, the Nintendo 64 maintained a consistent design and brand worldwide. While Nintendo initially announced the use of regional lockout chips to restrict game compatibility, the platform ultimately enforced region-locking through physical cartridge design: each market's cartridges had different notches on the back, preventing a cartridge from one region from being inserted into a foreign console. The Nintendo 64 comes in several colors. The standard Nintendo 64 is charcoal gray, nearly black, and the controller is solid gray (later releases in the U.S., Canada, and Australia included a bonus second controller in Atomic Purple); the console was also released in various other colors and special editions.
Most Nintendo 64 game cartridges are gray in color, but some games have a colored cartridge. Fourteen games have black cartridges, and other colors (such as yellow, blue, red, gold, and green) were each used for six or fewer games. Several games, such as The Legend of Zelda: Ocarina of Time, were released both in standard gray and in colored, limited edition versions. Games A total of 388 Nintendo 64 games were officially released, with just 85 exclusively sold in Japan. For comparison, the PlayStation received 4,105 games, the Saturn got over 1,000, the SNES got 1,755 games, and the NES got 716 Western releases plus over 1,000 in Japan. The considerably smaller Nintendo 64 game library has been attributed by some to the controversial decision not to adopt the CD-ROM, and to the difficulty of programming for its complex architecture. This trend is also seen as a result of Hiroshi Yamauchi's strategy, announced during his speech at the Nintendo 64's November 1995 unveiling, that Nintendo would restrict the number of games produced for the Nintendo 64 so that developers would focus on higher quality instead of quantity. The Los Angeles Times also observed that this was part of Nintendo's "penchant for perfection [...] while other platforms offer quite a bit of junk, Nintendo routinely orders game developers back to the boards to fix less-than-perfect titles". Although it had much less third-party support than rival consoles, Nintendo's strong first-party franchises such as Mario enjoyed wide brand appeal. Nintendo's second-party developers, such as Rare, released groundbreaking titles. Consequently, the Nintendo 64 game library included a high number of critically acclaimed and widely sold games. According to TRSTS reports, three of the top five best-selling games in the U.S. for December 1996 were Nintendo 64 games (the remaining two were Super NES games). Super Mario 64 is the best-selling console game of the generation, with 11 million units sold, beating Gran Turismo for the PlayStation (at 10.85 million) and Final Fantasy VII (at 9.72 million). The game also received much praise from critics and helped to pioneer three-dimensional control schemes. GoldenEye 007 was important in the evolution of the first-person shooter, and has been named one of the greatest in the genre. The Legend of Zelda: Ocarina of Time set the standard for future 3D action-adventure games and is considered by many to be one of the greatest games ever made. The most graphically demanding Nintendo 64 games on larger 32 or 64 MB cartridges are among the most advanced and detailed of the 32- and 64-bit platforms. To maximize the hardware, developers created custom microcode. Nintendo 64 games running on custom microcode benefit from much higher polygon counts and more advanced lighting, animation, physics, and AI routines than the competition. Conker's Bad Fur Day is arguably the pinnacle of its generation, combining multicolored real-time lighting, real-time shadowing, and detailed texturing with a full in-game facial animation system. The Nintendo 64 is capable of executing many more advanced and complex rendering techniques than its competitors. It is the first home console to feature trilinear filtering to smooth textures. This contrasts with the Saturn and PlayStation, which use nearest-neighbor interpolation and produce more pixelated textures (a minimal sketch of the difference appears below). Overall, however, the results of the Nintendo cartridge system were mixed. 
The smaller storage size of ROM cartridges can limit the number of available textures. As a result, many games with much smaller 8 or 12 MB cartridges are forced to stretch textures over larger surfaces. This problem is compounded by the 4,096-byte limit of on-chip texture memory, and the result is often a distorted, out-of-proportion appearance. Many games with larger 32 or 64 MB cartridges avoid this issue entirely, including Resident Evil 2, Sin and Punishment: Successor of the Earth, and Conker's Bad Fur Day, allowing for more detailed graphics with multiple, multi-layered textures across all surfaces. Several Nintendo 64 games have been released for the Wii and Wii U Virtual Console (VC) services and are playable with the Classic Controller, GameCube controller, Wii U Pro Controller, or Wii U GamePad. Differences include a higher resolution and a more consistent framerate than the Nintendo 64 originals. Some features, such as Rumble Pak functionality, are not available in the Wii versions. Some features are also changed on the Virtual Console releases. For example, the VC version of Pokémon Snap allows players to send photos through the Wii's message service, and Wave Race 64's in-game content was altered due to the expiration of the Kawasaki license. Several games developed by Rare were released on Microsoft's Xbox Live Arcade service, including Banjo-Kazooie, Banjo-Tooie, and Perfect Dark, following Microsoft's acquisition of Rareware in 2002. One exception is Donkey Kong 64, released in April 2015 on the Wii U Virtual Console, as Nintendo retained the rights to the game. Select Nintendo 64 games have been re-released via the Nintendo Classics service as part of the "Expansion Pack" tier of the Nintendo Switch Online service. With the launch of the Nintendo Switch 2 on June 5, 2025, the Nintendo 64 – Nintendo Classics library gained additional features: a CRT filter, a rewind function, and button remapping (one of these features is also available on the original Nintendo Switch). Several unofficial third-party emulators can play Nintendo 64 games on other platforms, such as Windows, Macintosh, and smartphones. Accessories Nintendo released a peripheral platform called 64DD, where "DD" stands for "Disk Drive". Connecting to the expansion slot at the bottom of the system, the 64DD turns the Nintendo 64 console into an Internet appliance, a multimedia workstation, and an expanded gaming platform. This large peripheral allows players to play Nintendo 64 disk-based games, capture images from an external video source, and connect to the now-defunct Japanese Randnet online service. Not long after its limited mail-order release, the peripheral was discontinued. Only nine games were released, including the four Mario Artist games (Paint Studio, Talent Studio, Communication Kit, and Polygon Studio). Many planned games were eventually released in cartridge format or on other game consoles. The 64DD and the accompanying Randnet online service were released only in Japan. To illustrate the fundamental significance of the 64DD to all game development at Nintendo, lead designer Shigesato Itoi said: "I came up with a lot of ideas because of the 64DD. All things start with the 64DD. There are so many ideas I wouldn't have been allowed to come up with if we didn't have the 64DD". Shigeru Miyamoto concluded: "Almost every new project for the N64 is based on the 64DD. ... we'll make the game on a cartridge first, then add the technology we've cultivated to finish it up as a full-out 64DD game". 
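To make the texture-filtering contrast described above concrete, here is a minimal sketch in plain Python of the two sampling strategies. It is illustrative only: the helper names are invented, the texture is a toy 2x2 greyscale array, and nothing here models actual N64 hardware. Nearest-neighbour sampling snaps to a single texel, as on the Saturn and PlayStation, while bilinear sampling blends the four surrounding texels; the N64's trilinear filtering additionally blends bilinear results from two mipmap levels.

```python
import math

def sample_nearest(tex, u, v):
    # Nearest-neighbour: snap (u, v) to the single closest texel.
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def sample_bilinear(tex, u, v):
    # Bilinear: weight the four surrounding texels by distance.
    # Trilinear filtering would also blend across two mipmap levels.
    h, w = len(tex), len(tex[0])
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def texel(xx, yy):  # clamp lookups to the texture edges
        return tex[max(0, min(yy, h - 1))][max(0, min(xx, w - 1))]
    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bot = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

# A 2x2 greyscale texture stretched across eight screen pixels:
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print([sample_nearest(tex, u / 7, 0.25) for u in range(8)])
# -> [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   (hard texel edges)
print([round(sample_bilinear(tex, u / 7, 0.25), 2) for u in range(8)])
# -> [0.0, 0.0, 0.07, 0.36, 0.64, 0.93, 1.0, 1.0]   (smooth ramp)
```

Stretching the same tiny texture across many pixels shows exactly the trade-off the text describes: blocky steps under nearest-neighbour lookup versus a smooth ramp under filtered lookup.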
iQue Player The iQue Player was a handheld TV game version of the Nintendo 64, released only in China on November 17, 2003, after China banned video game consoles. The games that were released in the iQue Player's lifetime (from 2003 to 2016) are Super Mario 64, The Legend of Zelda: Ocarina of Time, Mario Kart 64, Wave Race 64, Star Fox 64, Yoshi's Story, Paper Mario, Super Smash Bros., F-Zero X, Dr. Mario 64, Excitebike 64, Sin and Punishment, Custom Robo and Animal Crossing. Reception The Nintendo 64 received acclaim from critics. Reviewers praised the console's advanced 3D graphics and gameplay, while criticizing the lack of games. On G4techTV's Filter, the Nintendo 64 was voted No. 1 by registered users. In February 1996, Next Generation magazine called the Nintendo Ultra 64 the "best kept secret in videogames" and the "world's most powerful game machine". It called the system's November 24, 1995, unveiling at Shoshinkai "the most anticipated videogaming event of the 1990s, possibly of all time". Previewing the Nintendo 64 shortly before its launch, Time magazine praised the realistic movement and gameplay provided by the combination of fast graphics processing, pressure-sensitive controller, and the Super Mario 64 game. The review praised the "fastest, smoothest game action yet attainable via joystick at the service of equally virtuoso motion", where "[f]or once, the movement on the screen feels real". Asked if consumers should buy a Nintendo 64 at launch, buy it later, or buy a competing system, a panel of six GamePro editors voted almost unanimously to buy at launch; one editor said consumers who already own a PlayStation and are on a limited budget should buy it later, and all others should buy it at launch. At launch, the Los Angeles Times called the system "quite simply, the fastest, most graceful game machine on the market". Its form factor was described as small, light, and "built for heavy play by kids", unlike the "relatively fragile Sega Saturn". Despite concerns about a major console launch during a sharp, several-year-long decline in the game console market, the review said that the long-delayed Nintendo 64 was "worth the wait" in the company's pursuit of quality. Although the Times expressed concerns about having only two launch games at retail and twelve expected by Christmas, this was suggested to be part of Nintendo's "penchant for perfection", as "while other platforms offer quite a bit of junk, Nintendo routinely orders game developers back to the boards to fix less-than-perfect titles". Describing the quality control incentives associated with cartridge-based development, the Times cited Nintendo's position that cartridge game developers tend to "place a premium on substance over flash", and noted that the launch games lack the "poorly acted live-action sequences or half-baked musical overtures" which it says tend to be found on CD-ROM games. Praising Nintendo's controversial choice of the cartridge medium with its "nonexistent" load times and "continuous, fast-paced action CD-ROMs simply cannot deliver", the review concluded that "the cartridge-based Nintendo 64 delivers blistering speed and tack-sharp graphics that are unheard of on personal computers and make competing 32-bit, disc-based consoles from Sega and Sony seem downright sluggish". Time named it the 1996 Machine of the Year, saying the machine had "done to video-gaming what the 707 did to air travel". 
The magazine said the console achieved "the most realistic and compelling three-dimensional experience ever presented by a computer". Time credited the Nintendo 64 with revitalizing the video game market, "rescuing this industry from the dustbin of entertainment history". The magazine suggested that the Nintendo 64 would play a major role in introducing children to digital technology in the final years of the 20th century. The article concluded by saying the console had already provided "the first glimpse of a future where immensely powerful computing will be as common and easy to use as our televisions". The console also won the 1996 Spotlight Award for Best New Technology. Popular Electronics complimented the system's hardware, calling its specifications "quite impressive". It found the controller "comfortable to hold, and the controls to be accurate and responsive". In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the Nintendo 64 scores of 8.0, 7.0, 7.5, 7.5, and 9.0. They highly praised the power of the hardware and the quality of the first-party games, especially those developed by Rare and by Nintendo's internal studios, but also commented that the third-party output to date had been mediocre and the first-party output was not enough by itself to provide Nintendo 64 owners with a steady stream of good games or a full breadth of genres. Next Generation's end-of-1997 review expressed similar concern about third-party support, while also noting signs that the third-party output was improving, and speculated that the Nintendo 64's arrival late in its generation could lead to an early obsolescence when Sony's and Sega's successor consoles launched. However, it said that for some, Nintendo's reliably high-quality software would outweigh those drawbacks, and gave the system 3.5 out of 5 stars. Developer Factor 5, which created some of the system's most technologically advanced games along with the system's audio development tools for Nintendo, said, "[T]he N64 is really sexy because it combines the performance of an SGI machine with a cartridge. We're big arcade fans, and cartridges are still the best for arcade games or perhaps a really fast CD-ROM. But there's no such thing for consoles yet [as of 1998]". The Nintendo 64 was highly successful in North America; conversely, sales proved underwhelming in the Japanese and European markets. Nintendo reported that the system's hardware and software sales had ceased by 2004, three years after the GameCube's launch; as of December 31, 2009, the Nintendo 64 had yielded a lifetime total of 5.54 million system units sold in Japan, 20.63 million in the Americas, and 6.75 million in other regions, for a total of 32.93 million units. The Nintendo 64 was in heavy demand upon its release. Industry analyst David Cole said, "You have people fighting to get it from stores". Time called the purchasing interest "that rare and glorious middle-class Cabbage Patch-doll frenzy". The magazine said celebrities Matthew Perry, Steven Spielberg, and Chicago Bulls players called Nintendo to ask for special treatment to get their hands on the console. In North America and Europe, the console had only two launch games, with Super Mario 64 as its killer app. During the system's first three days on the market, retailers sold 350,000 of 500,000 available console units. During its first four months, the console yielded 500,000 unit sales in North America. 
In early 1997, Nintendo outsold Sony and Sega in the United States; by the end of its first full year, the console had sold 3.6 million units there. BusinessWire reported that the Nintendo 64 was responsible for a 156% increase in Nintendo's sales by 1997. Five different Nintendo 64 games exceeded 1 million in sales during 1997. After a strong launch year, the decision to use the cartridge format is said to have contributed to the diminished release pace and higher price of games compared to the competition, and thus Nintendo was unable to maintain its lead in the United States. The console would continue to outsell the Sega Saturn throughout the generation, but would trail behind the PlayStation. Nintendo's efforts to attain dominance in the key 1997 holiday shopping season were also hurt by game delays. Five high-profile Nintendo games slated for release by Christmas 1997 (The Legend of Zelda: Ocarina of Time, Banjo-Kazooie, Conker's Quest, Yoshi's Story, and Major League Baseball Featuring Ken Griffey Jr.) were delayed until 1998, and Diddy Kong Racing was announced at the last minute in an effort to help fill the gap. To take the edge off the console's software pricing disadvantage, Nintendo worked to lower manufacturing costs for Nintendo 64 cartridges, and leading into the 1997 holiday shopping season announced a new pricing structure which amounted to a roughly 15% price cut on both first-party and third-party games. Response from third-party publishers was positive, with key third-party publisher Capcom saying the move led it to reconsider its decision not to publish games for the console. In Japan, the console was not as successful, failing to outsell the PlayStation and the Sega Saturn. Benimaru Itō, a developer for Mother 3 and friend of Shigeru Miyamoto, speculated in 1997 that the Nintendo 64's lower popularity in Japan was due to the lack of role-playing video games. Nintendo CEO Hiroshi Yamauchi also said the console's lower popularity in Japan was most likely due to the lack of role-playing games, and the small number of games being released in general. The higher price of cartridges as opposed to CD-ROMs has also been cited as a reason for the system's lackluster third-party support, which led domestically important titles, such as Dragon Quest VII, to move away from Nintendo's platforms to its rivals. Shigeru Miyamoto commented at the time that the Nintendo 64's situation in Japan was grim and that it was also tough in Europe, but that these difficulties were overcome by its success in America, and therefore "the business has become completely viable". The Nintendo 64 is one of the most recognized video game systems in history. Designed in tandem with the controller, Super Mario 64 and The Legend of Zelda: Ocarina of Time are widely considered by critics and the public to be two of the greatest and most influential games of all time. GoldenEye 007 is one of the most influential games in the shooter genre. The Aleck 64 is an arcade variant of the Nintendo 64, designed by Seta in cooperation with Nintendo and sold from 1998 to 2003 only in Japan. In 2011, IGN ranked the Nintendo 64 as the ninth-greatest video game console of all time.
========================================
[SOURCE: https://en.wikipedia.org/wiki/New_Style] | [TOKENS: 2191]
Contents Old Style and New Style dates Old Style (O.S.) and New Style (N.S.) indicate dating systems before and after a calendar change, respectively. Usually, they refer to the change from the Julian calendar to the Gregorian calendar as enacted in various European countries between 1582 and 1923. Before as well as after the legal change, writers used the dual dating convention to specify a given day by giving its date according to both styles of dating (to ensure that the day concerned was identified unambiguously). In England, Wales, Ireland, and Britain's American colonies, there were two calendar changes, both in 1752. The first adjusted the start of a new year from 25 March (Lady Day, the Feast of the Annunciation) to 1 January, a change which Scotland had made in 1600. The second discarded the Julian calendar in favour of the Gregorian calendar, skipping 11 days in the calendar for September 1752 to do so. For countries such as Russia where no start-of-year adjustment took place,[a] O.S. and N.S. simply indicate the Julian and Gregorian dating systems respectively. Differences between Julian and Gregorian dates The need to correct the calendar arose from the realisation that the correct figure for the number of days in a year is not 365.25 (365 days 6 hours) as assumed by the Julian calendar but slightly less (c. 365.242 days). The Julian calendar therefore has too many leap years. The consequence was that the basis for the calculation of the date of Easter, as decided in the 4th century, had drifted from reality. The Gregorian calendar reform also dealt with the accumulated difference between these figures, between the years 325 and 1582, by skipping 10 days to set the ecclesiastical date of the equinox to be 21 March, the median date of its occurrence at the time of the First Council of Nicea in 325. Countries that adopted the Gregorian calendar after 1699 needed to skip an additional day for each subsequent new century that the Julian calendar had added since then. When the British Empire did so in 1752, the gap had grown to eleven days;[b] when Russia did so (as its civil calendar) in 1918, thirteen days needed to be skipped.[c] Britain and its colonies or possessions In the Kingdom of Great Britain and its possessions, the Calendar (New Style) Act 1750 introduced two concurrent changes to the calendar. The first, which applied to England, Wales, Ireland and the British colonies, changed the start of the year from 25 March to 1 January, with effect from "the day after 31 December 1751".[d] (Scotland had already made this aspect of the changes, on 1 January 1600.) The second (in effect[e]) adopted the Gregorian calendar in place of the Julian calendar. Thus "New Style" can refer to the start-of-year adjustment, to the adoption of the Gregorian calendar, or to the combination of the two. It was through their use in the Calendar Act 1750 that the notations "Old Style" and "New Style" came into common usage. When recording British history, it is usual to quote the date as originally recorded at the time of the event, but with the year number adjusted to start on 1 January. The latter adjustment may be needed because the start of the civil calendar year had not always been 1 January and was altered at different times in different countries.[f] From 1155 to 1752, the civil or legal year in England began on 25 March (Lady Day); so for example, the execution of Charles I was recorded at the time in Parliament as happening on 30 January 1648 (Old Style). 
In newer English-language texts, this date is usually shown as "30 January 1649" (New Style). The corresponding date in the Gregorian calendar is 9 February 1649, the date by which his contemporaries in some parts of continental Europe would have recorded his execution. The O.S./N.S. designation is particularly relevant for dates which fall between the start of the "historical year" (1 January) and the legal start date, where different. This was 25 March in England, Wales, Ireland and the colonies until 1752, and until 1600 in Scotland. Thereafter, in both cases, it became 1 January. In Britain, 1 January was celebrated as the New Year festival from as early as the 13th century, despite the recorded (civil) year not incrementing until 25 March,[g] but the "year starting 25th March was called the Civil or Legal Year, although the phrase Old Style was more commonly used". To reduce misunderstandings about the date, it was normal even in semi-official documents such as parish registers to place a statutory new-year heading after 24 March (for example "1661") and another heading from the end of the following December, 1661/62, a form of dual dating to indicate that in the following twelve weeks or so, the year was 1661 Old Style but 1662 New Style. Some more modern sources, often more academic ones (e.g. the History of Parliament) also use the 1661/62 style for the period between 1 January and 24 March for years before the introduction of the New Style calendar in England. Other notations The Gregorian calendar was implemented in Russia on 14 February 1918 by dropping the Julian dates of 1–13 February 1918,[h] pursuant to a Sovnarkom decree signed 24 January 1918 (Julian) by Vladimir Lenin. The decree required that the Julian date was to be written in parentheses after the Gregorian date, until 1 July 1918. It is common in English-language publications to use the familiar Old Style or New Style terms to discuss events and personalities in other countries, especially with reference to the Russian Empire and the very beginning of Soviet Russia. For example, in the article "The October (November) Revolution", the Encyclopædia Britannica uses the format of "25 October (7 November, New Style)" to describe the date of the start of the revolution. The Latin equivalents, which are used in many languages, are, on the one hand, stili veteris (genitive) or stilo vetere (ablative), abbreviated st.v., and meaning "(of/in) old style"; and, on the other, stili novi or stilo novo, abbreviated st.n. and meaning "(of/in) new style". The Latin abbreviations may be capitalised differently by different users, e.g., St.n. or St.N. for stili novi. There are equivalents for these terms in other languages as well, such as the German a.St. ("alter Stil" for O.S.). Transposition of historical event dates and possible date conflicts Usually, the mapping of New Style dates onto Old Style dates with a start-of-year adjustment works well with little confusion for events before the introduction of the Gregorian calendar. For example, the Battle of Agincourt is well known to have been fought on 25 October 1415, which is Saint Crispin's Day. However, for the period between the first introduction of the Gregorian calendar on 15 October 1582 and its introduction in Britain on 14 September 1752, there can be considerable confusion between events in Continental Western Europe and in British domains. Events in Continental Western Europe are usually reported in English-language histories by using the Gregorian calendar. 
For example, the Battle of Blenheim is always given as 13 August 1704. However, confusion occurs when an event involves both. For example, William III of England arrived at Brixham in England on 5 November (Julian calendar), after he had set sail from the Netherlands on 11 November (Gregorian calendar) 1688. The Battle of the Boyne in Ireland took place a few months later on 1 July 1690 (Julian calendar). That maps to 11 July (Gregorian calendar), conveniently close to the Julian date of the subsequent (and more decisive) Battle of Aughrim on 12 July 1691 (Julian). The latter battle was commemorated annually throughout the 18th century on 12 July, following the usual historical convention of commemorating events of that period within Great Britain and Ireland by mapping the Julian date directly onto the modern Gregorian calendar date (as happens, for example, with Guy Fawkes Night on 5 November). The Battle of the Boyne was commemorated with smaller parades on 1 July. However, both events were combined in the late 18th century, and continue to be celebrated as "The Twelfth". Because of the differences, British writers and their correspondents often employed two dates, a practice called dual dating, more or less automatically. Letters concerning diplomacy and international trade thus sometimes bore both Julian and Gregorian dates to prevent confusion. For example, Sir William Boswell wrote to Sir John Coke from The Hague a letter dated "12/22 Dec. 1635". In his biography of John Dee, The Queen's Conjurer, Benjamin Woolley surmises that because Dee fought unsuccessfully for England to embrace the 1583/84 date set for the change, "England remained outside the Gregorian system for a further 170 years, communications during that period customarily carrying two dates". In contrast, Thomas Jefferson, who lived while the British Isles and colonies converted to the Gregorian calendar, instructed that his tombstone bear his date of birth by using the Julian calendar (notated O.S. for Old Style) and his date of death by using the Gregorian calendar. At Jefferson's birth, the difference was eleven days between the Julian and Gregorian calendars and so his birthday of 2 April in the Julian calendar is 13 April in the Gregorian calendar. Similarly, George Washington is now officially reported as having been born on 22 February 1732, rather than on 11 February 1731/32 (Julian calendar). The philosopher Jeremy Bentham, born on 4 February 1747/8 (Julian calendar), in later life celebrated his birthday on 15 February. There is some evidence that the calendar change was not easily accepted. Many British people continued to celebrate their holidays "Old Style" well into the 19th century,[i] a practice that the author Karen Bellenir considered to reveal a deep emotional resistance to calendar reform.
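The day-count arithmetic behind these conversions is compact enough to sketch in code. The snippet below is a rough illustration rather than a general converter: the function names are invented, the secular-difference formula is only valid for years after 1500 and away from the few days around a mismatched leap day, and Python's datetime is used purely as a day counter.

```python
from datetime import date, timedelta

def julian_gregorian_gap(year):
    # Days by which the Gregorian calendar runs ahead of the Julian,
    # valid between century boundaries for years after 1500.
    return year // 100 - year // 400 - 2

def julian_to_gregorian(y, m, d):
    # Write the Julian date with proleptic-Gregorian fields, then shift.
    return date(y, m, d) + timedelta(days=julian_gregorian_gap(y))

# The gaps quoted above:
assert julian_gregorian_gap(1582) == 10   # original reform: 10 days skipped
assert julian_gregorian_gap(1752) == 11   # Britain and colonies: 11 days
assert julian_gregorian_gap(1918) == 13   # Russia: 13 days

# Charles I's execution, 30 January 1648 Old Style (year beginning
# 25 March): historical year 1649, Gregorian 9 February 1649.
print(julian_to_gregorian(1649, 1, 30))   # 1649-02-09

# Washington, born 11 February 1731/32 (Julian):
print(julian_to_gregorian(1732, 2, 11))   # 1732-02-22
```

Both printed results match the conversions given in the text above.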
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chief_Justice_of_the_United_States] | [TOKENS: 2421]
Contents Chief Justice of the United States The chief justice of the United States is the chief judge of the Supreme Court of the United States and is the highest-ranking officer of the federal judiciary. Article II, Section 2, Clause 2 of the Constitution grants plenary power to the president to nominate, and, with the advice and consent of the Senate, appoint "Judges of the Supreme Court", who serve until they die, resign, retire, or are impeached and convicted. The existence of a chief justice is only explicit in Article I, Section 3, Clause 6, which states that the chief justice shall preside over the impeachment trial of the president; this has occurred three times: for Andrew Johnson, for Bill Clinton, and for Donald Trump's first impeachment. The chief justice has significant influence in the selection of cases for review, presides when oral arguments are held, and leads the discussion of cases among the justices. Additionally, when the court renders an opinion, the chief justice, if in the majority, chooses who writes the court's opinion; however, when deciding a case, the chief justice's vote counts no more than that of any other justice. While nowhere mandated, the presidential oath of office is by tradition administered by the chief justice. The chief justice serves as a spokesperson for the federal government's judicial branch and acts as a chief administrative officer for the federal courts. The chief justice presides over the Judicial Conference and, in that capacity, appoints the director and deputy director of the Administrative Office. The chief justice is an ex officio member of the Board of Regents of the Smithsonian Institution and, by custom, is elected chancellor of the board. Since the Supreme Court was established in 1789, 17 people have served as chief justice, beginning with John Jay (1789–1795). The current chief justice is John Roberts (since 2005). Five of the 17 chief justices—John Rutledge, Edward Douglass White, Charles Evans Hughes, Harlan Fiske Stone, and William Rehnquist—served as associate justices before becoming chief justice. Additionally, Chief Justice William Howard Taft had previously served as president. Origin, title and appointment The United States Constitution does not explicitly establish an office of the chief justice but presupposes its existence with a single reference in Article I, Section 3, Clause 6: "When the President of the United States is tried, the Chief Justice shall preside." Nothing more is said in the Constitution regarding the office. Article III, Section 1, which authorizes the establishment of the Supreme Court, refers to all members of the court simply as "judges". The Judiciary Act of 1789 created the distinctive titles of Chief Justice of the Supreme Court of the United States and Associate Justice of the Supreme Court of the United States. In 1866, Salmon P. Chase assumed the title of Chief Justice of the United States, and Congress began using the new title in subsequent legislation. The first person whose Supreme Court commission contained the modified title was Melville Fuller in 1888. The associate justice title was not altered in 1866 and remains as originally created. The chief justice, like all federal judges, is nominated by the president and confirmed to office by the U.S. Senate. Article III, Section 1 of the Constitution specifies that they "shall hold their Offices during good Behavior." 
This language has been interpreted to mean that judicial appointments are effectively for life and that once in office, a justice's tenure ends only when the justice dies, retires, resigns, or is removed from office through the impeachment process. Since 1789, 15 presidents have made a total of 22 official nominations to the position of chief justice. The salary of the chief justice is set by Congress; as of 2024, the annual salary is $312,200, which is slightly higher than that of associate justices, which is $298,500. The practice of appointing an individual to serve as chief justice is grounded in tradition; while the Constitution mandates that there be a chief justice, it is silent on the subject of how one is chosen and by whom. There is no specific constitutional prohibition against using another method to select the chief justice from among those justices properly appointed and confirmed to the Supreme Court. Three incumbent associate justices have been nominated by the president and confirmed by the Senate as chief justice: Edward Douglass White in 1910, Harlan Fiske Stone in 1941, and William Rehnquist in 1986. A fourth, Abe Fortas, was nominated to the position in 1968 but was not confirmed. As an associate justice does not have to resign their seat on the court in order to be nominated as chief justice, Fortas remained an associate justice. Similarly, when Associate Justice William Cushing was nominated and confirmed as chief justice in January 1796 but declined the office, he too remained on the court. Two former associate justices subsequently returned to service on the court as chief justice. John Rutledge was the first. President Washington gave him a recess appointment in 1795. However, his subsequent nomination to the office was not confirmed by the Senate, and he left office and the court. In 1930, former associate justice Charles Evans Hughes was confirmed as chief justice. Additionally, in December 1800, former chief justice John Jay was nominated and confirmed to the position a second time but ultimately declined it, opening the way for the appointment of John Marshall. Powers and duties Along with their general responsibilities as a member of the Supreme Court, the chief justice has several unique duties to fulfill. Article I, Section 3 of the U.S. Constitution stipulates that the chief justice shall preside over the Senate trial of an impeached president of the United States. Three chief justices have presided over presidential impeachment trials: Salmon P. Chase (1868 trial of Andrew Johnson), William Rehnquist (1999 trial of Bill Clinton), and John Roberts (2020 trial of Donald Trump). Roberts declined to preside over Trump's second trial in 2021, which took place after the end of Trump's presidency; Senate president pro tempore Patrick Leahy presided instead. All three presidents were acquitted in the Senate. Although the Constitution is silent on the matter, the chief justice would, under Senate rules adopted in 1999 before the Clinton trial, preside over the trial of an impeached vice president. This rule was established to preclude the possibility of a vice president presiding over their own trial. Many of the court's procedures and inner workings are governed by the rules of protocol based on the seniority of the justices. The chief justice always ranks first in the order of precedence—regardless of the length of the officeholder's service (even if shorter than that of one or more associate justices). 
This elevated status has enabled successive chief justices to define and refine both the court's culture and its judicial priorities. The chief justice sets the agenda for the weekly meetings where the justices review the petitions for certiorari, to decide whether to hear or deny each case. The Supreme Court agrees to hear less than one percent of the cases petitioned to it. While associate justices may append items to the weekly agenda, in practice this initial agenda-setting power of the chief justice has significant influence over the direction of the court. Nonetheless, a chief justice's influence may be limited by circumstances and the associate justices' understanding of legal principles; it is definitely limited by the fact that they have only a single vote of the nine on the decision whether to grant or deny certiorari. Despite the chief justice's elevated stature, their vote carries the same legal weight as the vote of each associate justice. Additionally, they have no legal authority to overrule the verdicts or interpretations of the other eight justices or tamper with them. The task of assigning who shall write the opinion for the majority falls to the most senior justice in the majority. Thus, when the chief justice is in the majority, they always assign the opinion. Early in his tenure, Chief Justice John Marshall insisted upon holdings which the justices could unanimously back as a means to establish and build the court's national prestige. In doing so, Marshall would often write the opinions himself and actively discouraged dissenting opinions. Associate Justice William Johnson eventually persuaded Marshall and the rest of the court to adopt its present practice: one justice writes an opinion for the majority, and the rest are free to write their own separate opinions or not, whether concurring or dissenting. The chief justice's formal prerogative—when in the majority—to assign which justice will write the court's opinion is perhaps their most influential power, as this enables them to influence the historical record. They may assign this task to the individual justice best able to hold together a fragile coalition, to an ideologically amenable colleague, or to themselves. Opinion authors can have a large influence on the content of an opinion; two justices in the same majority, given the opportunity, might write very different majority opinions. A chief justice who knows the associate justices well can therefore do much—by the simple act of selecting the justice who writes the opinion of the court—to affect the general character or tone of an opinion, which in turn can affect the interpretation of that opinion in cases before lower courts in the years to come. The chief justice chairs the conferences where cases are discussed and tentatively voted on by the justices. They normally speak first and so have influence in framing the discussion. Although the chief justice votes first—the court votes in order of seniority—they may strategically pass in order to ensure membership in the majority if desired. It is reported that Chief Justice Warren Burger was renowned, and even vilified in some quarters, for voting strategically during conference discussions on the Supreme Court in order to control the Court's agenda through opinion assignment. Indeed, Burger is said to have often changed votes to join the majority coalition, cast "phony votes" by voting against his preferred position, and declined to express a position at conference. 
The chief justice has traditionally administered the presidential oath of office to new U.S. presidents. This is merely custom, and is not a constitutional responsibility of the chief justice. The Constitution does not require that the presidential oath be administered by anyone in particular, simply that it be taken by the president. Law empowers any federal or state judge, as well as notaries public, to administer oaths and affirmations. The chief justice ordinarily administers the oath of office to newly appointed and confirmed associate justices, whereas the seniormost associate justice will normally swear in a new chief justice. If the chief justice is ill or incapacitated, the oath is usually administered by the seniormost member of the Supreme Court. Eight times, someone other than the chief justice of the United States administered the oath of office to the president. Since the tenure of William Howard Taft, the office of chief justice has moved beyond just first among equals. Unlike Senators and Representatives, who are constitutionally prohibited from holding any other "office of trust or profit" of the United States or of any state while holding their congressional seats, the chief justice and the other members of the federal judiciary are not barred from serving in other positions. John Jay served as a diplomat to negotiate the Jay Treaty, and Earl Warren chaired the President's Commission on the Assassination of President Kennedy. Under 28 U.S.C. § 3, when the chief justice is unable to discharge their functions, or when that office is vacant, the chief justice's duties are carried out by the senior associate justice until the disability or vacancy ends. When William Rehnquist was ill in 2004, John Paul Stevens acted in his stead, presiding over oral arguments. Currently, Clarence Thomas is the senior associate justice. List of chief justices Since the Supreme Court was established in 1789, 17 men have served as chief justice, beginning with John Jay.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Stellar_classification#Class_M] | [TOKENS: 8228]
Contents Stellar classification In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O-type) to the coolest (M-type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with three classes for other stars that do not fit in the classical system: W, S and C. Some stellar remnants or objects of deviating mass have also been assigned letters: D for white dwarfs and L, T and Y for brown dwarfs (and exoplanets). In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd (or VI) for subdwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K. Conventional colour description The conventional colour description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colours combined appear white, the actual apparent colours the human eye would observe are far lighter than the conventional colour descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colours within the spectrum can be misleading. Excluding colour-contrast effects in dim light, in typical viewing conditions there are no green, cyan, indigo, or violet stars. "Yellow" dwarfs such as the Sun are white, "red" dwarfs are a deep shade of yellow/orange, and "brown" dwarfs do not literally appear brown, but hypothetically would appear dim red or grey/black to a nearby observer. Modern classification The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class (from the older Harvard spectral classification, which did not include luminosity) and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems, such as the UBV system, are based on color indices—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual). 
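A full MK type such as G2V (letter class, numeric subtype, luminosity class) is regular enough to parse mechanically. The sketch below is illustrative only: the helper names are invented, the regular expression covers plain types without peculiarity suffixes, and the temperature spans are rough textbook values for main-sequence stars rather than figures from this article.

```python
import re

# Approximate main-sequence temperature spans in kelvin (rough values).
SPANS = {"O": (30_000, 50_000), "B": (10_000, 30_000), "A": (7_500, 10_000),
         "F": (6_000, 7_500), "G": (5_200, 6_000), "K": (3_700, 5_200),
         "M": (2_400, 3_700)}

def parse_mk(spectral_type):
    # Split e.g. "G2V" into letter class, numeric subtype, luminosity class.
    m = re.fullmatch(r"([OBAFGKM])(\d(?:\.\d)?)?([IV]+[ab]?|0|sd|D)?",
                     spectral_type)
    if not m:
        raise ValueError(f"unrecognised type: {spectral_type!r}")
    letter, sub, lum = m.groups()
    return letter, float(sub) if sub else None, lum

def rough_temperature(spectral_type):
    # Interpolate within the class: subtype 0 is hottest, 9 is coolest.
    letter, sub, _ = parse_mk(spectral_type)
    lo, hi = SPANS[letter]
    return hi - (sub or 0.0) / 10 * (hi - lo)

print(parse_mk("G2V"))            # ('G', 2.0, 'V')
print(rough_temperature("G2V"))   # 5840.0
print(rough_temperature("O9.7"))  # 30600.0, near the O/B boundary
```

Interpolating within class G puts G2 near 5,840 K, consistent with the roughly 5,800 K quoted above for the Sun.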
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified the prior alphabetical system by Draper (see History). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars – in particular, newly-formed white dwarfs – can have surface temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest. The traditional mnemonic for remembering the order of the spectral type letters, from hottest to coolest, is "Oh, Be A Fine Guy/Girl: Kiss Me!". Many alternative mnemonics have been proposed, in contests held by astronomy courses and organizations, but the traditional mnemonic remains the most popular. The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2. The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra. Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals. The Yerkes spectral classification, also called the MK, or Morgan-Keenan (alternatively referred to as the MKK, or Morgan-Keenan-Kellman) system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, which remains in use today. Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. 
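The scale of this gravity difference is easy to quantify. In the sketch below (the constant and solar values are standard; the 100-solar-radius giant is an invented example, not a figure from the article), surface gravity g = GM/R² drops by a factor of ten thousand when the radius grows a hundredfold at fixed mass, which is why giant-star lines show far less pressure broadening.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def surface_gravity(mass_kg, radius_m):
    return G * mass_kg / radius_m ** 2

g_dwarf = surface_gravity(M_SUN, R_SUN)        # ~274 m/s^2
g_giant = surface_gravity(M_SUN, 100 * R_SUN)  # ~0.027 m/s^2
print(g_dwarf / g_giant)                       # 10000.0
```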
Therefore, differences in the spectrum can be interpreted as luminosity effects, and a luminosity class can be assigned purely from examination of the spectrum. A number of different luminosity classes are distinguished. Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications. In these cases, two special symbols are used between the two luminosity classes: a dash indicates an intermediate case, while a slash means that the star is either one class or the other. For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant. Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence). Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs. Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly less luminous than typical may be given a luminosity class of IIIb, while a luminosity class IIIa indicates a star slightly brighter than a typical giant. A sample of extreme V stars with strong absorption in He II λ4686 spectral lines have been given the Vz designation. An example star is HD 93129 B. Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum. For example, 59 Cygni is listed as spectral type B1.5Vnne, indicating a spectrum with the general classification B1.5V, as well as very broad absorption lines and certain emission lines. History The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved. During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra. In the late 1890s, this classification began to be superseded by the Harvard classification, which is discussed in the remainder of this article. The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes. After the death of her husband, Mary Anna Draper began to fund the creation of the Harvard Plate Stacks and the study of these plates at the Harvard College Observatory. The director of the Observatory, Edward C. Pickering, began to hire pioneering female astronomers collectively known as the Harvard Computers. Though they would study many different astronomical subjects, an early result of this work was the first edition of The Henry Draper Memorial Catalogue of Stellar Spectra, first published in 1890. Williamina Fleming classified most of the spectra in the first edition of the catalogue and is credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. With the help of the Harvard Computers, especially Williamina Fleming, the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi. The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. 
Also, the letter Q was used for stars not fitting into any other class. Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which varies from star to star and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines, while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme. In 1897, another astronomer at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I–XXII. Because the 22 Roman-numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences: lowercase letters were added to differentiate the relative appearance of lines in spectra. Antonia Maury published her own stellar classification catalogue in 1897, called "Spectra of Bright Stars Photographed with the 11 inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury's analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication. In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N, used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one fifth of the way from F to G, and so on. Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert the light emanating from stars into a readable spectrum. A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra. Spectral types The stellar classification system is taxonomic, based on type specimens, similar to classification of species in biology: the categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features. Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler. Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2 and K3. "Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9. 
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number. This obscure terminology is a hold-over from a late nineteenth-century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, which is now known not to apply to main-sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism implied an age for the Sun much smaller than what is observed in the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on. O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars.[c] Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult. O-type spectra formerly were defined by the ratio of the strength of the He II λ4541 line relative to that of He I λ4471, where λ is the radiation wavelength. Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42. O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized (Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines, although not as strong as in later types. Higher-mass O-type stars do not retain extensive atmospheres due to the extreme velocity of their stellar wind, which may reach 2,000 km/s. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence. When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced. B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars. The transition from class O to class B was originally defined to be the point at which the He II λ4541 line disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. 
For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid-B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471. These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars.[c] B-type stars are relatively uncommon; the closest is Regulus, at around 80 light-years. Massive yet non-supergiant stars known as Be stars show one or more Balmer lines in emission, the hydrogen emission projected out by these stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate. Objects known as B[e] stars – or B(e) stars for typographic reasons – possess distinctive neutral or low-ionisation emission lines that are considered "forbidden", arising from transitions that violate the usual quantum-mechanical selection rules rather than from processes that are strictly impossible. Example spectral standards: A-type stars are among the more common naked-eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II) at a maximum at A5. Ca II lines are notably strengthening by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars,[c] which includes 9 stars within 15 parsecs. Example spectral standards: F-type stars have strengthening H and K spectral lines of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by weaker hydrogen lines and ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars,[c] including one star, Procyon A, within 20 light-years. Example spectral standards: G-type stars, including the Sun, have prominent H and K spectral lines of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CH molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood; there are 21 G-type stars within 10 pc.[c] Class G contains the "Yellow Evolutionary Void": supergiant stars often swing between O or B (blue) and K or M (red), and while they do this, they do not stay for long in the unstable yellow supergiant class. Example spectral standards: K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood.[c] There are also giant K-type stars, which range from hypergiants like RW Cephei, to giants and supergiants, such as Arcturus, whereas orange dwarfs, like Alpha Centauri B, are main-sequence stars. They have extremely weak hydrogen lines, if present at all, and mostly neutral metals (Mn I, Fe I, Si I).
By late K, molecular bands of titanium oxide become present. Mainstream theories (those rooted in lower harmful radiation and longer stellar lifetimes) would thus suggest that such stars have the optimal chances of heavily evolved life developing on orbiting planets (if such life is directly analogous to Earth's): they offer a broad habitable zone yet much lower harmful emission than the stars with the broadest such zones. Example spectral standards: Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars.[c][f] However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest-known M-class main-sequence star is Lacaille 8760, class M0V, with magnitude 6.7 (the limiting magnitude for typical naked-eye visibility under good conditions being typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found. Although most class M stars are red dwarfs, most of the largest-known supergiant stars in the Milky Way are class M stars, such as VY Canis Majoris, VV Cephei, Antares, and Betelgeuse. Furthermore, some larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5. The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M. Example spectral standards: Extended spectral types A number of new spectral types have come into use for newly discovered types of stars. Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen. Once included as type O stars, the Wolf–Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class WR is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers): the WN, WC, and WO subclasses are dominated by nitrogen, carbon, and oxygen emission respectively. Although the central stars of most planetary nebulae (CSPNe) show O-type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars, and to distinguish them from the massive Wolf–Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN]. The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL"). There is a secondary group found with these spectra, a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars. Magnetic O stars are O stars with strong magnetic fields; their designation is Of?p.
The new spectral types L, T, and Y were created to classify infrared spectra of cool stars. This includes both red dwarfs and brown dwarfs that are very faint in the visible spectrum. Brown dwarfs, stars that do not undergo hydrogen fusion, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and will cool through the L, T, and Y spectral classes, faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between the effective temperatures and luminosities of different L-T-Y types for some masses and ages, no distinct temperature or luminosity values can be given. Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while at the height of its luminous red nova eruption. Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K (277 and 1,027 °C; 530 and 1,880 °F). Their emission peaks in the infrared. Methane is prominent in their spectra. Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than was previously conjectured. It is theorized that these proplyds are in a race with each other: the first one to form becomes a protostar, a very violent object that will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are quite invisible to us. Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE), there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2. The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a >Y2 dwarf with an effective temperature originally estimated around 300 K, roughly the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714, with an approximate temperature of 250 K and a mass just seven times that of Jupiter. The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass (although they cool to become planets), which means that Y-class objects straddle the 13-Jupiter-mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets. Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to the field stars of similar spectral type. These sources are marked by a letter beta (β) for intermediate surface gravity and gamma (γ) for low surface gravity. Indications of low surface gravity are weak CaH, K I and Na I lines, as well as strong VO lines. Alpha (α) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for peculiar; it is used for other unusual features and summarizes different properties indicative of low surface gravity, subdwarfs, and unresolved binaries. The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects. The red suffix describes objects with red color but an older age. This is not interpreted as low surface gravity, but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries; others, such as 2MASS J11263991−5003550, are not binaries and are explained by thin and/or large-grained clouds. Carbon stars are stars whose spectra indicate production of carbon, a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy-element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C. The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars, whose odd atmosphere is suspected of having been transferred from a companion that is now a white dwarf, when the companion was a carbon star. Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid-G to late M. These have more recently been remapped into a unified carbon classifier C, with N0 starting at roughly C6. Another subset of cool carbon stars are the C–J-type stars, which are characterized by the strong presence of molecules of 13CN in addition to those of 12CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses. Class S stars form a continuum between class M stars and carbon stars.
Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and their carbon and oxygen abundances are closer to each other than those of class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars. The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum. The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more recent but less common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5. In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch. The class D (for Degenerate) is the modern classification used for white dwarfs, low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere. The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9 (for example, DA1.5 for IK Pegasi B); a worked example of the index follows below. Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above. A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars.
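As a worked check of that temperature index, the sketch below computes 50400/Teff directly; the temperatures are illustrative round numbers, not measured values for any particular star:

    # The digit(s) after the letters in a white dwarf type are 50400 / Teff,
    # rounded; modern usage permits fractional values.
    def wd_index(teff_kelvin):
        return round(50400 / teff_kelvin, 1)

    print(wd_index(33600))   # 1.5 -> a hydrogen-atmosphere dwarf would be DA1.5
    print(wd_index(10080))   # 5.0 -> DA5

Hotter white dwarfs therefore get smaller index numbers, mirroring the "earlier is hotter" convention of the main spectral classes.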
Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. During their "quiescent" states, they are usually similar to B-type stars, although with unusual spectral lines. During outbursts, they are more similar to F-type stars, with significantly lower temperatures. Many papers treat LBV as its own spectral type. Finally, the classes P and Q are left over from the system developed by Cannon for the Henry Draper Catalogue. They are occasionally used for certain objects not associated with a single star: type P objects are stars within planetary nebulae (typically young white dwarfs or hydrogen-poor M giants); type Q objects are novae. Stellar remnants Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, stellar remnants are difficult to fit into the MK system. The Hertzsprung–Russell diagram, on which the MK system is based, is observational in nature, so these remnants cannot easily be plotted on the diagram, or cannot be placed at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram. A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for more massive neutron stars (possible exotic star candidates) with higher cooling rates. The more massive a neutron star is, the higher the neutrino flux it carries. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions to only around a million kelvin. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes. Replaced spectral classes Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N. Stellar classification, habitability, and the search for life While humans may eventually be able to colonize any kind of stellar habitat, this section will address the probability of life arising around other stars. Stability, luminosity, and lifespan are all factors in stellar habitability. Humans know of only one star that hosts life, the G-class Sun, a star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it only contains one star (see Habitability of binary star systems). Working from these constraints and the problem of having an empirical sample set of only one, the range of stars predicted to be able to support life is limited by a few factors. Of the main-sequence star types, stars more massive than 1.5 times the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of the Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, many astronomers continue to model these systems due to their sheer numbers and longevity.
For these reasons, NASA's Kepler Mission is searching for habitable planets at nearby main-sequence stars that are less massive than spectral type A but more massive than type M, making the most probable stars to host life dwarf stars of types F, G, and K (a toy version of this selection is sketched below).
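The mass cut just described can be expressed as a simple filter. The per-class masses below are illustrative round numbers (in solar units), not catalog values, and the 0.5–1.5 solar-mass window is an assumed approximation of the range implied above:

    # Hypothetical typical masses per spectral class, for illustration only.
    TYPICAL_MASS = {"O": 30.0, "B": 8.0, "A": 1.8, "F": 1.2, "G": 1.0, "K": 0.7, "M": 0.3}

    def habitability_candidates(masses=TYPICAL_MASS, lo=0.5, hi=1.5):
        """Keep classes whose typical mass falls inside the assumed window."""
        return [cls for cls, m in masses.items() if lo <= m <= hi]

    print(habitability_candidates())   # ['F', 'G', 'K']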
See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Keyboard_technology] | [TOKENS: 4207]
Contents Keyboard technology The technology of computer keyboards includes many elements. Many different keyboard technologies have been developed to meet consumer demands or to be optimized for industrial applications. The standard full-size (100%) computer alphanumeric keyboard typically uses 101 to 105 keys; keyboards integrated in laptop computers are typically less comprehensive. Virtual keyboards, which are mostly accessed via a touchscreen interface, have no physical switches and provide artificial audio and haptic feedback instead. This variety of keyboard can prove useful, as it is not limited by the rigid nature of physical computer keyboards. The majority of modern keyboards include a control processor and indicator lights to provide feedback to the user (and to the central processor) about what state the keyboard is in. Plug-and-play technology means that the keyboard's "out of the box" layout can be communicated to the system, making the keyboard immediately ready to use without further configuration, unless the user so desires. This also enables the manufacture of generic keyboards for a variety of language markets that differ only in the symbols engraved on the keytops. Keystroke sensing A common membrane design consists of three layers. The top and bottom layers have exposed electrical matrix traces, and the middle layer is a spacer that prevents current from passing between the top and bottom conductive traces when a key is at rest. When pressure is applied to the top membrane, it bridges the top and bottom conductive contact pads, allowing current to flow. The two most common types of membrane keyboards are full-travel rubber dome over membrane and flat-panel membrane keyboards. Flat-panel membrane keyboards are most often found on appliances like microwave ovens or photocopiers. Full-travel rubber dome over membrane keyboards are the most common keyboard design manufactured today. In these keyboards, a rubber dome sheet is placed above the membranes, ensuring that the domes align with the contact pads. The rubber dome serves a dual purpose: it acts as a tactile return spring and provides a soft surface to transfer force onto the top membrane. To bridge the connection between the two contact pads, the rubber dome must be fully depressed. Rubber dome over membrane keyboards became very popular with computer manufacturers as they sought to reduce costs while PC prices declined. A common, compact variant of rubber dome over membrane is the scissor-switch, based on the scissors mechanism. Because many notebooks must be slim, they require low-profile keyboards, so this technology is most commonly featured on notebooks. The keys are attached to the keyboard via two plastic pieces that interlock in a "scissor"-like fashion and snap to the keyboard and the keycap. These keyboards are generally quiet and the keys require little force to press. Scissor-switch keyboards are typically slightly more expensive. They are harder to clean (due to the limited movement of the keys and their multiple attachment points) but also less likely to get debris in them, as the gaps between the keys are often smaller (there is no need for extra room to allow for the 'wiggle' in the key typically found on a membrane keyboard). Flat-panel membrane keyboards are often used in harsh environments where water- or leak-proofing is desirable. They can have non-tactile, polydome tactile, and metal dome tactile keys.
Polydome tactile membrane switches use polyester (PET) formed into a stiff plastic dome. When the stiff polydome is pressed, the conductive ink on the back of the polydome connects with the bottom layer of the circuit. Metal dome membrane switches are made of stainless steel, offer enhanced durability and reliability, and can feature custom dome designs. Non-tactile flat-panel membrane keyboards have little to no keypress feel and often issue a beep or flash of light on actuation. Although this keyboard design was commonly used in the early days of the personal computer (on the Sinclair ZX80, ZX81, and Atari 400), it has been supplanted by more responsive and modern designs. Computer keyboards made of flexible silicone or polyurethane materials can roll up in a bundle. This type of keyboard can take advantage of thin flexible plastic membranes, but still poses the risk of damage. When they are completely sealed in rubber, they are water resistant. Roll-up keyboards provide relatively little tactile feedback, and because they are typically made of silicone, they unfavorably tend to attract dirt, dust, and hair. Keyboards which have metal contact switches typically use discrete modules for each key. This type of switch is usually composed of a housing, a spring, and a slider, and sometimes other parts such as a separate tactile leaf or clickbar. At rest, the metal contacts inside the switch are held apart. As the switch is pressed down, the contacts are pushed together to conduct current for actuation. Many switch designs use gold as the contact material to prolong the lifetime of the switch by preventing failure from oxidation. Most designs use a metal leaf, where the movable contact is a leaf spring. A major producer of discrete metal contact switches is Cherry, which has manufactured the Cherry MX family of switches since the 1980s. Cherry's color-coding system of categorizing switches has been imitated by other switch manufacturers, such as Gateron and Kailh among many others. Keyboards which utilize this technology are commonly referred to as "mechanical keyboards", although there is no universally agreed-upon, clear-cut definition of the term. Since the mid-2000s, mechanical keyboards have again become popular with gamers and professionals. Hot-swappable keyboards are keyboards in which switches can be pulled out and replaced without the typical solder connection. Instead of the switch pins being soldered directly to the keyboard's PCB, hot-swap sockets are soldered on, allowing users to exchange switches without the tools or knowledge required for soldering. The reed module in a reed switch consists of two metal contacts inside a glass capsule, usually sealed with an inert gas such as nitrogen to help prevent particle build-up. The slider in the housing pushes a magnet down in front of the reed capsule, and the magnetic field causes the reed contacts to attract each other and make contact. The reed switch mechanism was originally invented in 1936 by W. B. Ellwood at Bell Telephone Laboratories. Although reed switches use metal leaf contacts, they are considered separate from all other forms of metal contact switch because the contacts are operated magnetically instead of being pressed together by physical force from a slider. In a capacitive mechanism, pressing a key changes the capacitance of a pattern of capacitor pads.
The pattern consists of two D-shaped capacitor pads for each switch, printed on a printed circuit board (PCB) and covered by a thin, insulating film of soldermask which acts as a dielectric. For the most common, foam-and-foil implementation of this technology, the movable part ends with a flat foam element about the size of an aspirin tablet, finished with aluminum foil. When the key is pressed, the foil tightly clings to the surface of the PCB, forming a series pair of capacitors between the contact pads, separated from them only by the thin soldermask, and thus "shorting" the contact pads with an easily detectable drop in capacitive reactance between them. Usually, this permits a pulse or pulse train to be sensed. An advantage of capacitive technology is that the switch does not depend on current flowing through metal contacts to actuate, so no debouncing is necessary. The sensor reveals enough about the distance of the keypress to allow the user to adjust the actuation point (key sensitivity). This adjustment can be done with the help of the bundled software, and individually for each key, if so implemented; a sketch of the idea follows below. A keyboard which offers these abilities is the RealForce RGB. IBM's Model F keyboard used a buckling spring over a capacitive PCB; the later Model M keyboard was similar but used membrane sensing in place of the capacitive PCB. The Topre Corporation switch design uses a conical spring below a rubber dome: the dome provides resistance, while the spring provides the capacitive action. Hall effect keyboards use Hall effect sensors to detect the movement of a magnet by the voltage change it produces in the sensor. When a key is depressed, it moves a magnet that is detected by a solid-state sensor. Because they require no physical contact for actuation, Hall-effect keyboards are extremely reliable and can accept millions of keystrokes before failing. They are used for ultra-high-reliability applications such as nuclear power plants, aircraft cockpits, and critical industrial environments. They can easily be made totally waterproof, and can resist large amounts of dust and contaminants. Because a magnet and sensor are required for each key, as well as custom control electronics, they are expensive to manufacture. A Hall switch works through magnetic fields: every switch has a small magnet fixed inside it. When electricity passes through the main circuit, it creates a magnetic flux, and every time a key is pressed, the magnetic intensity changes. This change is noticed by the circuit, and the sensors send the information to the motherboard.
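Both the capacitive and Hall-effect designs sense key travel as an analog quantity, which is what makes a per-key actuation point possible. The sketch below is a minimal illustration of that idea, not any vendor's firmware; the class name, the millimeter values, and the hysteresis band are all assumptions for the example:

    # Minimal sketch of an adjustable actuation point for an analog switch
    # (capacitive or Hall effect). Hysteresis stops the key from chattering
    # when its travel hovers right at the threshold.
    class AnalogKey:
        def __init__(self, actuation_mm=2.0, hysteresis_mm=0.3):
            self.actuation = actuation_mm    # user-adjustable, per key
            self.hysteresis = hysteresis_mm
            self.pressed = False

        def update(self, travel_mm):
            """Feed the sensed travel; return True while the key counts as down."""
            if not self.pressed and travel_mm >= self.actuation:
                self.pressed = True
            elif self.pressed and travel_mm <= self.actuation - self.hysteresis:
                self.pressed = False
            return self.pressed

    key = AnalogKey(actuation_mm=1.5)        # a light, hair-trigger setting
    for travel in (0.0, 1.2, 1.6, 1.45, 1.1):
        print(travel, key.update(travel))    # registers at 1.6, releases at 1.1

Lowering actuation_mm makes the key register earlier in its travel; the hysteresis band is something a fixed metal-contact switch cannot offer, since its make/break point is set by the mechanical parts.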
Optical switch technology was introduced in 1962 by Harley E. Kelchner for use in a typewriter, with the purpose of reducing the noise generated by typewriter keys. An optical keyboard technology utilizes light-emitting devices and photo sensors to optically detect actuated keys, offering faster response times and eliminating the need for physical contact between moving parts. Most commonly the emitters and sensors are located at the perimeter, mounted on a small PCB. The light is directed from side to side of the keyboard interior, and it can only be blocked by the actuated keys. Most optical keyboards require at least two beams (most commonly a vertical beam and a horizontal beam) to determine the actuated key. Some optical keyboards use a special key structure that blocks the light in a certain pattern, allowing only one beam per row of keys (most commonly a horizontal beam). The mechanism of the optical keyboard is very simple: a light beam is sent from the emitter to the receiving sensor, and the actuated key blocks, reflects, refracts or otherwise interacts with the beam, resulting in an identified key. A major advantage of optical switch technology is that it is very resistant to moisture, dust, and debris, because there are no metal contacts that can corrode. The specialist DataHand keyboard uses optical technology to sense keypresses with a single light beam and sensor per key. The keys are held in their rest position by magnets; when the magnetic force is overcome to press a key, the optical path is unblocked and the keypress is registered. A laser projection device approximately the size of a computer mouse projects the outline of keyboard keys onto a flat surface, such as a table or desk. This type of keyboard is portable enough to be easily used with PDAs and cellphones, and many models have retractable cords and wireless capabilities. However, this design is prone to error, as accidental disruption of the laser will generate unwanted keystrokes, and its inherent lack of tactile feedback often makes it undesirable. Notable switch mechanisms The buckling spring mechanism (covered by the now-expired U.S. patent 4,118,611) atop the switch is responsible for the characteristic clicky response of the keyboard. This mechanism controls a small hammer that strikes a capacitive or membrane switch. IBM's Model F keyboard series was the first to employ buckling-spring key-switches, which used capacitive sensing to actuate. The design in the original patent was never employed in an actual production keyboard, but it establishes the basic premise of a buckling spring. The IBM Model M is a large family of computer keyboards created by IBM, beginning in late 1983 when IBM patented a membrane buckling-spring key-switch design. The main intent of this design was to halve the production cost of the Model F. The best-known full-size Model M is known officially as the IBM Enhanced Keyboard. In 1993, two years after spawning Lexmark, IBM transferred its keyboard operations to the daughter company. New Model M keyboards continued to be manufactured for IBM by Lexmark until 1996, when Unicomp was established and purchased the keyboard patents and tooling equipment to continue their production. IBM continued to make Model M's in its Scotland factory until 1999. Debouncing When a key is pressed, it oscillates (bounces) against its contacts several times before settling. When released, it oscillates again until it comes to rest. Although this happens on a scale too small to be noticeable, it can be enough to register multiple keystrokes. To resolve this, the processor in a keyboard debounces the keystrokes, by averaging the signal over time to produce one "confirmed" keystroke that (usually) corresponds to a single press or release; a simple sketch follows below. Early membrane keyboards had limited typing speed because they had to do significant debouncing. This was a noticeable problem on the ZX81.
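A minimal debouncing sketch follows. It uses consecutive-sample confirmation, one common variant of the time-averaging idea described above; the sample list and the settle count of five are assumptions for the example:

    # Require several identical consecutive samples before the reported
    # key state changes, filtering out the brief mechanical bounce.
    def debounce(samples, settle=5):
        """Yield the debounced state for each raw sample (True = contact closed)."""
        state, candidate, run = False, False, 0
        for raw in samples:
            if raw == candidate:
                run += 1
            else:
                candidate, run = raw, 1
            if candidate != state and run >= settle:
                state = candidate
            yield state

    raw = [0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]  # bouncy press, bouncy release
    print([int(s) for s in debounce(bool(b) for b in raw)])
    # -> [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]: one clean press and release

A larger settle count rejects longer bounce at the cost of added latency, which is why heavy debouncing limited typing speed on early membrane keyboards.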
Keycaps Keycaps are used on full-travel keyboards. While modern keycaps are typically surface-printed, they can also be double-shot molded, laser marked, dye-sublimation printed, engraved, or made of transparent material with printed paper inserts. There are also keycaps which utilize thin shells placed over key bases; these were used on several IBM PC keyboards. A more modern but rare innovation is the OLED key, which, on some specialty devices like stream-controller keypads, features a tiny OLED display under each keycap that can be customized in software to display any icon or glyph the user wishes. Due to their high cost, most of these devices are not full keyboards for typing but are meant for shortcuts for launching applications or macro commands for other programs. Switches allow for the removal and replacement of keycaps with a common stem type. Stabilizers Almost all keyboards which utilize keys two or more units in length (such as the typical space bar or enter key) use stabilizers to ensure consistent movement and prevent key wobble during typing. Various lubricants and padding techniques can be used to reduce the rattle and ticking of components. Other parts A modern PC keyboard typically includes a control processor and indicator lights to provide feedback to the user about what state the keyboard is in. Depending on the sophistication of the controller's programming, the keyboard may also offer other special features. The processor is usually a single-chip 8048 microcontroller variant. The keyboard switch matrix is wired to its inputs, and it processes the incoming keystrokes and sends the results down a serial cable (the keyboard cord) to a receiver in the main computer box. It also controls the illumination of the "caps lock", "num lock" and "scroll lock" lights. A common test for whether the computer has crashed is pressing the "caps lock" key. The keyboard sends the key code to the keyboard driver running in the main computer; if the main computer is operating, it commands the light to turn on. All the other indicator lights work in a similar way. The keyboard driver also tracks the shift, alt and control state of the keyboard. Keyboard switch matrix The keyboard switch matrix is often drawn with horizontal wires and vertical wires in a grid, which is called a matrix circuit. It has a switch at some or all intersections, much like a multiplexed display. Almost all keyboards have only the switch (but no diode) at each intersection, which causes "ghost keys" and "key jamming" when multiple keys are pressed (rollover); the sketch below shows the effect. Certain, often more expensive, keyboards have a diode between each intersection, allowing the keyboard microcontroller to accurately sense any number of simultaneous keys being pressed without generating erroneous ghost keys.
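Ghosting can be demonstrated with a toy simulation. The grid size and key positions below are arbitrary choices for the example; the point is that, without diodes, three pressed corners of a rectangle are electrically indistinguishable from four:

    # Toy model of a diode-less switch matrix. A key (r, c) *reads* as pressed
    # if current can sneak from row r to column c through pressed switches.
    def scan(pressed, n_rows, n_cols):
        seen = set(pressed)
        for r in range(n_rows):
            for c in range(n_cols):
                # If some pressed key (r2, c2) forms a rectangle with pressed
                # keys (r, c2) and (r2, c), the corner (r, c) appears pressed.
                for (r2, c2) in pressed:
                    if (r, c2) in pressed and (r2, c) in pressed:
                        seen.add((r, c))
        return seen

    held = {(0, 0), (0, 1), (1, 0)}     # three corners of a rectangle
    print(sorted(scan(held, 2, 2)))     # [(0, 0), (0, 1), (1, 0), (1, 1)]
    # (1, 1) is a "ghost": it reads as pressed although it was never touched.

A diode in series with each switch blocks the sneak path, which is why diode-per-key matrices can report any number of simultaneous presses.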
Alternative text-entering methods Optical character recognition (OCR) is preferable to rekeying for converting existing text that is already written down but not in machine-readable format (for example, a Linotype-composed book from the 1940s). In other words, to convert the text from an image to editable text (that is, a string of character codes), a person could re-type it, or a computer could look at the image and deduce what each character is. OCR technology has already reached an impressive state (for example, Google Book Search) and promises more for the future. Speech recognition converts speech into machine-readable text (that is, a string of character codes). This technology has also reached an advanced state and is implemented in various software products. For certain uses (e.g., transcription of medical or legal dictation; journalism; writing essays or novels) speech recognition is starting to replace the keyboard. However, the lack of privacy when issuing voice commands and dictation makes this kind of input unsuitable for many environments. Pointing devices can be used to enter text or characters in contexts where using a physical keyboard would be inappropriate or impossible. These accessories typically present characters on a display, in a layout that provides fast access to the more frequently used characters or character combinations. Popular examples of this kind of input are Graffiti, Dasher and on-screen virtual keyboards. Other issues Unencrypted Bluetooth keyboards are known to be vulnerable to signal theft for keylogging by other Bluetooth devices in range. Microsoft wireless keyboards from 2011 and earlier are documented to have this vulnerability. Keystroke logging (often called keylogging) is a method of capturing and recording user keystrokes. While it can be used legally to measure employee activity, or by law enforcement agencies to investigate suspicious activities, it is also used by hackers for illegal or malicious acts. Hackers use keyloggers to obtain passwords or encryption keys. Keystroke logging can be achieved by both hardware and software means. Hardware key loggers are attached to the keyboard cable or installed inside standard keyboards. Software keyloggers work on the target computer's operating system and gain unauthorized access to the hardware, hook into the keyboard with functions provided by the OS, or use remote-access software to transmit recorded data out of the target computer to a remote location. Some hackers also use wireless keylogger sniffers to collect packets of data being transferred from a wireless keyboard and its receiver, and then they crack the encryption key being used to secure wireless communications between the two devices. Anti-spyware applications are able to detect many keyloggers and remove them. Responsible vendors of monitoring software support detection by anti-spyware programs, thus preventing abuse of the software. Enabling a firewall does not stop keyloggers per se, but can possibly prevent transmission of the logged material over the net if properly configured. Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with his or her typed information. Automatic form-filling programs can prevent keylogging entirely by not using the keyboard at all. Most keyloggers can be fooled by alternating between typing the login credentials and typing characters somewhere else in the focus window. Keyboards are also known to emit electromagnetic signatures that can be detected using special spying equipment to reconstruct the keys pressed on the keyboard. Neal O'Farrell, executive director of the Identity Theft Council, told InformationWeek: "More than 25 years ago, a couple of former spooks showed me how they could capture a user's ATM PIN, from a van parked across the street, simply by capturing and decoding the electromagnetic signals generated by every keystroke. They could even capture keystrokes from computers in nearby offices, but the technology wasn't sophisticated enough to focus in on any specific computer." The use of a keyboard may cause serious injury (such as carpal tunnel syndrome or other repetitive strain injuries) to the hands, wrists, arms, neck or back.
The risks of injuries can be reduced by taking frequent short breaks to get up and walk around a couple of times every hour. Users should also vary tasks throughout the day, to avoid overuse of the hands and wrists. When typing on a keyboard, a person should keep the shoulders relaxed with the elbows at the side, with the keyboard and mouse positioned so that reaching is not necessary. The chair height and keyboard tray should be adjusted so that the wrists are straight, and the wrists should not be rested on sharp table edges. Wrist or palm rests should not be used while typing. Some adaptive technology, ranging from special keyboards, mouse replacements and pen tablet interfaces to speech recognition software, can reduce the risk of injury. Pause software reminds the user to pause frequently. Switching to a much more ergonomic mouse, such as a vertical mouse or joystick mouse, may provide relief. By using a touchpad or a stylus pen with a graphic tablet, in place of a mouse, one can lessen the repetitive strain on the arms and hands. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mission_statement] | [TOKENS: 1337]
Contents Mission statement A mission statement is a short statement of why an organization exists and what its overall goal is: what kind of product or service it provides, its primary customers or market, and its geographical region of operation. It may include a short statement of such fundamental matters as the organization's values or philosophies, a business's main competitive advantages, or a desired future state (the "vision"). Historically it is associated with Christian religious groups; indeed, for many years, a missionary was assumed to be a person on a specifically religious mission. The word "mission" dates from 1598, originally of Jesuits sending ("missio", Latin for "act of sending") members abroad. A mission statement is not simply a description of an organization by an external party, but an expression, made by an organization's leaders, of their desires and intent for the organization. A mission statement aims to communicate the organisation's purpose and direction to its employees, customers, vendors, and other stakeholders. A mission statement also creates a sense of identity for employees. Organizations normally do not change their mission statements over time, since they define their continuous, ongoing purpose and focus. According to Chris Bart, professor of strategy and governance at McMaster University, a commercial mission statement consists of three essential components: the key market (who the organization serves), the contribution (what product or service it provides), and the distinction (what makes that product or service unique). Bart estimates that in practice, only about ten percent of mission statements say something meaningful. For this reason, such statements are widely regarded with contempt. Purpose Although the notion of business purpose may transcend that of a mission statement, the sole purpose of a commercial mission statement is to summarize a company's main goal or agenda: it outlines in brief terms what the company aims to achieve. Some generic examples of mission statements would be, "To provide the best service possible within the banking sector for our customers." or "To provide the best experience for all of our customers." The reason why businesses make use of mission statements is to make it clear what they look to achieve as an organization, not only to themselves and their employees but to the customers and other people who are a part of the business, such as shareholders. As a company evolves, so will its mission statement, to make sure that the company remains on track and to ensure that the mission statement does not lose its touch and become boring or stale. It is important that a mission statement is not confused with a vision statement. As discussed earlier, the main purpose of a mission statement is to get across the ambitions of an organisation in a short and simple fashion; it is not necessary to go into detail, as is evident in the examples given. The two should not be confused because they serve different purposes: vision statements tend to be more related to strategic planning and lean more towards discussing where a company aims to be in the future. Religious mission statements are less explicit about key market, contribution and distinction, but clearly describe the organization's purpose.
For example: "Peoples Church is called to proclaim the Gospel of Christ and the beliefs of the evangelical Christian faith, to maintain the worship of God, and to inspire in all persons a love for Christ, a passion for righteousness, and a consciousness of their duties to God and their fellow human beings. We pledge our lives to Christ and covenant with each other to demonstrate His Spirit through worship, witnessing, and ministry to the needs of the people of this church and the community." Advantages Provides direction: Mission statements are a way to direct a business into the right path. They play a part in helping the business make better decisions which can be beneficial to them. Without the mission statement providing direction, businesses may struggle when it comes to making decisions and planning for the future. This is why providing direction could be considered one of the most advantageous points of a mission statement. Clear purpose: Having a clear purpose can remove any potential ambiguities that may surround the existence of a business. People who are interested in the progression of the business, such as stakeholders, will want to know that the business is making the right choices and progressing more towards achieving their goals, which will help to remove any doubt the stakeholders may have in the business. A mission statement can act as a motivational tool within an organisation, and it can allow employees to all work towards one common goal that benefits both the organisation and themselves. This can help with factors such as employee satisfaction and productivity. It is important that employees feel a sense of purpose. Giving them this sense of purpose will allow them to focus more on their daily tasks and help them realise the goals of the organisation and their role. Disadvantages Although it is mostly beneficial for a business to craft a good mission statement, there are some situations where a mission statement can be considered pointless or not useful to a business. Unrealistic: In some cases, mission statements can be too optimistic, sapping the performance and morale of the employees. Inability to meet too high a standard could demotivate employees in the long term. Unrealistic mission statements also serve no purpose and can be considered a waste of management's time. Poor decisions could be made in an attempt to achieve unrealistic goals, which have the potential to harm the business, and waste of both time and resources, which could be better spent on much more important tasks within the organisation such as decision-making for the business. Designing a Statement According to an independent contributor to Forbes, the following questions must be answered in the mission statement: When designing a mission statement, it should be very clear to the audience what the purpose of it is. It is ideal for a business to be able to communicate their mission, goals and objectives to the reader without including any unnecessary information through the mission statement. Richard Branson has commented on ways of crafting a good mission statement; he explains the importance of having a mission statement that is clear and straight to the point and does not contain unnecessary baffling. He went on to analyse a mission statement, using Yahoo's mission statement at the time (2013) as an example. In his evaluation of the mission statement, he seemed to suggest that while the statement sounded interesting, most people would not be able to understand the message it is putting across. 
In other words, the message of the mission statement potentially meant nothing to the audience. This further backs up the idea that a good mission statement is one that is clear, answers the right questions in a simple manner, and does not overcomplicate things. An example of a good mission statement would be Google's, which is "to organise the world's information and make it universally accessible and useful." See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Systema_Naturae] | [TOKENS: 1950]
Contents Systema Naturae Systema Naturae (originally in Latin written Systema Naturæ with the ligature æ) is one of the major works of the Swedish botanist, zoologist, and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers, Gaspard and Johann, Linnaeus was the first to use it consistently throughout his book. The first edition was published in 1735. The full title of the 10th edition (1758), which was the most important one, was Systema naturæ per regna tria naturæ, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis, which appeared in English in 1806 with the title: "A General System of Nature, Through the Three Grand Kingdoms of Animals, Vegetables, and Minerals, Systematically Divided Into their Several Classes, Orders, Genera, Species, and Varieties, with their Habitations, Manners, Economy, Structure and Peculiarities". The tenth edition of this book (1758), published in Stockholm, is considered the starting point of zoological nomenclature. In 1766–1768 Linnaeus published the much-enhanced 12th edition, the last under his authorship. A further enhanced work in the same style, also titled "Systema Naturae", was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series. Overview Linnaeus (later known as "Carl von Linné", after his ennoblement in 1761) published the first edition of Systema Naturae in 1735, during his stay in the Netherlands. As was customary for the scientific literature of its day, the book was published in Latin. In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom (regnum animale), the plant kingdom (regnum vegetabile), and the "mineral kingdom" (regnum lapideum). Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. According to the historian of botany William T. Stearn, "Even in 1753, he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career, he named about 7,700 species of flowering plants." Linnaeus developed his classification of the plant kingdom in an attempt to describe and understand the natural world as a reflection of the logic of God's creation. His sexual system, where species with the same number of stamens were treated in the same group, was convenient, but in his view artificial. Linnaeus believed in God's creation and that no deeper relationships were to be expressed. The classification of animals was more natural than that of plants. For instance, humans were for the first time placed together with other primates, as Anthropomorpha. They were also divided into four varieties, distinguished by skin color and corresponding with the four known continents and temperaments. The tenth edition expanded on these varieties with behavioral and cultural traits that the Linnean Society acknowledges as having cemented colonial stereotypes and provided one of the foundations for scientific racism. As a result of the popularity of the work, and the number of new specimens sent to him from around the world, Linnaeus kept publishing new and ever-expanding editions of his work. It grew from 11 very large pages in the first edition (1735) to 2,400 pages in the 12th edition (1766–1768).
Also, as the work progressed, he made changes: in the first edition, whales were classified as fishes, following the work of Linnaeus' friend and "father of ichthyology" Peter Artedi; in the 10th edition, published in 1758, whales were moved into the mammal class. In this same edition, he introduced two-part names (see binomen) for animal species, as he had done for plant species (see binary name) in the 1753 publication of Species Plantarum. The system eventually developed into modern Linnaean taxonomy, a hierarchically organized biological classification. After Linnaeus' health declined in the early 1770s, publication of editions of Systema Naturae went in two directions. Another Swedish scientist, Johan Andreas Murray, issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793. It was as the Systema Vegetabilium that Linnaeus' work became widely known in England, following translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785). Taxonomy In his Imperium Naturæ, Linnaeus established three kingdoms, namely Regnum Animale, Regnum Vegetabile, and Regnum Lapideum. This approach, the Animal, Vegetable, and Mineral Kingdoms, survives today in the popular mind, notably in the form of the parlour-game question: "Is it animal, vegetable or mineral?" The classification was based on five levels: kingdom, class, order, genus, and species. While species and genus were seen as God-given (or "natural"), the three higher levels were seen by Linnaeus as constructs. The concept behind applying set ranks to all groups was to make a system that was easy to remember and navigate, a task in which most say he succeeded. Linnaeus's work had a huge impact on science; it was indispensable as a foundation for biological nomenclature, now regulated by the Nomenclature Codes. Two of his works, the first edition of the Species Plantarum (1753) for plants and the 10th edition of the Systema Naturæ (1758), are accepted to be among the starting points of nomenclature. Most of his names for species and genera were published at very early dates and thus take priority over those of other, later authors. Zoology has one exception: a monograph on Swedish spiders, Svenska Spindlar, published by Carl Clerck in 1757, whose names therefore take priority over the Linnean names. His exceptional importance to science lay less in the value of his taxonomy than in his deployment of skillful young students abroad to collect specimens. At the close of the 18th century, his system had effectively become the standard for biological classification. Only in the animal kingdom is the higher taxonomy of Linnaeus still more or less recognizable, and some of these names are still in use, but usually not quite for the same groups as used by Linnaeus. He divided the Animal Kingdom into six classes; in the tenth edition (1758), these were Mammalia, Aves, Amphibia, Pisces, Insecta, and Vermes. Linnaeus was one of the first scientists to classify humans as primates (originally Anthropomorpha, for "manlike"), eliciting some controversy for placing people among animals and thus not ruling over nature. He distinguished humans (Homo sapiens) from Homo troglodytes, a species of human-like creatures with exaggerated or non-human characteristics, despite finding limited evidence.
He divided Homo sapiens into four varieties, corresponding with the four known continents and the four temperaments (some editions also included Ferus, for wild children, and Monstrosus, for humans held to be malformed or shaped by extreme environments). The first edition included Europæus albescens (whitish Europeans), Americanus rubescens (reddish Americans), Asiaticus fuscus (tawny Asians), and Africanus nigriculus (blackish Africans). The 10th edition solidified these descriptions by removing the "-ish" qualifiers (e.g. albus "white" instead of albescens "whitish") and revising the characterization of Asiaticus from fuscus (tawny) to luridus (pale yellow). It also incorporated behavioral and cultural traits that the Linnean Society recognizes as having cemented colonial stereotypes and provided one of the foundations for scientific racism. The orders and classes of plants, according to his Systema Sexuale, were never intended to represent natural groups (as opposed to his ordines naturales in his Philosophia Botanica) but were meant only for use in identification. They were used in that sense well into the 19th century. The Linnaean classes for plants, in the Sexual System, were based chiefly on the number and arrangement of stamens, running from Monandria (one stamen) through Cryptogamia (plants whose reproductive parts are concealed). Linnaeus's taxonomy of minerals has long since fallen out of use. In the 10th edition (1758) of the Systema Naturæ, the Linnaean classes of the mineral kingdom were Petræ, Mineræ, and Fossilia. Editions Gmelin's 13th (decima tertia) edition of Systema Naturae (1788–1793) should be carefully distinguished from the more limited Systema Vegetabilium first prepared and published by Johan Andreas Murray in 1774 (but labelled as the "thirteenth edition"). Gmelin's edition was itself published in parts between 1788 and 1793.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sega_Saturn] | [TOKENS: 8945]
The Sega Saturn is a home video game console developed by Sega and released on November 22, 1994, in Japan, May 11, 1995, in North America, and July 8, 1995, in Europe. Part of the fifth generation of video game consoles, it is the successor to the successful Genesis. The Saturn has a dual-CPU architecture and eight processors in total. Its games are on CD-ROM, and its library includes several ports of arcade games as well as original titles. Development of the Saturn began in 1992, the same year Sega's groundbreaking 3D Model 1 arcade hardware debuted. The Saturn was designed around a new CPU from the Japanese electronics company Hitachi. Another video display processor was added in early 1994 to better compete with the 3D graphics of Sony's forthcoming PlayStation. The Saturn was initially successful in Japan but not in the United States, where it was hindered by a surprise May 1995 launch, four months before its scheduled release date. After the debut of the Nintendo 64 in late 1996, the Saturn rapidly lost market share in the US, where it was discontinued in 1998. The Saturn is considered a commercial failure, a situation compounded by the cancellation of Sonic X-treme, planned as the first 3D entry in Sega's popular Sonic the Hedgehog series. The Saturn was succeeded in 1998 by the Dreamcast, having sold 9.26 million units worldwide, most of them in Japan. The Saturn has several well-regarded games, including Nights into Dreams, the Panzer Dragoon series, and the Virtua Fighter series, although much of its library was confined to the Japanese market, where the system fared better than it did in the West. The Saturn's retrospective reception is mixed due to its complex hardware design and limited third-party support; Sega's management has been criticized for its decisions during the Saturn's development and discontinuation. History In the early 1990s, Sega had success with the Genesis (known as the Mega Drive in most countries outside of North America), backed by aggressive advertising campaigns and the popularity of its Sonic the Hedgehog series. Sega also had success with arcade games; in 1992 and 1993, the new Sega Model 1 arcade system board showcased Sega AM2's Virtua Racing and Virtua Fighter (the first 3D fighting game), which were crucial to popularizing 3D polygonal graphics. The Model 1 was expensive, so several alternatives helped bring Sega's newest arcade games to the Genesis, such as the Virtua Processor chip used for Virtua Racing, and the 32X add-on. Development of the Saturn was supervised by Hideki Sato, Sega's director and deputy general manager of research and development. According to project manager Hideki Okamura, the project codenamed Saturn started over two years before its announcement at the Tokyo Toy Show in June 1994. It was developed by the same team that developed the System 32 arcade board. Sato later regretted not using the Model 1 arcade hardware as a base; he had been too concerned about leaving behind the majority of developers, who were focused on sprites rather than 3D. In 1993, Sega and the Japanese electronics company Hitachi formed a joint venture to develop a new CPU for the Saturn, which resulted in the creation of the "SuperH RISC Engine" (or SH-2) later that year. The Saturn was designed around a dual-SH2 configuration. According to Kazuhiro Hamada, Sega's section chief for Saturn development during the system's conception, "the SH-2 was chosen for reasons of cost and efficiency.
The chip has a calculation system similar to a DSP [digital signal processor], but we realized that a single CPU would not be enough to calculate a 3D world." Although the Saturn's design was largely finished before the end of 1993, reports in early 1994 of the technical capabilities of Sony's upcoming PlayStation console prompted Sega to include another video display processor (VDP) to improve 2D performance and 3D texture mapping. Sega considered making CD-ROM-based and cartridge-only versions of the Saturn, but discarded the idea due to concerns over the lower quality and higher price of cartridge games. According to president Tom Kalinske, Sega of America "fought against the architecture of Saturn for quite some time". Seeking an alternative graphics chip for the Saturn, Kalinske attempted to broker a deal with Silicon Graphics, but Sega of Japan rejected the proposal. Silicon Graphics subsequently collaborated with Nintendo on the Nintendo 64. Kalinske, Sony Electronic Publishing's Olaf Olafsson, and Sony America's Mickey Schulhof had discussed development of a joint "Sega/Sony hardware system", which never materialized because of Sega's desire to create hardware for both 2D and 3D visuals and Sony's competing focus on 3D technology. Publicly, Kalinske defended the Saturn's design: "Our people feel that they need the multiprocessing to be able to bring to the home what we're doing next year in the arcades." In 1993, Sega restructured its internal studios in preparation for the Saturn's launch. To ensure that high-quality 3D games would be available early in the Saturn's life, and to create a more energetic working environment, developers from Sega's arcade division were asked to create console games. New teams, such as the Panzer Dragoon developer Team Andromeda, were formed during this time. In early 1994, the Sega Titan Video arcade system was announced as an arcade counterpart to the Saturn. In April 1994, Acclaim Entertainment announced it would be the first American publisher to produce software for the Titan. In January 1994, Sega began to develop the 32X add-on for the Genesis, as a less expensive entry into the 32-bit era. The 32X was approved by Sega CEO Hayao Nakayama and widely supported by Sega of America employees. According to the former Sega of America producer Scot Bayless, Nakayama was worried that the Saturn would not be available until after 1994 and that the recently released Atari Jaguar would reduce Sega's hardware sales. As a result, Nakayama ordered his engineers to have the system ready for launch by the end of the year. The 32X would not be compatible with the Saturn, but Sega executive Richard Brudvik-Lindner pointed out that the 32X would play Genesis games and had the same system architecture as the Saturn. Sega justified this by stating that both platforms would run at the same time, and that the 32X would be aimed at players who could not afford the more expensive Saturn. According to Sega of America research and development head Joe Miller, the 32X familiarized development teams with the dual SH-2 architecture also used in the Saturn. Because the machines shared many parts and were prepared to launch around the same time, tensions emerged between Sega of America and Sega of Japan when the Saturn was given priority. Sega released the Saturn in Japan on November 22, 1994, at a price of ¥44,800 (equivalent to US$440 at the time).
Virtua Fighter, a faithful port of the popular arcade game, sold at a nearly one-to-one ratio with the Saturn console at launch and was crucial to the system's early success in Japan. Though Sega had wanted to launch with Clockwork Knight and Panzer Dragoon, the only other first-party game available at launch was Wan Chai Connection. Boosted by the popularity of Virtua Fighter, Sega's initial shipment of 200,000 Saturn units sold out on the first day. Sega waited until the December 3 launch of the PlayStation to ship more units; when both were sold side by side, the Saturn proved more popular. Meanwhile, Sega released the 32X on November 21, 1994, in North America, December 3, 1994, in Japan, and January 1995 in PAL territories, at less than half of the Saturn's launch price. After the holiday season, however, interest in the 32X rapidly declined. Half a million Saturn units were sold in Japan by the end of 1994 (compared to 300,000 PlayStation units), and sales exceeded 1 million within the following six months. There were conflicting reports that the PlayStation had a higher sell-through rate, and it gradually began to overtake the Saturn in sales during 1995. Sony attracted many third-party developers to the PlayStation with a liberal $10 licensing fee, excellent development tools, and the introduction of a 7- to 10-day order system that allowed publishers to meet demand more efficiently than the 10- to 12-week lead times for cartridges that had previously been standard in the Japanese video game industry. In March 1995, Sega of America CEO Tom Kalinske announced the Saturn's launch in the U.S. on "Saturnday" (Saturday), September 2, 1995. However, Sega of Japan mandated an early launch to give the Saturn an advantage over the PlayStation. At the first Electronic Entertainment Expo (E3) in Los Angeles on May 11, 1995, Kalinske gave a keynote presentation in which he revealed the release price of $399 (including a copy of Virtua Fighter) and described the features of the console. Kalinske also revealed that, due to "high consumer demand", Sega had already shipped 30,000 Saturns to Toys "R" Us, Babbage's, Electronics Boutique, and Software Etc. for immediate release. The announcement upset retailers who were not informed of the surprise release, including Best Buy and Walmart; KB Toys, which was not part of the early launch, responded by refusing to carry the Saturn and its games. Sony subsequently unveiled the retail price for the PlayStation: Olaf Olafsson, the head of Sony Computer Entertainment America (SCEA), summoned Steve Race to the stage; Race uttered "$299" and then walked away to applause. The Saturn's release in Europe also came before the previously announced North American date, on July 8, 1995, at £399.99. European retailers and press did not have time to promote the system or its games, harming sales. The PlayStation launched in Europe on September 29, 1995; by November, it had already outsold the Saturn by a factor of three in the United Kingdom, where Sony had allocated £20 million for marketing during the holiday season compared to Sega's £4 million. The Saturn's U.S. launch was accompanied by a reported $50 million advertising campaign, including coverage in publications such as Wired and Playboy. Early advertising for the system was targeted at a more mature, adult audience than the Genesis ads had been. The rescheduled early launch yielded only six games (all published by Sega), because most third-party games had been scheduled around the original launch date.
Virtua Fighter's relative lack of popularity in the West, combined with a release schedule of only two games between the surprise launch and September 1995, prevented Sega from capitalizing on the Saturn's early timing. Within two days of its North American launch on September 9, 1995, the PlayStation, backed by a large marketing campaign, had sold more units than the Saturn had in the five months following its surprise launch; almost all of the PlayStation's initial shipment of 100,000 units was sold in advance, and the rest sold out across the U.S. A high-quality port of the Namco arcade game Ridge Racer contributed to the PlayStation's early success and garnered favorable press in comparison to the Saturn version of Sega's Daytona USA, which was considered inferior to its arcade counterpart. Namco, a longtime arcade rival of Sega, also unveiled the Namco System 11 arcade board, based on raw PlayStation hardware. Although the System 11 is technically inferior to Sega's Model 2 arcade board, its lower price made it attractive to smaller arcades. Following its 1994 acquisition of developers from Sega, Namco released Tekken for the System 11 and PlayStation. Directed by former Virtua Fighter designer Seiichi Ishii, Tekken was intended to be fundamentally similar, with the addition of detailed textures and twice the frame rate. Tekken surpassed Virtua Fighter in popularity due to its superior graphics and nearly arcade-perfect console port, becoming the first million-selling PlayStation game. On October 2, Sega announced a Saturn price reduction to $299. High-quality Saturn ports of the Sega Model 2 arcade hits Sega Rally Championship, Virtua Cop, and Virtua Fighter 2 (running at 60 frames per second at high resolution) were available by the end of the year and were generally regarded as superior to competing games on the PlayStation. Notwithstanding a subsequent increase in Saturn sales during the 1995 holiday season, the games were not enough to reverse the PlayStation's decisive lead. By 1996, the PlayStation had a considerably larger library than the Saturn, although Sega hoped to generate interest with upcoming exclusives such as Nights into Dreams. An informal survey of retailers showed that the Saturn and PlayStation sold in roughly equal numbers during the first quarter of 1996. Within its first year, the PlayStation secured over 20% of the entire U.S. video game market. On the first day of the May 1996 E3 show, Sony announced a PlayStation price reduction to $199, a reaction to the release of the Model 2 Saturn in Japan at a price roughly equivalent to $199. On the second day, Sega announced it would match this price, though Saturn hardware was more expensive to manufacture.
"I thought the world of [Hayao] Nakayama because of his love of software. We spoke about building a new hardware platform that I would be very, very involved with, shape the direction of this platform, and hire a new team of people and restructure Sega. That, to me, was a great opportunity."
After the launch of the PlayStation and Saturn, sales of 16-bit games and consoles continued to account for 64% of the video game market in 1995. Sega underestimated the continued popularity of the Genesis and did not have the inventory to meet demand. Sega was able to capture 43% of the dollar share of the U.S. video game market and sell more than 2 million Genesis units in 1995, but Kalinske estimated that "we could have sold another 300,000 Genesis systems in the November/December timeframe."
Nakayama's decision to focus on the Saturn over the Genesis, based on the systems' relative performance in Japan, has been cited as the major contributing factor in this miscalculation. Due to long-standing disagreements with Sega of Japan, Kalinske lost interest in his work as CEO of Sega of America. By early 1996, rumors were circulating that Kalinske planned to leave Sega, and a July 13 press article reported speculation that Sega of Japan was planning significant changes to Sega of America's management. On July 16, 1996, Sega announced that Kalinske would leave Sega after September 30, and that Shoichiro Irimajiri had been appointed chairman and CEO of Sega of America. A former Honda executive, Irimajiri had been involved with Sega of America since joining Sega in 1993. Sega also announced that David Rosen and Nakayama had resigned from their positions as chairman and co-chairman of Sega of America, though both remained with the company. Bernie Stolar, a former executive at Sony Computer Entertainment of America, was named Sega of America's executive vice president in charge of product development and third-party relations. Stolar, who had arranged a six-month PlayStation exclusivity deal for Mortal Kombat 3 and helped build close relations with Electronic Arts while at Sony, was perceived as a major asset by Sega officials. Finally, Sega of America made plans to expand its PC software business. Stolar was not supportive of the Saturn, believing it was poorly designed, and publicly announced at E3 1997 that "the Saturn is not our future". Though Stolar had "no interest in lying to people" about the Saturn's prospects, he continued to emphasize quality games for the system, and later said that "we tried to wind it down as cleanly as we could for the consumer". At Sony, Stolar had opposed the localization of Japanese games that he believed would not represent the PlayStation well in North America, and he advocated a similar policy for the Saturn, although he later sought to distance himself from those decisions. These changes were accompanied by the softer image Sega was beginning to project in its advertising, including the removal of the "Sega!" scream and press events held for the education industry. Marketing for the Saturn in Japan also changed with the introduction of Segata Sanshiro (played by Hiroshi Fujioka), a character in a series of TV advertisements starting in 1997; the character eventually starred in a Saturn game. Temporarily abandoning arcade development, Sega AM2 head Yu Suzuki began developing several Saturn-exclusive games, including a role-playing game in the Virtua Fighter series. Initially conceived as a prototype, "The Old Man and the Peach Tree", and intended to address the flaws of contemporary Japanese RPGs (such as poor non-player character artificial intelligence routines), Virtua Fighter RPG evolved into a planned 11-part, 45-hour "revenge epic in the tradition of Chinese cinema", which Suzuki hoped would become the Saturn's killer app. The game was eventually released as Shenmue for the Saturn's successor, the Dreamcast. As Sonic Team was working on Nights into Dreams, Sega tasked the U.S.-based Sega Technical Institute (STI) with developing the first fully 3D entry in its popular Sonic the Hedgehog series. The game, Sonic X-treme, was moved to the Saturn after several prototypes for other hardware (including the 32X) were discarded. It featured a fisheye-lens camera system that rotated levels with Sonic's movement.
After Nakayama ordered that the game be reworked around the engine created for its boss battles, the developers were forced to work between 16 and 20 hours a day to meet their December 1996 deadline. Weeks of development were wasted after Stolar rescinded STI's access to Sonic Team's Nights into Dreams engine following an ultimatum by Nights programmer Yuji Naka. After programmer Ofer Alon quit and designers Chris Senn and Chris Coffin became ill, Sonic X-treme was cancelled in early 1997. Sonic Team started work on an original 3D Sonic game for the Saturn, but development shifted to the Dreamcast as Sonic Adventure. STI was disbanded in 1996 as a result of changes in management at Sega of America. Journalists and fans have speculated about the impact a completed X-treme might have had on the market. David Houghton of GamesRadar described the prospect of "a good 3D Sonic game" on the Saturn as "a 'What if...' situation on a par with the dinosaurs not becoming extinct". IGN's Travis Fahs called X-treme "the turning point not only for Sega's mascot and their 32-bit console, but for the entire company [and] an empty vessel for Sega's ambitions and the hopes of their fans". Dave Zdyrko, who operated a prominent Saturn fan website during the system's lifespan, said: "I don't know if [X-treme] could've saved the Saturn, but [...] Sonic helped make the Genesis and it made absolutely no sense why there wasn't a great new Sonic title ready at or near the launch of the [Saturn]." In a 2007 retrospective, producer Mike Wallis maintained that X-treme "definitely would have been competitive" with Nintendo's Super Mario 64. Next Generation reported in late 1996 that X-treme would have harmed Sega's reputation if it had not compared well to contemporary competition. Naka said he had been relieved by the cancellation, because the game was not promising. From 1993 to early 1996, although Sega's revenue declined as part of an industry-wide slowdown, the company retained control of 38% of the U.S. video game market (compared to Nintendo's 30% and Sony's 24%). Eight hundred thousand PlayStation units were sold in the U.S. by the end of 1995, compared to 400,000 Saturn units. In part due to an aggressive price war, the PlayStation outsold the Saturn by two to one in 1996, and Sega's 16-bit sales declined markedly. By the end of 1996, the PlayStation had sold 2.9 million units in the U.S., more than twice the 1.2 million Saturn units sold. The Christmas 1996 "Three Free" pack, which bundled the Saturn with Daytona USA, Virtua Fighter 2, and Virtua Cop, drove sales dramatically and ensured the Saturn remained a competitor into 1997. However, the Saturn failed to take the lead. After the launch of the Nintendo 64 in 1996, sales of the Saturn and its games fell sharply, and the PlayStation outsold the Saturn by three to one in the U.S. in 1997. The 1997 release of Final Fantasy VII significantly increased the PlayStation's popularity in Japan, pushing its sales ahead of the Saturn there after the two systems had been nearly even prior to the game's release. As of August 1997, Sony controlled 47% of the console market, Nintendo 40%, and Sega only 12%. Neither price cuts nor high-profile game releases proved helpful. Reflecting decreased demand for the system, worldwide Saturn shipments from March to September 1997 declined to 600,000, from 2.35 million in the same period of 1996; shipments in North America declined from 800,000 to 50,000.
Due to the Saturn's poor performance in North America, 60 of Sega of America's 200 employees were laid off in late 1997.
"I thought the Saturn was a mistake as far as hardware was concerned. The games were obviously terrific, but the hardware just wasn't there."
As a result of Sega's deteriorating financial situation, Nakayama resigned as president in January 1998 in favor of Irimajiri. Stolar subsequently became president of Sega of America. Following five years of generally declining profits, in the fiscal year ending March 31, 1998, Sega suffered its first parent and consolidated financial losses since its 1988 listing on the Tokyo Stock Exchange. Due to a 54.8% decline in consumer product sales (including a 75.4% decline overseas), the company reported a net loss of ¥43.3 billion ($327.8 million) and a consolidated net loss of ¥35.6 billion ($269.8 million). Shortly before announcing its financial losses, Sega announced that it was discontinuing the Saturn in North America to prepare for the launch of its successor. Only seven Saturn games were released in North America in 1998 (Magic Knight Rayearth was the final official release), compared to 119 in 1996. The Saturn lasted longer in Japan, with Irimajiri announcing in early 1998 that Sega would continue supporting the Saturn there after its successor was released. Between June 1996 and August 1998, a further 1,103,468 consoles and 29,685,781 games were sold in Japan; the Saturn's Japanese attach rate of 16.71 games per console was the highest of that generation. As of February 1997, the attach rate was four games per console worldwide. Rumors about the upcoming Dreamcast, spread mainly by Sega itself, reached the public before the last Saturn games were released. The Dreamcast was released on November 27, 1998, in Japan and on September 9, 1999, in North America. The decision to abandon the Saturn effectively left the Western market without Sega games for over one year. Sega suffered an additional ¥42.881 billion consolidated net loss in the fiscal year ending March 1999 and announced plans to eliminate 1,000 jobs, nearly a quarter of its workforce. Worldwide Saturn sales include at least the following amounts in each territory: 5.75 million in Japan (surpassing Genesis sales of 3.58 million there), 1.8 million in the United States, 1 million in Europe, and 530,000 elsewhere. With lifetime sales of 9.26 million units, the Saturn is considered a commercial failure, although in Japan, where it did better than in the West, its install base surpassed the Nintendo 64's 5.54 million and it became Sega's highest-selling home console. The Saturn ultimately shipped more than 6 million units in Japan. Lack of distribution has been cited as a significant factor in the Saturn's failure, because the system's surprise launch had damaged Sega's reputation with key retailers. Conversely, Nintendo's long delay in releasing a 3D console, along with the damage to Sega's reputation caused by poorly supported Genesis add-ons, is considered a major factor in Sony's establishment in the video game market. Technical specifications The Saturn features eight processors. Its central processing units are two Hitachi SH-2 microprocessors clocked at 28.6 MHz and capable of 56 MIPS.
It uses a Motorola 68EC000 running at 11.3 MHz as a sound controller; a custom sound processor with an integrated Yamaha FH1 DSP running at 22.6 MHz, capable of up to 32 sound channels with both FM synthesis and 16-bit 44.1 kHz pulse-code modulation; and two video display processors: the VDP1 (which handles sprites and polygons) and the VDP2 (which handles backgrounds). Its double-speed CD-ROM drive is controlled by a dedicated Hitachi SH-1 processor to reduce load times. The System Control Unit (SCU), which controls all buses and functions as a co-processor of the main SH-2 CPU, has an internal DSP running at 14.3 MHz. The console features a cartridge slot that allows memory expansion, 16 Mbit of work random-access memory (RAM), 12 Mbit of video RAM, 4 Mbit of RAM for sound functions, 4 Mbit of CD buffer RAM, and 256 Kbit (32 KB) of battery-backed RAM. Its RCA video output displays at resolutions from 320×224 to 704×224 pixels, with up to 16.78 million colors. The Saturn measures 260 mm × 230 mm × 83 mm (10.2 in × 9.1 in × 3.3 in). It was packaged with an instruction manual, control pad, stereo AV cable, and a 100 V AC power supply consuming approximately 15 W.
"One very fast central processor would be preferable. I don't think all programmers have the ability to program two CPUs—most can only get about one-and-a-half times the speed you can get from one SH-2. I think that only 1 in 100 programmers are good enough to get this kind of speed [double] out of the Saturn."
The Saturn had technically impressive hardware at the time of its release, but its complexity made harnessing this power difficult for developers accustomed to conventional programming. The greatest disadvantage was that both CPUs shared the same bus and were unable to access system memory at the same time. Making full use of the 4 KB of cache memory in each CPU was critical to maintaining performance. For example, Virtua Fighter used one CPU for each character, while Nights used one CPU for 3D environments and the other for 2D objects. The second video display processor (VDP2), which can generate and manipulate backgrounds, has also been cited as one of the system's most important features. The Saturn's design elicited mixed commentary among game developers and journalists. Developers quoted by Next Generation in December 1995 described the Saturn as "a real coder's machine [for] those who love to get their teeth into assembly and really hack the hardware [with] more flexibility [and] more calculating power than the PlayStation". The sound board was widely praised. Lobotomy Software programmer Ezra Dreisbach described the Saturn as significantly slower than the PlayStation, whereas Kenji Eno of WARP observed little difference. In particular, Dreisbach criticized the Saturn's use of quadrilaterals as its basic geometric primitive, in contrast to the triangles rendered by the PlayStation and the Nintendo 64. Ken Humphries of Time Warner Interactive remarked that, compared to the PlayStation, the Saturn was worse at generating polygons but better at sprites. Third-party development was initially hindered by the lack of useful software libraries and development tools, requiring developers to write in assembly language. During early Saturn development, programming in assembly could yield a speed increase of two to five times over higher-level languages such as C. Sega responded to complaints about the difficulty of programming for the Saturn by writing new graphics libraries that it claimed would make development easier.
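The division of labor described above (one CPU per character in Virtua Fighter; 3D environments on one CPU and 2D objects on the other in Nights) amounts to a master/worker split over shared memory. The following C sketch illustrates only that pattern and is not Saturn code: POSIX threads stand in for the second SH-2, and names such as slave_entry are hypothetical, since real Saturn titles used Sega's libraries or assembly.

#include <pthread.h>
#include <stdio.h>

#define N_OBJECTS 64

typedef struct { float x, y, z; } Vec3;

static Vec3 objects[N_OBJECTS];   /* stands in for shared work RAM */

/* Stand-in for per-object 3D math. On the Saturn, keeping each half of
   the data small enough for a CPU's 4 KB cache mattered, because both
   CPUs contended for a single shared bus. */
static void transform(Vec3 *v, int n) {
    for (int i = 0; i < n; i++) {
        v[i].x += 1.0f;
        v[i].y += 1.0f;
    }
}

/* The second CPU's share of the frame (hypothetical name). */
static void *slave_entry(void *arg) {
    (void)arg;
    transform(&objects[N_OBJECTS / 2], N_OBJECTS / 2);
    return NULL;
}

int main(void) {
    pthread_t slave;
    pthread_create(&slave, NULL, slave_entry, NULL); /* hand off half the work */
    transform(objects, N_OBJECTS / 2);               /* master does the other half */
    pthread_join(slave, NULL);                       /* synchronize once per frame */
    printf("frame processed: first object at x=%.1f\n", objects[0].x);
    return 0;
}

The single join per frame reflects the coarse synchronization developers aimed for: frequent data sharing between the two SH-2s would serialize them on the common bus, which is consistent with the quoted estimate that most programmers managed only about one-and-a-half times single-CPU speed. Dreisbach's complaint about quadrilaterals can be made similarly concrete. Triangle-based hardware such as the PlayStation's accepts any quad once it is split in two, while a triangle has no natural quad representation; the Saturn's VDP1 instead rendered a quad by distorting a four-cornered sprite. The types below are illustrative only:

/* Any quad with corners a, b, c, d (in winding order) can be submitted
   to triangle hardware as the two triangles ABC and ACD. */
typedef struct { float x, y; } Vertex;
typedef struct { Vertex v[3]; } Triangle;

void quad_to_triangles(Vertex a, Vertex b, Vertex c, Vertex d,
                       Triangle *t1, Triangle *t2) {
    t1->v[0] = a; t1->v[1] = b; t1->v[2] = c;  /* triangle ABC */
    t2->v[0] = a; t2->v[1] = c; t2->v[2] = d;  /* triangle ACD */
}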
Sega of America purchased a United Kingdom-based development firm, Cross Products, to produce the Saturn's development system. Treasure CEO Masato Maegawa stated that the Nintendo 64 was more difficult to develop for than the Saturn. Traveller's Tales founder Jon Burton said that though the PlayStation was easier "to get started on [...] you quickly reach [its] limits", whereas the Saturn's "complicated [hardware could] improve the speed and look of a game when all used together correctly". A major criticism was the Saturn's use of 2D sprites to generate polygons and simulate 3D space. The PlayStation has a different design, based entirely on 3D triangle-based polygonal rendering, with no direct 2D support. As a result, several analysts described the Saturn as an "essentially" 2D system. For example, Steven L. Kent stated: "Although Nintendo and Sony had true 3D game machines, Sega had a 2D console that did a good job with 3D objects but wasn't optimized for 3D environments." The Saturn hardware is extremely difficult to emulate. Several Saturn models were produced in Japan. An updated model in a recolored light gray (officially white) was released at ¥20,000 to reduce the system's cost and raise its appeal among women and younger children. Two models were released by third parties: Hitachi released the Hi-Saturn (a smaller model equipped with a car-navigation function), and JVC released the V-Saturn. Saturn controllers came in various complementary color schemes. The system also supports several accessories. A wireless controller powered by AA batteries connects using an infrared signal. Designed to work with Nights, the Saturn 3D Pad includes both a control pad and an analog stick for directional input. Sega also released several arcade sticks as peripherals, including the Virtua Stick, the Virtua Stick Pro, the Mission Analog Stick, and the Twin Stick. Sega created a light-gun peripheral, the Virtua Gun, for shooting games such as Virtua Cop, and the Arcade Racer, a wheel for racing games. The Play Cable connects two Saturn consoles for multiplayer gaming across two screens, and a multitap connects up to six players to the same console; one console with two multitaps can support up to 12 players. Other accessories include RAM expansion cartridges, a keyboard, a mouse, a floppy disk drive, and a movie card. Like the Genesis, the Saturn had an Internet-based gaming service. The Sega NetLink is a 28.8 kbit/s modem that fits into the cartridge slot and enables direct-dial multiplayer in Daytona USA, Duke Nukem 3D, Saturn Bomberman, Sega Rally, and Virtual On: Cyber Troopers. The NetLink can also be used for web browsing, email, and online chat; in Japan, it operated as a pay-to-play service. Because the NetLink was released before the keyboard, Sega produced a series of CDs containing hundreds of website addresses so that Saturn owners could browse with the joypad. In 1995, Sega announced a Saturn variant with a built-in NetLink modem, codenamed Pluto, but it was never released. Sega developed a Saturn-based arcade board, the Sega ST-V (or Titan), intended as an affordable alternative to Sega's Model 2 arcade board and as a testing ground for upcoming Saturn software. Yu Suzuki criticized the Titan's comparatively weak performance against the Model 2, and the board was overproduced by Sega's arcade division. Because Sega already had the Die Hard license, members of Sega AM1 working at the Sega Technical Institute developed Die Hard Arcade for the Titan to clear excess inventory.
Die Hard Arcade became the most successful Sega arcade game produced in the United States up to that point. Other games released for the Titan include Golden Axe: The Duel and Virtua Fighter Kids. Game library Much of the Saturn's library comprises Sega's arcade ports, including Daytona USA, The House of the Dead, Last Bronx, Sega Rally Championship, the Virtua Cop series, the Virtua Fighter series, and Virtual-On. Ports of 2D Capcom fighting games, including Vampire Savior, Marvel Super Heroes vs. Street Fighter, and Street Fighter Alpha 3, were noted for their faithfulness to their arcade originals. Fighters Megamix, developed by Sega AM2 for the Saturn rather than for arcades, combined characters from Fighting Vipers and Virtua Fighter to positive reviews. Highly rated Saturn exclusives include Panzer Dragoon Saga, Dragon Force, Guardian Heroes, Nights, Panzer Dragoon II Zwei, and Shining Force III. PlayStation games such as Castlevania: Symphony of the Night, Resident Evil, and Wipeout 2097 received Saturn ports with mixed results. The first-person shooter PowerSlave featured some of the most impressive 3D graphics on the Saturn, leading Sega to contract its developer, Lobotomy Software, to produce ports of Duke Nukem 3D and Quake. While Electronic Arts's limited support for the Saturn and Sega's failure to develop a football game for late 1995 gave Sony the lead in the sports genre, Sega Sports published well-regarded Saturn sports games, including the World Series Baseball and Sega Worldwide Soccer series. Due to the cancellation of Sonic X-treme, the Saturn lacks an exclusive Sonic the Hedgehog platformer. Instead, it received a graphically enhanced port of the Genesis game Sonic 3D Blast, as well as the compilation Sonic Jam and the racing game Sonic R. The main character of the platformer Bug! was seen as a potential mascot for the Saturn but failed to catch on as the Sonic series had. Instead, Sonic Team developed the score-attack game Nights into Dreams, considered one of the most important Saturn games. The gameplay involves steering the imp-like protagonist Nights as it flies on a mostly 2D plane across surreal stages. Although it lacked the fully 3D environments of Nintendo's Super Mario 64, Nights' emphasis on unfettered movement and graceful acrobatic techniques showcased the intuitive potential of analog control. Sonic Team's next game, Burning Rangers, a fully 3D action-adventure game about a team of outer-space firefighters, garnered praise for its transparency effects and distinctive art direction, but was released in limited quantities late in the Saturn's lifespan and criticized for its short length. Many well-regarded Saturn games were exclusive to Japan, such as the Sakura Wars series. Co-developed by Sega and Red Entertainment, Sakura Wars mixes elements of tactical RPGs, anime cutscenes, and visual novels. Sakura Wars and Grandia helped popularize the Saturn in Japan but never had a Western release, due to Sega of America's policy of not localizing RPGs and other Japanese games that might have damaged the Saturn's reputation in North America. Some games that launched on the Saturn, such as Dead or Alive, Grandia, and Lunar: Silver Star Story Complete, were only released on the PlayStation in the West. Working Designs localized several Japanese Saturn games but switched to the PlayStation following a public feud between Stolar and the Working Designs president, Victor Ireland.
According to the review aggregator GameRankings, Panzer Dragoon Saga is the most acclaimed Saturn game. It was praised for its cinematic presentation, evocative plot, and unique battle system. However, Sega released fewer than 20,000 retail copies in North America, in what IGN's Levi Buchanan characterized as an example of the Saturn's "ignominious send-off" in the region. Similarly, only the first of three installments of Shining Force III was released outside Japan. The Saturn's library was criticized for its lack of sequels to high-profile Sega Genesis franchises, with Sega of Japan's cancellation of a third installment in Sega of America's popular Eternal Champions series cited as a significant source of controversy. Later ports of Saturn games, including Guardian Heroes, Nights into Dreams, and Shin Megami Tensei: Devil Summoner: Soul Hackers, continued to garner positive reviews. Partly due to their rarity, Saturn games such as Panzer Dragoon Saga and Radiant Silvergun are noted for their cult following. Due to the Saturn's commercial failure and hardware limitations, games such as Resident Evil 2, Shenmue, Sonic Adventure, and Virtua Fighter 3 were cancelled and moved to the Dreamcast. Reception and legacy At the time of the Saturn's release, Famicom Tsūshin awarded it 24 out of 40, higher than the PlayStation's 19 out of 40. In June 1995, Dennis Lynch of the Chicago Tribune and Albert Kim of Entertainment Weekly praised the Saturn as the most advanced console available; Lynch praised the double-speed CD-ROM drive and "intense surround-sound capabilities", and Kim cited Panzer Dragoon as a "lyrical and exhilarating epic" demonstrating the ability of new technology to "transform" the industry. In December 1995, Next Generation gave the Saturn three and a half stars out of five, highlighting Sega's marketing and arcade background as strengths and the system's complexity as a weakness. Four critics in Electronic Gaming Monthly's December 1996 Buyer's Guide rated the Saturn 8, 6, 7, and 8 out of 10, and the PlayStation 9, 10, 9, and 9. By December 1998, EGM's reviews were more mixed, with reviewers citing the lack of games as a major problem. According to EGM reviewer Crispin Boyer, "the Saturn is the only system that can thrill me one month and totally disappoint me the next". Retrospective feedback on the Saturn is mixed but generally praises its game library. According to Greg Sewart of 1UP.com, "the Saturn will go down in history as one of the most troubled, and greatest, systems of all time". In 2009, IGN named the Saturn the 18th-best console of all time, praising its unique game library. According to the reviewers, "While the Saturn ended up losing the popularity contest to both Sony and Nintendo [...] Nights into Dreams, the Virtua Fighter and Panzer Dragoon series are all examples of exclusive titles that made the console a fan favorite." Edge noted that "hardened loyalists continue to reminisce about the console that brought forth games like Burning Rangers, Guardian Heroes, Dragon Force and Panzer Dragoon Saga". In 2015, The Guardian's Keith Stuart wrote that "the Saturn has perhaps the strongest line-up of 2D shooters and fighting games in console history". Retro Gamer's Damien McFerran wrote: "Even today, despite the widespread availability of sequels and re-releases on other formats, the Sega Saturn is still a worthwhile investment for those who appreciate the unique gameplay styles of the companies that supported it."
IGN's Adam Redsell wrote: "[Sega's] devil-may-care attitude towards game development in the Saturn and Dreamcast eras is something that we simply do not see outside of the indie scene today." Necrosoft Games director Brandon Sheffield said that "the Saturn was a landing point for games that were too 'adult' in content for other systems, as it was the only one that allowed an 18+ rating for content in Japan [...] some games, like Enemy Zero, used it to take body horror to new levels, an important step toward the expansion of games and who they served." Sewart praised the Saturn's first-party games as "Sega's shining moment as a game developer", with Sonic Team demonstrating its creative range and AM2 producing numerous technically impressive arcade ports. He also commented on the many Japan-exclusive Saturn releases, which he connected with a subsequent boom in the game-import market. IGN's Travis Fahs was critical of the Saturn library's lack of "fresh ideas" and "precious few high-profile franchises", in contrast to what he described as Sega's more creative Dreamcast output. Sega has been criticized for its management of the Saturn. McFerran said its management staff had "fallen out of touch with both the demands of the market and the industry". Stolar has also been criticized; according to Fahs, "Stolar's decision to abandon the Saturn made him a villain to many Sega fans, but [...] it was better to regroup than to enter the next fight battered and bruised. Dreamcast would be Stolar's redemption." Stolar defended his decision, saying, "I felt Saturn was hurting the company more than helping it. That was a battle that we weren't going to win." Sheffield said that the Saturn's quadrilaterals undermined third-party support, but because "nVidia invested in quads" at the same time, there had been "a remote possibility" they could have "become the standard instead of triangles [...] if somehow, magically, the Saturn were the most popular console of that era." Speaking more positively, former Working Designs president Victor Ireland described the Saturn as "the start of the future of console gaming" because it "got the better developers thinking and designing with parallel-processing architecture in mind for the first time". In GamesRadar, Justin Towell wrote that the Saturn's 3D Pad "set the template for every successful controller that followed, with analog shoulder triggers and left thumbstick [...] I don't see any three-pronged controllers around the office these days." Douglass C. Perry of Gamasutra noted that, from its surprise launch to its ultimate failure, the Saturn "soured many gamers on Sega products". Sewart and IGN's Levi Buchanan cited the failure of the Saturn as the major reason for Sega's downfall as a hardware manufacturer, but USgamer's Jeremy Parish described it as "more a symptom [...] than a cause" of the decline, which began with add-ons for the Genesis that fragmented the market and continued with Sega of America's and Sega of Japan's competing designs for the Dreamcast. Sheffield portrayed Sega's mistakes with the Saturn as emblematic of the broader decline of the Japanese gaming industry at the time: "They thought they were invincible, and that structure and hierarchy were necessary for their survival, but more flexibility, and a greater participation with the West could have saved them." According to Stuart, Sega "didn't see [...]
the roots of a prevailing trend, away from arcade conversions and traditional role-playing adventures and toward a much wider console development community with fresh ideas about gameplay and structure". Pulp365 reviews editor Matt Paprocki concluded that "the Saturn is a relic, but an important one, which represents the harshness of progress and what it can leave in its wake".
========================================