Version 2.2 (7-day trial, 2015-05-27). Browser version: published by MobiSystems, Inc.; includes 350,000 words and 75,000 audio pronunciations in both British and American voices. Version 2.1.0.4 (full version, 2015-04-07). Oxford References online edition: 1st impression (2011-??-??), ?th impression (2015-??-??). Fictitious entry The dictionary includes an entry for the word "esquivalience," which it defines as meaning "the willful avoidance of one's official responsibilities." This is a fictitious entry, intended to protect the copyright of the publication. The entry was invented by Christine Lindberg, one of the editors of the NOAD. With the publication of the second edition, a rumor circulated that the dictionary contained a fictitious entry under the letter 'e'. New Yorker contributing editor Henry Alford combed the section and discussed several unusual entries he found with a group of American lexicographers. Most found "esquivalience" the most likely candidate, and when Alford approached NOAD editor-in-chief Erin McKean, she confirmed it was a fake entry that had been present since the first edition in order to protect the copyright of the CD-ROM edition. Of the word, she said, "its inherent fakeitude is fairly obvious." The fake entry apparently ensnared Dictionary.com, which included an entry for it (since removed) attributed to Webster's New Millennium Dictionary; both are owned by the private company Lexico. Possibly because of its licensing of Oxford dictionaries, Google Dictionary also included the word, listing three meanings and giving usage examples.
Other Oxford dictionaries Oxford American Dictionary (OAD) Oxford English Dictionary (OED) Shorter Oxford English Dictionary (SOED) Oxford Dictionary of English (ODE) Concise Oxford English Dictionary (COED) Australian Oxford Dictionary (AOD) Canadian Oxford Dictionary (CanOD) Oxford Advanced Learner's Dictionary (OALD) See also Dord Trap street References Bibliography New Oxford American Dictionary, First Edition, Elizabeth J. Jewell and Frank R. Abate (editors), 2192 pages, September 2001, Oxford University Press. New Oxford American Dictionary, Second Edition, Erin McKean (editor), 2096 pages, May 2005, Oxford University Press. New Oxford American Dictionary, Third Edition, Angus Stevenson and Christine A. Lindberg (editors), 2096 pages, August 2010, Oxford University Press. External links Oxford references pages: 3rd edition Oxford University Press pages: 3rd edition The New Oxford American Dictionary, Second Edition website MobiSystems pages: New Oxford American Dictionary with Audio Google Play pages: New Oxford American Dictionary iTunes pages: iOS WordWeb pages: New Oxford American Dictionary
Second edition Published in May 2005, the second edition was edited by Erin McKean. The edition added nearly 3,000 new words, senses, and phrases. It was in a large format, with 2096 pages, 8½" by 11" in size. It included a CD-ROM with the full text of the dictionary for Palm OS devices. Since 2005, Apple Inc.'s Mac OS X operating system has come bundled with a dictionary application and widget that credit "Oxford American Dictionaries" as their source and contain the full text of NOAD2. The Amazon Kindle reading device also uses NOAD as its built-in dictionary, along with a choice of the Oxford Dictionary of English. Oxford University Press published NOAD2 in electronic form in 2006 at OxfordAmericanDictionary.com, and in 2010, along with the Oxford Dictionary of English, as part of Oxford Dictionaries Online. Third edition Published in August 2010, the third edition was edited by Angus Stevenson and Christine A. Lindberg. This edition includes over 2,000 new words, senses, and phrases; over 1,000 (1,225) illustrations; hundreds of new and revised explanatory notes; and a new "Word Trends" feature that charts usage for rapidly changing words and phrases. hardcover edition () ?th impression (2010-09-02) Android version: Published by MobiSystems, Inc. The premium version includes unlimited-time use, offline mode, priority support, and no ads. Version 5.1.020 (): Includes a redesigned user interface, the ability to share word definitions, a 'Word of the Day' feature, and a new camera search function. Version 7.1.184 (): Supports split screen for Android 7 and Shortcut Items for Android 7.1 (Camera, Voice Search, Dictionary). Version 7.1.191 (30-day trial, Android 4.1, 2017-01-03): Includes over 350,000 words, phrases and meanings, and 75,000 audio pronunciations of both common and rare words, available in both British and American voices. iOS version: Published by MobiSystems, Inc.
Version 8.1 (): Includes a redesigned user interface, the ability to share word definitions, a 'Word of the Day' feature, and a new camera search function. Version 8.5.4 (): Includes invite-and-share for iPhone 6S, iPhone 6S+ and iPhone 7 users. Version 8.5.6 (full version, iOS 8, 2017-02-23/24): Includes Voice Over and Voice Search for iOS 10. Windows version: Published by MobiSystems, Inc.
The major division is between the Western and Eastern families of New Latin. The Western family includes most Romance-speaking regions (France, Spain, Portugal, Italy) and the British Isles; the Eastern family includes Central Europe (Germany and Poland), Eastern Europe (Russia and Ukraine) and Scandinavia (Denmark, Sweden). The Western family is characterized, inter alia, by having a front variant of the letter g before the vowels æ, e, i, œ, y and also pronouncing j in the same way (except in Italy). In the Eastern Latin family, j is always pronounced /j/, and g had the same sound (usually /ɡ/) in front of both front and back vowels; exceptions developed later in some Scandinavian countries. The following table illustrates some of the variation of New Latin consonants found in various countries of Europe, compared to the Classical Latin pronunciation of the 1st centuries BC to AD. In Eastern Europe, the pronunciation of Latin was generally similar to that shown in the table below for German, but usually with /z/ for z instead of /ts/. Orthography New Latin texts are primarily found in early printed editions, which present certain features of spelling and the use of diacritics distinct from the Latin of antiquity, medieval Latin manuscript conventions, and representations of Latin in modern printed editions. Characters In spelling, New Latin, in all but the earliest texts, distinguishes the letter u from v and i from j. In older texts printed down to c. 1630, v was used in initial position (even when it represented a vowel, e.g. in vt, later printed ut) and u was used elsewhere, e.g. in nouus, later printed novus. By the mid-17th century, the letter v was commonly used for the consonantal sound of Roman V, which in most pronunciations of Latin in the New Latin period was /v/ (and not /w/), as in vulnus "wound", corvus "crow". Where the pronunciation remained /w/, as after g, q and s, the spelling u continued to be used for the consonant, e.g. in lingua, qualis, and suadeo.
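The Western-family rule above, that g takes its front ("soft") value only before æ, e, i, œ, y, is mechanical enough to sketch in code. This is a toy illustration only; the function name and interface are invented here, not taken from any source:

```python
# Front vowels that trigger the "soft" g in Western New Latin pronunciations.
FRONT_VOWELS = set("æeiœy")

def western_g_is_soft(word: str, i: int) -> bool:
    """Return True if the g at position i of `word` would take the
    fronted ("soft") value in a Western-family pronunciation, i.e.
    when it is immediately followed by one of æ, e, i, œ, y."""
    if word[i] != "g":
        raise ValueError("position i must hold the letter g")
    return i + 1 < len(word) and word[i + 1] in FRONT_VOWELS

print(western_g_is_soft("genus", 0))    # True  (g before e)
print(western_g_is_soft("gaudium", 0))  # False (g before a back vowel)
```

In the Eastern family the predicate would simply return False everywhere, since g kept a single (usually hard) value before both front and back vowels.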
The letter j generally represented a consonantal sound (pronounced in various ways in different European countries). It appeared, for instance, in jam "already" or jubet "he/she orders" (earlier spelled iam and iubet). It was also found between vowels in the words ejus, hujus, cujus (earlier spelled eius, huius, cuius), and pronounced as a consonant; likewise in such forms as major and pejor. J was also used when it was the last in a sequence of two or more i's, e.g. radij (now spelled radii) "rays", alijs "to others", iij, the Roman numeral 3; however, ij was for the most part replaced by ii by 1700. In common with texts in other languages using the Roman alphabet, Latin texts down to c. 1800 used the letter-form ſ (the long s) for s in positions other than at the end of a word, e.g. ipſiſſimus. The digraphs ae and oe were rarely so written (except when part of a word in all capitals, e.g. in titles, chapter headings, or captions); instead the ligatures æ and œ were used, e.g. Cæsar, pœna. More rarely (and usually in 16th- to early 17th-century texts), the e caudata is found substituting for either. Diacritics Three kinds of diacritic were in common use: the acute accent ´, the grave accent `, and the circumflex accent ˆ. These were normally only marked on vowels (e.g. í, è, â); but see below regarding que. The acute accent marked a stressed syllable, but was usually confined to those where the stress was not in its normal position, as determined by vowel length and syllabic weight. In practice, it was typically found on the vowel in the syllable immediately preceding a final clitic, particularly que "and", ve "or" and ne, a question marker; e.g. idémque "and the same (thing)". Some printers, however, put this acute accent over the q in the enclitic que, e.g. eorumq́ue "and their". The acute accent fell out of favor by the 19th century. The grave accent had various uses, none related to pronunciation or stress.
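Several of the printing conventions just described (the long s, the æ/œ ligatures, word-final ij, and the accents, which carried stress or grammatical hints rather than spelling) lend themselves to mechanical normalization. A minimal sketch, with all names invented for illustration and only the simple cases handled:

```python
import re
import unicodedata

def modernize_print(text: str) -> str:
    """Toy normalizer mapping a few early-print Latin conventions
    to modern spelling (an illustrative sketch, not exhaustive)."""
    # Long s (ſ) is plain s in modern spelling: ipſiſſimus -> ipsissimus.
    text = text.replace("ſ", "s")
    # The ligatures æ/œ stood for the digraphs ae/oe: Cæsar -> Caesar.
    for old, new in (("æ", "ae"), ("Æ", "Ae"), ("œ", "oe"), ("Œ", "Oe")):
        text = text.replace(old, new)
    # Word-final ij was for the most part replaced by ii by 1700: radij -> radii.
    text = re.sub(r"ij\b", "ii", text)
    # Acute, grave, and circumflex accents marked stress or distinguished
    # homographs, not spelling; drop all combining marks.
    decomposed = unicodedata.normalize("NFD", text)
    text = "".join(c for c in decomposed if unicodedata.category(c) != "Mn")
    return unicodedata.normalize("NFC", text)

print(modernize_print("ipſiſſimus"))    # ipsissimus
print(modernize_print("eâdem formâ"))   # eadem forma
print(modernize_print("radij"))         # radii
```

Note that the u/v and i/j redistributions are deliberately left out: they depend on whether the letter was a vowel or a consonant in the particular word, which a purely orthographic pass cannot decide.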
It was always found on the preposition à (variant of ab "by" or "from") and likewise on the preposition è (variant of ex "from" or "out of"). It might also be found on the interjection ò "O". Most frequently, it was found on the last (or only) syllable of various adverbs and conjunctions, particularly those that might be confused with prepositions or with inflected forms of nouns, verbs, or adjectives. Examples include certè "certainly", verò "but", primùm "at first", pòst "afterwards", cùm "when", adeò "so far, so much", unà "together", quàm "than". In some texts the grave was found over the clitics such as que, in which case the acute accent did not appear before them. The circumflex accent represented metrical length (generally not distinctively pronounced in the New Latin period) and was chiefly found over an a representing an ablative singular case, e.g. eâdem formâ "with the same shape". It might also be used to distinguish two words otherwise spelled identically, but distinct in vowel length; e.g. hîc "here" differentiated from hic "this", fugêre "they have fled" (=fūgērunt) distinguished from fugere "to flee", or senatûs "of the senate" distinct from senatus "the senate". It might also be used for vowels arising from contraction, e.g. nôsti for novisti "you know", imperâsse for imperavisse "to have commanded", or dî for dei or dii. Notable works (1500–1900) Literature and biography 1511. Stultitiæ Laus, essay by Erasmus. 1516. Utopia by Thomas More 1525 and 1538. Hispaniola and Emerita, two comedies by Juan Maldonado. 1546. Sintra, a poem by Luisa Sigea de Velasco. 1602. Cenodoxus , a play by Jacob Bidermann. 1608. Parthenica , two books of poetry by Elizabeth Jane Weston. 1621. Argenis, a novel by John Barclay. 1626–1652. Poems by John Milton. 1634. Somnium, a scientific fantasy by Johannes Kepler. 1741. Nicolai Klimii Iter Subterraneum , a satire by Ludvig Holberg. 1761. Slawkenbergii Fabella, short parodic piece in Laurence Sterne's Tristram Shandy. 
1767. Apollo et Hyacinthus, intermezzo by Rufinus Widl (with music by Wolfgang Amadeus Mozart). 1835. Georgii Washingtonii, Americæ Septentrionalis Civitatum Fœderatarum Præsidis Primi, Vita, biography of George Washington by Francis Glass. Scientific works 1543. De Revolutionibus Orbium Cœlestium by Nicolaus Copernicus 1545. Ars Magna by Hieronymus Cardanus 1551–58 and 1587. Historia animalium by Conrad Gessner. 1600. De Magnete, Magneticisque Corporibus et de Magno Magnete Tellure by William Gilbert. 1609. Astronomia nova by Johannes Kepler. 1610. Sidereus Nuncius by Galileo Galilei. 1620. Novum Organum by Francis Bacon. 1628. Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus by William Harvey. 1659. Systema Saturnium by Christiaan Huygens. 1673. Horologium Oscillatorium by Christiaan Huygens. Also at Gallica. 1687. Philosophiæ Naturalis Principia Mathematica by Isaac Newton. 1703. Hortus Malabaricus by Hendrik van Rheede. 1735. Systema Naturae by Carl Linnaeus. 1737. Mechanica sive motus scientia analytice exposita by Leonhard Euler. 1738. Hydrodynamica, sive de viribus et motibus fluidorum commentarii by Daniel Bernoulli. 1747. Antilucretius by Cardinal de Polignac 1748. Introductio in analysin infinitorum by Leonhard Euler. 1753. Species Plantarum by Carl Linnaeus. 1758. Systema Naturae (10th ed.) by Carolus Linnaeus. 1791. De viribus electricitatis in motu musculari by Aloysius Galvani. 1801. Disquisitiones Arithmeticae by Carl Gauss. 1810. Prodromus Florae Novae Hollandiae et Insulae Van Diemen by Robert Brown. 1830. Fundamenta nova theoriae functionum ellipticarum by Carl Gustav Jacob Jacobi. 1840. Flora Brasiliensis by Carl Friedrich Philipp von Martius. 1864. Philosophia zoologica by Jan van der Hoeven. 1889. Arithmetices principia, nova methodo exposita by Giuseppe Peano Other technical subjects 1511–1516. De Orbe Novo Decades by Peter Martyr d'Anghiera. 1514. De Asse et Partibus by Guillaume Budé. 1524. De motu Hispaniæ by Juan Maldonado. 
1525. De subventione pauperum sive de humanis necessitatibus libri duo by Juan Luis Vives. 1530. Syphilis, sive, De Morbo Gallico by Girolamo Fracastoro(transcription) 1531. De disciplinis libri XX by Juan Luis Vives. 1552. Colloquium de aulica et privata vivendi ratione by Luisa Sigea de Velasco. 1553. Christianismi Restitutio by Michael Servetus. A mainly theological treatise, where the function of pulmonary circulation was first described by a European, more than half a century before Harvey. For the non-trinitarian message of this book Servetus was denounced by Calvin and his followers, condemned by the French Inquisition, and burnt alive just outside Geneva. Only three copies survived. 1554. De naturæ philosophia seu de Platonis et Aristotelis consensione libri quinque by Sebastián Fox Morcillo. 1582. Rerum Scoticarum Historia by George Buchanan (transcription) 1587. Minerva sive de causis linguæ Latinæ by Francisco Sánchez de las Brozas. 1589. De natura Novi Orbis libri duo et de promulgatione euangelii apud barbaros sive de procuranda Indorum salute by José de Acosta. 1597. Disputationes metaphysicæ by Francisco Suárez. 1599. De rege et regis institutione by Juan de Mariana. 1604–1608. Historia sui temporis by Jacobus Augustus Thuanus. 1612. De legibus by Francisco Suárez. 1615. De Christiana expeditione apud Sinas by Matteo Ricci and Nicolas Trigault. 1625. De jure belli ac pacis by Hugo Grotius. (Posner Collection facsimile; Gallica facsimile) 1641. Meditationes de prima philosophia by René Descartes. (The Latin, French and English by John Veitch.) 1642–1658. Elementa Philosophica by Thomas Hobbes. 1652–1654. Œdipus Ægyptiacus by Athanasius Kircher. 1655. Novus Atlas Sinensis by Martino Martini. 1656. Flora Sinensis by Michael Boym. 1657. Orbis Sensualium Pictus by John Amos Comenius. (Hoole parallel Latin/English translation, 1777; Online version in Latin) 1670. Tractatus Theologico-Politicus by Baruch Spinoza. 1677. 
Ethica, ordine geometrico demonstrata by Baruch Spinoza. 1725. Gradus ad Parnassum by Johann Joseph Fux. An influential treatise on musical counterpoint. 1780. De rebus gestis Caroli V Imperatoris et Regis Hispaniæ and De rebus Hispanorum gestis ad Novum Orbem Mexicumque by Juan Ginés de Sepúlveda. 1891. De primis socialismi germanici lineamentis apud Lutherum, Kant, Fichte et Hegel by Jean Jaurès See also Binomial nomenclature Botanical Latin Classical compound Ludwig Boltzmann Institute for Neo-Latin Studies Romance languages, sometimes called Neo-Latin languages Notes Further reading Black, Robert. 2007. Humanism and Education in Medieval and Renaissance Italy. Cambridge, UK: Cambridge Univ. Press. Bloemendal, Jan, and Howard B. Norland, eds. 2013. Neo-Latin Drama and Theatre in Early Modern Europe. Leiden, The Netherlands: Brill. Burnett, Charles, and Nicholas Mann, eds. 2005. Britannia Latina: Latin in the Culture of Great Britain from the Middle Ages to the Twentieth Century. Warburg Institute Colloquia 8. London: Warburg Institute. Butterfield, David. 2011. "Neo-Latin". In A Blackwell Companion to the Latin Language. Edited by James Clackson, 303–18. Chichester, UK: Wiley-Blackwell. Churchill, Laurie J., Phyllis R. Brown, and Jane E. Jeffrey, eds. 2002. Women Writing in Latin: From Roman Antiquity to Early Modern Europe. Vol. 3, Early Modern Women Writing Latin. New York: Routledge. Coroleu, Alejandro. 2010. "Printing and Reading Italian Neo-Latin Bucolic Poetry in Early Modern Europe". Grazer Beitrage 27: 53–69. de Beer, Susanna, K. A. E. Enenkel, and David Rijser. 2009. The Neo-Latin Epigram: A Learned and Witty Genre. Supplementa Lovaniensia 25. Leuven, Belgium: Leuven Univ. Press. De Smet, Ingrid A. R. 1999. "Not for Classicists? The State of Neo-Latin Studies". Journal of Roman Studies 89: 205–9. Ford, Philip. 2000. "Twenty-Five Years of Neo-Latin Studies". Neulateinisches Jahrbuch 2: 293–301. Ford, Philip, Jan Bloemendal, and Charles Fantazzi, eds. 
2014. Brill's Encyclopaedia of the Neo-Latin World. Two vols. Leiden, The Netherlands: Brill. Godman, Peter, and Oswyn Murray, eds. 1990. Latin Poetry and the Classical Tradition: Essays in Medieval and Renaissance Literature. Oxford: Clarendon. Haskell, Yasmin, and Juanita Feros Ruys, eds. 2010. Latin and Alterity in the Early Modern Period. Arizona Studies in the Middle Ages and Renaissance 30. Tempe: Arizona Univ. Press. Helander, Hans. 2001. "Neo-Latin Studies: Significance and Prospects". Symbolae Osloenses 76.1: 5–102. IJsewijn, Jozef with Dirk Sacré. Companion to Neo-Latin Studies. Two vols. Leuven University Press, 1990–1998. Knight, Sarah, and Stefan Tilg, eds.
period cannot be precisely identified; however, the spread of secular education, the acceptance of humanistic literary norms, and the wide availability of Latin texts following the invention of printing, mark the transition to a new era of scholarship at the end of the 15th century. The end of the New Latin period is likewise indeterminate, but Latin as a regular vehicle of communicating ideas became rare after the first few decades of the 19th century, and by 1900 it survived primarily in international scientific vocabulary and taxonomy. The term "New Latin" came into widespread use towards the end of the 1890s among linguists and scientists. New Latin was, at least in its early days, an international language used throughout Catholic and Protestant Europe, as well as in the colonies of the major European powers. This area consisted of most of Europe, including Central Europe and Scandinavia; its southern border was the Mediterranean Sea, with the division more or less corresponding to the modern eastern borders of Finland, the Baltic states, Poland, Slovakia, Hungary and Croatia. Russia's acquisition of Kyiv in the later 17th century introduced the study of Latin to Russia. Nevertheless, the use of Latin in Orthodox eastern Europe did not reach high levels due to their strong cultural links to the cultural heritage of Ancient Greece and Byzantium, as well as Greek and Old Church Slavonic languages. Though Latin and New Latin are considered dead (having no native speakers), large parts of their vocabulary have seeped into English and several Germanic languages. In the case of English, about 60% of the lexicon can trace its origin to Latin, thus many English speakers can recognize New Latin terms with relative ease as cognates are quite common. History Beginnings New Latin was inaugurated as Renaissance Latin by the triumph of the humanist reform of Latin education, led by such writers as Erasmus, More, and Colet. 
Medieval Latin had been the practical working language of the Roman Catholic Church, taught throughout Europe to aspiring clerics and refined in the medieval universities. It was a flexible language, full of neologisms and often composed without reference to the grammar or style of classical (usually pre-Christian) authors. The humanist reformers sought both to purify Latin grammar and style, and to make Latin applicable to concerns beyond the ecclesiastical, creating a body of Latin literature outside the bounds of the Church. Attempts at reforming Latin use occurred sporadically throughout the period, becoming most successful in the mid-to-late 19th century. Height The Protestant Reformation (1520–1580), though it removed Latin from the liturgies of the churches of Northern Europe, may have advanced the cause of the new secular Latin. The period during and after the Reformation, coinciding with the growth of printed literature, saw the growth of an immense body of New Latin literature, on all kinds of secular as well as religious subjects. The heyday of New Latin was its first two centuries (1500–1700), when in the continuation of the Medieval Latin tradition, it served as the lingua franca of science, education, and to some degree diplomacy in Europe. Classic works such as Thomas More's Utopia and Newton's Principia Mathematica (1687) were written in the language. Throughout this period, Latin was a universal school subject, and indeed, the pre-eminent subject for elementary education in most of Europe and other places of the world that shared its culture. All universities required Latin proficiency (obtained in local grammar schools) to obtain admittance as a student. Latin was an official language of Poland—recognised and widely used between the 9th and 18th centuries, commonly used in foreign relations and popular as a second language among some of the nobility. 
Through most of the 17th century, Latin was also supreme as an international language of diplomatic correspondence, used in negotiations between nations and the writing of treaties, e.g. the peace treaties of Osnabrück and Münster (1648). As an auxiliary language to the local vernaculars, New Latin appeared in a wide variety of documents, ecclesiastical, legal, diplomatic, academic, and scientific. While a text written in English, French, or Spanish at this time might be understood by a significant cross section of the learned, only a Latin text could be certain of finding someone to interpret it anywhere between Lisbon and Helsinki. As late as the 1720s, Latin was still used conversationally, and was serviceable as an international auxiliary language between people of different countries who had no other language in common. For instance, the Hanoverian king George I of Great Britain (reigned 1714–1727), who had no command of spoken English, communicated in Latin with his Prime Minister Robert Walpole, who knew neither German nor French. Decline By about 1700, the growing movement for the use of national languages (already found earlier in literature and the Protestant religious movement) had reached academia, and an example of the transition is Newton's writing career, which began in New Latin and ended in English (e.g. Opticks, 1704). A much earlier example is Galileo c. 1600, some of whose scientific writings were in Latin, some in Italian, the latter to reach a wider audience. By contrast, while German philosopher Christian Wolff (1679–1754) popularized German as a language of scholarly instruction and research, and wrote some works in German, he continued to write primarily in Latin, so that his works could more easily reach an international audience (e.g., Philosophia moralis, 1750–53). Likewise, in the early 18th century, French replaced Latin as a diplomatic language, due to the commanding presence in Europe of the France of Louis XIV. 
At the same time, some (like King Frederick William I of Prussia) were dismissing Latin as a useless accomplishment, unfit for a man of practical affairs. The last international treaty to be written in Latin was the Treaty of Vienna in 1738; after the War of the Austrian Succession (1740–48) international diplomacy was conducted predominantly in French. A diminishing audience combined with diminishing production of Latin texts pushed Latin into a declining spiral from which it has not recovered. As it was gradually abandoned by various fields, and as less written material appeared in it, there was less of a practical reason for anyone to bother to learn Latin; as fewer people knew Latin, there was less reason for material to be written in the language. Latin came to be viewed as esoteric, irrelevant, and too difficult. As languages like French, Italian, German, and English became more widely known, use of a 'difficult' auxiliary language seemed unnecessary, while the argument that Latin could expand readership beyond a single nation was fatally weakened if, in fact, Latin readers did not compose a majority of the intended audience. As the 18th century progressed, the extensive literature in Latin being produced at the beginning slowly contracted. By 1800 Latin publications were far outnumbered, and often outclassed, by writings in the modern languages, a shift reinforced by the Industrial Revolution. Latin literature lasted longest in very specific fields (e.g. botany and zoology) where it had acquired a technical character, and where a literature available only to a small number of learned individuals could remain viable. By the end of the 19th century, Latin in some instances functioned less as a language than as a code capable of concise and exact expression, as for instance in physicians' prescriptions, or in a botanist's description of a specimen. In other fields (e.g. anatomy or law) where Latin had been widely used, it survived in technical phrases and terminology.
The perpetuation of Ecclesiastical Latin in the Roman Catholic Church through the 20th century can be considered a special case of the technicalizing of Latin, and the narrowing of its use to an elite class of readers. By 1900, creative Latin composition, for purely artistic purposes, had become rare. Authors such as Arthur Rimbaud and Max Beerbohm wrote Latin verse, but these texts were either school exercises or occasional pieces. The last survivals of New Latin to convey non-technical information appear in the use of Latin to cloak passages and expressions deemed too indecent (in the 19th century) to be read by children, the lower classes, or (most) women. Such passages appear in translations of foreign texts and in works on folklore, anthropology, and psychology, e.g. Krafft-Ebing's Psychopathia Sexualis (1886). Crisis and transformation Latin as a language held a place of educational pre-eminence until the second half of the 19th century. At that point its value was increasingly questioned; in the 20th century, educational philosophies such as that of John Dewey dismissed its relevance. At the same time, the philological study of Latin appeared to show that the traditional methods and materials for teaching Latin were dangerously out of date and ineffective. In secular academic use, however, New Latin declined sharply and then continuously after about 1700. Although Latin texts continued to be written throughout the 18th and into the 19th century, their number and their scope diminished over time. By 1900, very few new texts were being created in Latin for practical purposes, and the production of Latin texts had become little more than a hobby for Latin enthusiasts. Around the beginning of the 19th century came a renewed emphasis on the study of Classical Latin as the spoken language of the Romans of the 1st centuries BC and AD. 
This new emphasis, similar to that of the Humanists but based on broader linguistic, historical, and critical studies of Latin literature, led to the exclusion of Neo-Latin literature from academic studies in schools and universities (except for advanced historical language studies); to the abandonment of New Latin neologisms; and to an increasing interest in the reconstructed Classical pronunciation, which displaced the several regional pronunciations in Europe in the early 20th century. Coincident with these changes in Latin instruction, and to some degree motivating them, came a concern about lack of Latin proficiency among students. Latin had already lost its privileged role as the core subject of elementary instruction; and as education spread to the middle and lower classes, it tended to be dropped altogether. By the mid-20th century, even the trivial acquaintance with Latin typical of the 19th-century student was a thing of the past. Relics Ecclesiastical Latin, the form of New Latin used in the Roman Catholic Church, remained in use throughout the period and after. Until the Second Vatican Council of 1962–65 all priests were expected to have competency in it, and it was studied in Catholic schools. It is today still the official language of the Church, and all Catholic priests of the Latin liturgical rites are required by canon law to have competency in the language. Use of Latin in the Mass, largely abandoned through the later 20th century, has recently seen a resurgence due in large part to Pope Benedict XVI's 2007 motu proprio Summorum Pontificum and its use by traditional Catholic priests and their organizations. New Latin is also the source of the biological system of binomial nomenclature and classification of living organisms devised by Carl Linnaeus, although the rules of the ICZN allow the construction of names that deviate considerably from historical norms. (See also classical compounds.) 
Another continuation is the use of Latin names for the surface features of planets and planetary satellites (planetary nomenclature), originated in the mid-17th century for selenographic toponyms. New Latin has also contributed a vocabulary for specialized fields such as anatomy and law; some of these words have become part of the normal, non-technical vocabulary of various European languages. Pronunciation New Latin had no single pronunciation, but a host of local variants or dialects, all distinct both from each other and from the historical pronunciation of Latin at the time of the Roman Republic and Roman Empire. As a rule, the local pronunciation of Latin used sounds identical to those of the dominant local language, the result of a concurrently evolving pronunciation in the living languages and the corresponding spoken dialects of Latin. Despite this variation, there are some common characteristics to nearly all of the dialects of New Latin, for instance: The use of a sibilant fricative or affricate in place of a stop for the letters c and sometimes g, when preceding a front vowel. The use of a sibilant fricative or affricate for the letter t when not at the beginning of the first syllable and preceding an unstressed i followed by a vowel. The use of a labiodental fricative for most instances of the letter v (or consonantal u), instead of the classical labiovelar approximant /w/. A tendency for medial s to be voiced to /z/, especially between vowels. The merger of æ and œ with e, and of y with i. The loss of the distinction between short and long vowels, with such vowel distinctions as remain being dependent upon word-stress. The regional dialects of New Latin can be grouped into families, according to the extent to which they share common traits of pronunciation.
p. 75: the above definition of an ordinal number also makes it impossible to have α ∈ α, where α is an ordinal number. That's because α ∈ α implies α = s(α). This gives us α ∈ α = s(α) = {η ∈ α : η < α}, which implies α < α, which implies α ≠ α (because < is strict), which is impossible. Errata p. 4, line 18: "Cain and Abel" should be "Seth, Cain and Abel". p. 30, line 10: "x onto y" should be "x into y". p. 73, line 19: "for each z in X" should be "for each a in X". p. 75, line 3: "if and only if x ∈ F(n)" should be "if and only if x = {b: S(n, b)}". See also List of publications in mathematics Bibliography Halmos, Paul, Naive Set Theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974 (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011 (paperback edition). References External links A list
set, because ≠ ∅ and is not the successor of any natural number. But is not a subset of − {}, contradicting the definition of as a subset of every successor set. p. 47: Halmos proves the lemma that "no natural number is a subset of any of its elements." This lets us prove that no natural number can contain itself. For if ∈ , where is a natural number, then ⊂ ∈ , which contradicts the lemma. p. 75: "An ordinal number is defined as a well ordered set such that for all in ; here is, as before, the initial segment ∈ < }." The well ordering is defined as follows: if and are elements of an ordinal number , then < means ∈ (pp. 75-76). By his choice of the symbol < instead of ≤, Halmos implies that the well ordering < is strict (pp. 55-56). This definition of < makes it impossible to have ∈ , where is an element of an ordinal number. That's because ∈ means < , which implies ≠ (because < is strict), which is impossible. p.
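Both p. 75 impossibility arguments share the same skeleton; restated compactly (the symbol ξ for the self-membered set is my own notational choice):

```latex
% Self-membership collapses to a violation of strictness:
\[
\xi \in \xi \;\Longrightarrow\; \xi < \xi \;\Longrightarrow\; \xi \neq \xi ,
\]
% contradicting \(\xi = \xi\); hence neither an ordinal number nor any of
% its elements can be a member of itself.
```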
nitrogen fixation or diazotrophy is an important microbially mediated process that converts dinitrogen (N2) gas to ammonia (NH3) using the nitrogenase protein complex (Nif). Nitrogen fixation is essential to life because fixed inorganic nitrogen compounds are required for the biosynthesis of all nitrogen-containing organic compounds, such as amino acids and proteins, nucleoside triphosphates and nucleic acids. As part of the nitrogen cycle, it is essential for agriculture and the manufacture of fertilizer. It is also, indirectly, relevant to the manufacture of all nitrogen chemical compounds, which includes some explosives, pharmaceuticals, and dyes. Nitrogen fixation is carried out naturally in soil by microorganisms termed diazotrophs that include bacteria such as Azotobacter and archaea. Some nitrogen-fixing bacteria have symbiotic relationships with plant groups, especially legumes. Looser non-symbiotic relationships between diazotrophs and plants are often referred to as associative, as seen in nitrogen fixation on rice roots. Nitrogen fixation occurs between some termites and fungi. It occurs naturally in the air by means of NOx production by lightning. All biological reactions involving the process of nitrogen fixation are catalysed by enzymes called nitrogenases. These enzymes contain iron, often with a second metal, usually molybdenum but sometimes vanadium. History Biological nitrogen fixation was discovered by Jean-Baptiste Boussingault in 1838. Later, in 1880, the process by which it happens was discovered by German agronomist Hermann Hellriegel and was fully described by Dutch microbiologist Martinus Beijerinck. "The protracted investigations of the relation of plants to the acquisition of nitrogen begun by Saussure, Ville, Lawes and Gilbert and others culminated in the discovery of symbiotic fixation by Hellriegel and Wilfarth in 1887." 
"Experiments by Bossingault in 1855 and Pugh, Gilbert & Lawes in 1887 had shown that nitrogen did not enter the plant directly. The discovery of the role of nitrogen fixing bacteria by Herman Hellriegel and Herman Wilfarth in 1886-8 would open a new era of soil science." In 1901 Beijerinck showed that Azotobacter chroococcum was able to fix atmospheric nitrogen. This was the first species of the Azotobacter genus, so named by him. It is also the first known diazotroph, a species that uses diatomic nitrogen as a step in the complete nitrogen cycle. Biological Biological nitrogen fixation (BNF) occurs when atmospheric nitrogen is converted to ammonia by a nitrogenase enzyme. The overall reaction for BNF is: N2 + 16 ATP + 16 H2O + 8 e− + 8 H+ → 2 NH3 + H2 + 16 ADP + 16 Pi The process is coupled to the hydrolysis of 16 equivalents of ATP and is accompanied by the co-formation of one equivalent of H2. The conversion of N2 into ammonia occurs at a metal cluster called FeMoco, an abbreviation for the iron-molybdenum cofactor. The mechanism proceeds via a series of protonation and reduction steps wherein the FeMoco active site hydrogenates the substrate. In free-living diazotrophs, nitrogenase-generated ammonia is assimilated into glutamate through the glutamine synthetase/glutamate synthase pathway. The microbial nif genes required for nitrogen fixation are widely distributed in diverse environments. For example, decomposing wood, which generally has a low nitrogen content, has been shown to host a diazotrophic community. The bacteria enrich the wood substrate with nitrogen through fixation, thus enabling deadwood decomposition by fungi. Nitrogenases are rapidly degraded by oxygen. For this reason, many bacteria cease production of the enzyme in the presence of oxygen. Many nitrogen-fixing organisms exist only in anaerobic conditions, respiring to draw down oxygen levels, or binding the oxygen with a protein such as leghemoglobin. 
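As a sanity check on the stoichiometry above, the redox core of the nitrogenase reaction balances in both atoms and charge once the ATP/ADP/Pi energy bookkeeping is set aside (ATP hydrolysis supplies energy, not nitrogen or net charge). A short script, with formulas written as element-count dictionaries; the helper name and data layout are illustrative, not from any chemistry library:

```python
from collections import Counter

# Element and charge bookkeeping for the redox core of the nitrogenase
# reaction:  N2 + 8 H+ + 8 e-  ->  2 NH3 + H2
def totals(side, electrons=0):
    """Sum element counts and charge over (formula, coefficient, charge) terms."""
    atoms, charge = Counter(), -electrons  # each electron contributes -1 charge
    for formula, coeff, q in side:
        for element, n in formula.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

N2, NH3, H2 = {"N": 2}, {"N": 1, "H": 3}, {"H": 2}
PROTON = {"H": 1}

lhs_atoms, lhs_charge = totals([(N2, 1, 0), (PROTON, 8, +1)], electrons=8)
rhs_atoms, rhs_charge = totals([(NH3, 2, 0), (H2, 1, 0)])

assert lhs_atoms == rhs_atoms == Counter({"H": 8, "N": 2})  # atoms balance
assert lhs_charge == rhs_charge == 0                        # charge balances
```

The obligatory H2 by-product is what makes the hydrogen count close: 8 H+ supply the 6 hydrogens of 2 NH3 plus the 2 of H2.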
Importance of nitrogen Atmospheric nitrogen is inaccessible to most organisms, because its triple covalent bond is very strong. Life takes up fixed nitrogen in various ways. Considering atom acquisition, for every 100 atoms of carbon, roughly 2 to 20 atoms of nitrogen are assimilated. The atomic ratio of carbon (C) : nitrogen (N) : phosphorus (P) observed on average in planktonic biomass was originally described by Alfred Redfield. The Redfield Ratio, the stoichiometric relationship between C:N:P atoms, is 106:16:1. Nitrogenase The protein complex nitrogenase is responsible for catalyzing the reduction of nitrogen gas (N2) to ammonia (NH3). In Cyanobacteria, this enzyme system is housed in a specialized cell called the heterocyst. The production of the nitrogenase complex is genetically regulated, and the activity of the protein complex is dependent on ambient oxygen concentrations, and intra- and extracellular concentrations of ammonia and oxidized nitrogen species (nitrate and nitrite). Additionally, the combined concentrations of both ammonium and nitrate are thought to inhibit NFix, specifically when intracellular concentrations of 2-oxoglutarate (2-OG) exceed a critical threshold. The specialized heterocyst cell is necessary for the performance of nitrogenase as a result of its sensitivity to ambient oxygen. Nitrogenase consists of two proteins: a catalytic iron-dependent protein, commonly referred to as the MoFe protein, and a reducing iron-only protein (the Fe protein). There are three different iron-dependent proteins (molybdenum-dependent, vanadium-dependent, and iron-only), with all three nitrogenase protein variations containing an iron protein component. Molybdenum-dependent nitrogenase is the most commonly present nitrogenase. The different types of nitrogenase can be determined by the specific iron protein component. Nitrogenase is highly conserved. 
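The Redfield Ratio lends itself to a one-line estimate: given a carbon atom count, scale by 16/106 for nitrogen and by 1/106 for phosphorus. A hypothetical helper (the function name is my own, and real plankton deviate from the canonical ratio):

```python
# Estimate N and P atom counts in planktonic biomass from a carbon count
# using the Redfield Ratio C:N:P = 106:16:1.
REDFIELD = {"C": 106, "N": 16, "P": 1}

def redfield_estimate(carbon_atoms):
    """Return (nitrogen, phosphorus) atom counts implied by the ratio."""
    nitrogen = carbon_atoms * REDFIELD["N"] / REDFIELD["C"]
    phosphorus = carbon_atoms * REDFIELD["P"] / REDFIELD["C"]
    return nitrogen, phosphorus

redfield_estimate(106)  # (16.0, 1.0)
```

For the 100 carbon atoms mentioned above, the ratio predicts about 15 nitrogen atoms, squarely inside the 2-to-20 range quoted in the text.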
Gene expression through DNA sequencing can distinguish which protein complex is present in the microorganism and potentially being expressed. Most frequently, the nifH gene is used to identify the presence of molybdenum-dependent nitrogenase, followed by closely related nitrogenase reductases (component II) vnfH and anfH representing vanadium-dependent and iron-only nitrogenase, respectively. In studying the ecology and evolution of nitrogen-fixing bacteria, the nifH gene is the biomarker most widely used. nifH has two similar genes, anfH and vnfH, that also encode for the nitrogenase reductase component of the nitrogenase complex. Microorganisms Diazotrophs are widespread within domain Bacteria including cyanobacteria (e.g. the highly significant Trichodesmium and Cyanothece), as well as green sulfur bacteria, Azotobacteraceae, rhizobia and Frankia. Several obligately anaerobic bacteria fix nitrogen including many (but not all) Clostridium spp. Some archaea also fix nitrogen, including several methanogenic taxa, which are significant contributors to nitrogen fixation in oxygen-deficient soils. Cyanobacteria, commonly known as blue-green algae, inhabit nearly all illuminated environments on Earth and play key roles in the carbon and nitrogen cycles of the biosphere. In general, cyanobacteria can use various inorganic and organic sources of combined nitrogen, such as nitrate, nitrite, ammonium, urea, or some amino acids. Several cyanobacteria strains are also
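The marker-gene correspondence described above amounts to a lookup table; a minimal sketch (detecting these genes in real sequence data would require alignment or HMM tools, which this does not attempt):

```python
# Nitrogenase-reductase marker genes mapped to the nitrogenase variant
# whose presence they indicate, per the text above.
MARKER_TO_NITROGENASE = {
    "nifH": "molybdenum-dependent",
    "vnfH": "vanadium-dependent",
    "anfH": "iron-only",
}

def nitrogenase_variant(marker_gene):
    """Return the nitrogenase type a marker gene indicates."""
    return MARKER_TO_NITROGENASE.get(marker_gene, "unknown marker")

nitrogenase_variant("nifH")  # "molybdenum-dependent"
```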
person and are the basis of the differing navigational strategies. Some people use measures of distance and absolute directional terms (north, south, east, and west) in order to visualize the best pathway from point to point. The use of these more general, external cues as directions is considered part of an allocentric navigation strategy. Allocentric navigation is typically seen in males and is beneficial primarily in large and/or unfamiliar environments. This likely has some basis in evolution, when males would have to navigate through large and unfamiliar environments while hunting. The use of allocentric strategies when navigating primarily activates the hippocampus and parahippocampus in the brain. This navigation strategy relies more on a mental, spatial map than on visible cues, giving it an advantage in unknown areas as well as the flexibility to be used in smaller environments. The fact that it is mainly males who favor this strategy is likely related to the generalization that males are better navigators than females, since it can be applied in a greater variety of settings. Egocentric navigation relies on more local landmarks and personal directions (left/right) to navigate and visualize a pathway. This reliance on more local and well-known stimuli for finding their way makes it difficult to apply in
in an oasis in the middle of a barren desert; his estate becomes the scene of a family feud that continues for generations. "Whenever someone is depressed, suffering or humiliated, he points to the mansion at the top of the alley at the end opening out to the desert, and says sadly, 'That is our ancestor's house, we are all his children, and we have a right to his property. Why are we starving? What have we done?'" The book was banned throughout the Arab world except in Lebanon until 2006, when it was first published in Egypt. The work was prohibited because of its alleged blasphemy through the allegorical portrayal of God and the monotheistic Abrahamic faiths of Judaism, Christianity, and Islam. In his existentialist novels of the 1960s, Mahfouz further developed the theme that humanity is moving further away from God. In The Thief and the Dogs (1961) he depicted the fate of a Marxist thief who has been released from prison and plans revenge. In the 1960s and 1970s Mahfouz began to construct his novels more freely and often used interior monologues. In Miramar (1967) he employed a form of multiple first-person narratives. Four narrators, among them a Socialist and a Nasserite opportunist, represent different political views. In the center of the story is an attractive servant girl. In Arabian Nights and Days (1979) and in The Journey of Ibn Fatouma (1983) he drew on traditional Arabic narratives as subtexts. Akhenaten: Dweller in Truth (1985) deals with conflict between old and new religious truths. Many of his novels were first published in serialized form, including Children of Gebelawi and Midaq Alley, which was also adapted into a Mexican film starring Salma Hayek, El callejón de los milagros. Political influence Most of Mahfouz's writings deal mainly with politics, a fact he acknowledged: "In all my writings, you will find politics. You may find a story which ignores love or any other subject, but not politics; it is the very axis of our thinking". 
He espoused Egyptian nationalism in many of his works, and expressed sympathies for the post-World-War-era Wafd Party. He was also attracted to socialist and democratic ideals early in his youth. The influence of socialist ideals is strongly reflected in his first two novels, Al-Khalili and New Cairo, as well as many of his later works. Parallel to his sympathy for socialism and democracy was his antipathy towards Islamic extremism. In his youth, Mahfouz had personally known Sayyid Qutb when Qutb was showing a greater interest in literary criticism than in Islamic fundamentalism; Qutb later became a significant influence on the Muslim Brotherhood. In the mid-1940s, Qutb was one of the first critics to recognize Mahfouz's talent, and by the 1960s, near the end of Qutb's life, Mahfouz even visited him in the hospital. But later, in the semi-autobiographical novel Mirrors, Mahfouz drew a negative portrait of Qutb. He was disillusioned with the 1952 revolution and by Egypt's defeat in the 1967 Six-Day War. He had supported the principles of the revolution, but became disenchanted, saying that the practices failed to live up to the original ideals. Mahfouz's writing influenced a new generation of Egyptian lawyers, including Nabil Mounir and Reda Aslan. Reception Mahfouz's translated works received praise from American critics: "The alleys, the houses, the palaces and mosques and the people who live among them are evoked as vividly in Mahfouz's work as the streets of London were conjured by Dickens." —Newsweek "Throughout Naguib Mahfouz's fiction there is a pervasive sense of metaphor, of a literary artist who is using his fiction to speak directly and unequivocally to the condition of his country. His work is imbued with love for Egypt and its people, but it is also utterly honest and unsentimental." —Washington Post "Mahfouz's work is freshly nuanced and hauntingly lyrical. The Nobel Prize acknowledges the universal significance of [his] fiction." 
—Los Angeles Times "Mr. Mahfouz embodied the essence of what makes the bruising, raucous, chaotic human anthill of Cairo possible." —The Economist Nobel Prize for Literature Mahfouz was awarded the 1988 Nobel Prize in Literature, the only Arab writer to have won the award. Shortly after winning the prize Mahfouz was quoted as saying: The Swedish letter to Mahfouz praised his "rich and complex work": Because Mahfouz found traveling to Sweden difficult at his age, he did not attend the award ceremony. Political involvement Mahfouz did not shrink from controversy outside of his work. As a consequence of his support for Sadat's Camp David peace treaty with Israel in 1978, his books were banned in many Arab countries until after he won the Nobel Prize. Like many Egyptian writers and intellectuals, Mahfouz was on an Islamic fundamentalist "death list". He defended British-Indian writer Salman Rushdie after Ayatollah Ruhollah Khomeini condemned Rushdie to death in a 1989 fatwa, but also criticized Rushdie's novel The Satanic Verses as "insulting" to Islam. Mahfouz believed in freedom of expression, and, although he did not personally agree with Rushdie's work, he spoke out against the fatwa condemning him to death for it. In 1989, after Ayatollah Khomeini's fatwa calling for Rushdie and his publishers to be killed, Mahfouz called Khomeini a terrorist. Shortly after, Mahfouz joined 80 other intellectuals in declaring that "no blasphemy harms Islam and Muslims so much as the call for murdering a writer." Assassination attempt and aftermath The publication of The Satanic Verses revived the controversy surrounding Mahfouz's novel Children of Gebelawi. Death threats against Mahfouz followed, including one from the "blind sheikh," Egyptian-born Omar Abdul-Rahman. Mahfouz was given police protection, but in 1994 an extremist succeeded in attacking the 82-year-old novelist by stabbing him in the neck outside his Cairo home. 
He survived, permanently affected by damage to nerves of his right upper limb. After the incident Mahfouz was unable to write for more than a few minutes a day and consequently produced fewer and fewer works. Subsequently, he lived under constant bodyguard protection. Finally, in the beginning of 2006, the novel was published in Egypt with a preface written by Ahmad Kamal Aboul-Magd. After the threats, Mahfouz stayed in Cairo with his lawyer, Nabil Mounir Habib. Mahfouz and Mounir would spend most of their time in Mounir's office; Mahfouz used Mounir's library as a reference for most of his books. Mahfouz stayed with Mounir until his death. Personal life Mahfouz remained a bachelor until age 43 because he believed that, with its numerous restrictions and limitations, marriage would hamper his literary future. "I was afraid of marriage . . . especially when I saw how busy my brothers and sisters were with social events because of it. This one went to visit people, that one invited people. I had the impression that married life would take up all my time. I saw myself drowning in visits and parties. No freedom." However, in 1954, he quietly married a Coptic Orthodox woman from Alexandria, Atiyyatallah Ibrahim, with whom he had two daughters, Fatima and Umm Kalthum. The couple initially lived on a houseboat in the Agouza section of Cairo on the west bank of the Nile, then moved to an apartment along the river in the same area. Mahfouz avoided public exposure, especially inquiries into his private life, which might have become, as he put it, "a silly topic in journals and radio programs." Mahfouz distinctly did not like to travel. Belgrade was one of the few cities to which he gladly went and he expressed great respect for Serbia. Works A translation into Arabic of James Baikie's Ancient Egypt (1932) مصر القديمة Whisper of Madness (1938) همس الجنون Mockery of the Fates (1939) عبث الأقدار. His first full-length novel, translated title in English Khufu's Wisdom. 
Rhadopis of Nubia (1943) رادوبيس The Struggle of Thebes (1944) كفاح طيبة Cairo Modern (1945) القاهرة الجديدة Khan al-Khalili (1945) خان الخليلي Midaq Alley (1947) زقاق المدق The Mirage (1948) السراب The Beginning and the End (1949) بداية ونهاية Palace Walk (1956) بين القصرين (Cairo Trilogy, Part 1) Palace of Desire (1957) قصر الشوق (Cairo Trilogy, Part
officials). Because a client was beholden to his patron for his position, the client was eager to please his patron by carrying out his policies. The Soviet power structure essentially consisted (according to its critics) of groups of vassals (clients) who had an overlord (the patron). The higher the patron, the more clients the patron had. Patrons protected their clients and tried to promote their careers. In return for the patron's efforts to promote their careers, the clients remained loyal to their patron. Thus, by promoting his clients' careers, the patron could advance his own power. Party's appointment authority The nomenklatura system arose early in Soviet history. Vladimir Lenin wrote that appointments were to take the following criteria into account: reliability, political attitude, qualifications, and administrative ability. Joseph Stalin, who was the first general secretary of the party, was also known as "Comrade File Cabinet" (Tovarishch Kartotekov) for his assiduous attention to the details of the party's appointments. Seeking to make appointments in a more systematic fashion, Stalin built the party's patronage system and used it to distribute his clients throughout the party bureaucracy. Under Stalin's direction in 1922, the party created departments of the Central Committee and other organs at lower levels that were responsible for the registration and appointment of party officials. Known as uchraspred, these organs supervised appointments to important party posts. According to American sovietologist Seweryn Bialer, after Leonid Brezhnev's accession to power in October 1964, the party considerably expanded its appointment authority. However, in the late 1980s, some official statements indicated that the party intended to reduce its appointment authority, particularly in the area of economic management, in line with Mikhail Gorbachev's reform efforts. 
At the all-union level, the Party Building and Cadre Work Department supervised party nomenklatura appointments. This department maintained records on party members throughout the country, made appointments to positions on the all-union level, and approved nomenklatura appointments on the lower levels of the hierarchy. The head of this department sometimes was a member of the Secretariat and was often a protégé of the general secretary. Every party committee and party organizational department, from the all-union level in Moscow to the district and city levels, prepared two lists according to their needs. The basic (osnovnoi) list detailed positions in the political, administrative, economic, military, cultural, and educational bureaucracies that the committee and its department had responsibility for filling. The registered (uchetnyi) list enumerated the persons suitable for these positions. Patron–client relations An official in the party or government bureaucracy could not advance in the nomenklatura without the
the extent that he placed his clients in positions of power and influence. The ideal for the general secretary, writes Soviet émigré observer Michael Voslensky, "is to be overlord of vassals selected by oneself." Several factors explain the entrenchment of patron–client relations. Firstly, in a centralized government system, promotion in the bureaucratic-political hierarchy was the only path to power. Secondly, the most important criterion for promotion in this hierarchy was approval from one's supervisors, who evaluated their subordinates on the basis of political criteria and their ability to contribute to the fulfillment of the economic plan. Thirdly, political rivalries were present at all levels of the party and state bureaucracies but were especially prevalent at the top. Power and influence decided the outcomes of these struggles, and the number and positions of one's clients were critical components of that power and influence. Fourthly, because fulfillment of the economic plan was decisive, systemic pressures led officials to conspire together and use their ties to achieve that goal. The faction led by Brezhnev provides a good case study of patron–client relations in the Soviet system. Many members of the Brezhnev faction came from Dnipropetrovsk, where Brezhnev had served as first secretary of the provincial party organization. Andrei P. Kirilenko, a Politburo member and Central Committee secretary under Brezhnev, was first secretary of the regional committee of Dnipropetrovsk. Volodymyr Shcherbytsky, named as first secretary of the Ukrainian apparatus under Brezhnev, succeeded Kirilenko in that position. Nikolai Alexandrovich Tikhonov, appointed by Brezhnev as first deputy chairman of the Soviet Union's Council of Ministers, graduated from the Dnipropetrovsk Metallurgical Institute, and presided over the economic council of Dnipropetrovsk Oblast. 
Finally, Nikolai Shchelokov, minister of internal affairs under Brezhnev, was a former chairman of the Dnipropetrovsk soviet. Patron–client relations had implications for policy making in the party and government bureaucracies. Promotion of trusted subordinates into influential positions facilitated policy formation and policy execution. A network of clients helped to ensure that a patron's policies could be carried out. In addition, patrons relied on their clients to provide an accurate flow of information on events throughout the country. This information assisted policymakers in ensuring that their programs were being implemented. The New Class Milovan Đilas, a critic of Stalin, wrote of the nomenklatura as the "new class" in his book The New Class: An Analysis of the Communist System, and he claimed that it was seen by ordinary citizens as a bureaucratic elite that enjoyed special privileges and had supplanted the earlier wealthy capitalist élites. Criticism Some Marxists, such as Ernest Mandel, have criticised Đilas and the theory of state capitalism: The hypothesis that the Soviet bureaucracy is a new ruling class does not correspond to a serious analysis of the real development and the real contradictions of Soviet society and economy in the last fifty years. Such a hypothesis must imply, from the point of view of historical materialism, that a new exploitative mode of production has
By 1984, Edelman would be ready to answer this question and combine it with his earlier ideas on degeneracy and somatic selection in the nervous system. Edelman would revisit this issue in Topobiology and combine it with an evolutionary approach, seeking a comprehensive theory of body plan formation and evolution. The regulator hypothesis In 1984, Edelman published his regulator hypothesis of CAM and SAM action in the development and evolution of the animal body plan. Edelman would reiterate this hypothesis in his Neural Darwinism book in support of the mechanisms for degenerate neuronal group formation in the primary repertoire. The regulator hypothesis was primarily concerned with the action of CAMs. He would later expand the hypothesis in Topobiology to include a much more diverse and inclusive set of morphoregulatory molecules. The evolutionary question Edelman realized that in order to truly complete Darwin's program, he would need to link the developmental question to the larger issues of evolutionary biology. "How is an answer to the developmental genetic question (q.v.) reconciled with the relatively rapid changes in form occurring in relatively short evolutionary times?" – Gerald M. Edelman, from the glossary of Topobiology The morphoregulator hypothesis Shortly after publishing his regulator hypothesis, Edelman expanded his vision of pattern formation during embryogenesis and sought to link it to a broader evolutionary framework. His first and foremost goal was to answer the developmental genetic question, followed by the evolutionary question, in a clear, consistent, and coherent manner. TNGS – the theory of neuronal group selection Edelman's motivation for developing the theory of neuronal group selection (TNGS) was to resolve "a number of apparent inconsistencies in our knowledge of the development, anatomy, and physiological function of the central nervous system." 
A pressing issue for Edelman was explaining perceptual categorization without reference to a central observing homunculus or "assuming that the world is prearranged in an informational fashion." To free himself of the demands, requirements, and contradictions of the information-processing model, Edelman proposes that perceptual categorization operates by the selection of neuronal groups organized into variant networks, whose responses are differentially amplified in conjunction with hedonic feedback over the course of experience, from within a massive population of neuronal groups confronted by a chaotic array of sensory input of differing degrees of significance and relevance to the organism. Edelman outright rejects the notion of a homunculus, describing it as a "close cousin of the developmental electrician and the neural decoder" – artifacts of the observer-centralized, top-down design logic of information-processing approaches. Edelman points out that "it is probably a safe guess that most neurobiologists would view the homunculus as well as dualist solutions (Popper and Eccles 1981) to the problems of subjective report as being beyond scientific consideration." Necessary criteria for a selectionist theory of higher brain function Edelman's first theoretical contribution to neural Darwinism came in 1978, when he proposed his theory of group selection and phasic reentrant signalling. Edelman lays out five necessary requirements that a biological theory of higher brain function must satisfy. The theory should be consistent with the fields of embryology, neuroanatomy, and neurophysiology. The theory should account for learning and memory, and temporal recall in a distributed system. The theory should account for how memory is updated on the basis of real-time experience. The theory should account for how higher brain systems mediate experience and action. The theory should account for the necessary, if not sufficient, conditions for the emergence of awareness. 
Organization of the TNGS theory Neural Darwinism organizes the explanation of TNGS into three parts – somatic selection, epigenetic mechanisms, and global functions. The first two parts are concerned with how variation emerges through the interaction of genetic and epigenetic events at the cellular level in response to events occurring at the level of the developing animal nervous system. The third part attempts to build a temporally coherent model of globally unitary cognitive function and behavior that emerges from the bottom up through the interactions of neuronal groups in real time. Edelman organized the key ideas of the TNGS theory into three main tenets: Primary repertoire – developmental formation and selection of neuronal groups; Secondary repertoire – behavioral and experiential selection leading to changes in the strength of connections between the synaptic populations that bind together neuronal groups; Reentrant signaling – the synchronous entrainment of reciprocally connected neuronal groups within sensorimotor maps into ensembles of coherent global activity. The primary repertoire is formed during the period from the beginning of neurulation to the end of apoptosis. The secondary repertoire extends over the period of synaptogenesis and myelination, but continues to demonstrate developmental plasticity throughout life, albeit in a diminished fashion compared to early development. The two repertoires deal with the issue of the relationship between genetic and epigenetic processes in determining the overall architecture of the neuroanatomy – seeking to reconcile nature, nurture, and variability in forming the final phenotype of any individual nervous system. 
There is no point-to-point wiring that carries a neural code through a computational logic circuit that delivers the result to the brain because, firstly, the evidence does not unproblematically support such a notion; secondly, the noise in the system is too great for a neural code to be coherent; and thirdly, the genes can only contribute to, and constrain, developmental processes, not determine them in all their details. Variation is the inevitable outcome of developmental dynamics. Reentrant signalling is an attempt to explain how "coherent temporal correlations of the responses of sensory receptor sheets, motor ensembles, and interacting neuronal groups in different brain regions occur". Primary repertoire – developmental selection The first tenet of TNGS concerns events that are embryonic and run up to the neonatal period. This part of the theory attempts to account for the unique anatomical diversification of the brain even between genetically identical individuals. The first tenet proposes that a primary repertoire of degenerate neuronal groups, with diverse anatomical connections, is established via the historical contingencies of the primary processes of development. It seeks to provide an explanation of how the diversity of neuronal group phenotypes emerges from the organism's genotype via genetic and epigenetic influences that manifest themselves mechano-chemically at the cell surface and determine connectivity. Edelman lists the following as vital to the formation of the primary repertoire of neuronal groups and as contributing to their anatomical diversification and variation: Cell division – there are repeated rounds of cell division in the formation of neuronal populations. Cell death – there is an extensive amount of pre-programmed cell death that occurs via apoptosis within the neuronal populations. 
Process extension and elimination – the exploratory probing of the embryonic environment by developing neurons involves process extension and elimination as the neurons detect molecular gradients on neighboring cell surface membranes and the substrate of the extracellular matrix. CAM & SAM action – the mechanochemistry of cell and substrate adhesion molecules plays a key role in the migration and connectivity of neurons as they form neuronal groups within the overall distributed population. Two key questions with respect to this issue that Edelman seeks to answer "in terms of developmental genetic and epigenetic events" are: "How does a one-dimensional genetic code specify a three-dimensional animal?" "How is the answer to this question consistent with the possibility of relatively rapid morphological change in relatively short periods of evolutionary time?" Secondary repertoire – experiential selection The second tenet of TNGS regards postnatal events that govern the development of a secondary repertoire of synaptic connectivity between higher-order populations of neuronal groups, whose formation is driven by behavioral or experiential selection acting on synaptic populations within and between neuronal groups. Edelman's notion of the secondary repertoire borrows heavily from the work of Jean-Pierre Changeux and his associates Philippe Courrège and Antoine Danchin, and their theory of the selective stabilization of synapses. Synaptic modification Once the basic variegated anatomical structure of the primary repertoire of neuronal groups is laid down, it is more or less fixed. But given the numerous and diverse collection of neuronal group networks, there are bound to be functionally equivalent, albeit anatomically non-isomorphic, neuronal groups and networks capable of responding to certain sensory input. 
This creates a competitive environment in which neuronal groups proficient in their responses to certain inputs are "differentially amplified" through the enhancement of the synaptic efficacies of the selected neuronal group network. This leads to an increased probability that the same network will respond to similar or identical signals at a future time. This occurs through the strengthening of neuron-to-neuron synapses. These adjustments allow for neural plasticity along a fairly quick timetable. Reentry The third, and final, tenet of TNGS is reentry. Reentrant signalling "is based on the existence of reciprocally connected neural maps." These topobiological maps maintain and coordinate the real-time responses of multiple responding secondary repertoire networks, both unimodal and multimodal – and their reciprocal reentrant connections allow them to "maintain and sustain the spatiotemporal continuity in response to real-world signals." The last part of the theory attempts to explain how we experience spatiotemporal consistency in our interaction with environmental stimuli. Edelman called it "reentry" and proposed a model of reentrant signaling whereby disjunctive, multimodal samplings of the same stimulus event, correlated in time, make possible the sustained physiological entrainment of distributed neuronal groups into temporally stable global behavioral units of action or perception. Put another way, multiple neuronal groups can be used to sample a given stimulus set in parallel, and these disjunctive groups can communicate with one another at the cost of some incurred latency. The extended theory of neuronal group selection – the dynamic core hypothesis In the aftermath of his publication of Neural Darwinism, Edelman continued to develop and extend his TNGS theory as well as his regulator hypothesis. Edelman would deal with the morphological issues in Topobiology and begin to extend the TNGS theory in The Remembered Present. 
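The selectionist update described under "Synaptic modification" above – differential amplification of the neuronal groups that respond best to an input, raising the probability that the same network responds again – can be loosely illustrated as a toy simulation. This sketch is not Edelman's own formalism; the population size, selection of the top five responders, and the 1.1 amplification factor are invented for illustration.

```python
import random

random.seed(1)

# Toy population: each "neuronal group" has a fixed, randomly varied
# tuning to a stimulus (pre-existing diversity) and a modifiable
# synaptic efficacy, initially equal for all groups.
N_GROUPS = 50
tuning = [random.random() for _ in range(N_GROUPS)]
efficacy = [1.0] * N_GROUPS

def respond(stimulus_strength=1.0):
    """Return the indices of the most strongly responding groups."""
    responses = [t * e * stimulus_strength for t, e in zip(tuning, efficacy)]
    ranked = sorted(range(N_GROUPS), key=lambda i: responses[i], reverse=True)
    return ranked[:5]  # the "selected" groups

# Experiential selection: repeatedly present the stimulus and
# differentially amplify the efficacies of the selected groups.
first_winners = respond()
for _ in range(20):
    for i in respond():
        efficacy[i] *= 1.1  # strengthen synapses of selected groups only

# After selection, the same groups dominate the response: amplification
# has raised the probability that this network answers this input.
assert respond() == first_winners
```

Note that no group is instructed what the stimulus "means"; selection merely biases an already-varied population, which is the distinction the theory draws against instructionist models.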
Periodically over the intervening years, Edelman would release new updates on his theory and the progress made. In The Remembered Present, Edelman observed that the mammalian central nervous system seemed to have two distinct morphologically organized systems: one, the limbic-brain stem system, primarily dedicated to "appetitive, consummatory, and defensive behavior"; the other, the highly reentrant thalamocortical system, consisting of the thalamus along with the "primary and secondary sensory areas and association cortex", which is "linked strongly to exteroceptors and is closely and extensively mapped in a polymodal fashion." The limbic-brain stem system - the interior world of signals The neural anatomy of the hedonic feedback system resides in the brain stem, autonomic, endocrine, and limbic systems. This system communicates its evaluation of the visceral state to the rest of the central nervous system. Edelman calls this system the limbic-brain stem system. The thalamocortical system - the exterior world of signals The thalamus is the gateway to the neocortex for all senses except olfaction. The spinothalamic tracts bring sensory information from the periphery to the thalamus, where multimodal sensory information is integrated and triggers fast reflexive subcortical motor responses via the amygdala, basal ganglia, hypothalamus, and brainstem centers. Simultaneously, each sensory modality is also sent to the cortex in parallel for higher-order reflective analysis, multimodal sensorimotor association, and the engagement of the slow modulatory response that will fine-tune the subcortical reflexes. The
TNGS is most commonly referred to as the theory of neural Darwinism, although TNGS has roots going back to Edelman and Mountcastle's 1978 book, The Mindful Brain – Cortical Organization and the Group-selective Theory of Higher Brain Function – in which Edelman's colleague, the American neurophysiologist and anatomist Vernon B. Mountcastle (July 15, 1918 – January 11, 2015), describes the columnar structure of the cortical groups within the neocortex, while Edelman develops his argument for selective processes operating among degenerate primary repertoires of neuronal groups. The development of neural Darwinism was deeply influenced by Edelman's work in the fields of immunology, embryology, and neuroscience, as well as his methodological commitment to the idea of selection as the unifying foundation of the biological sciences. Introduction to neural Darwinism Neural Darwinism is the neural part of the natural-philosophical and explanatory framework Edelman employs for much of his work: somatic selective systems. Neural Darwinism is the backdrop for a comprehensive set of biological hypotheses and theories that Edelman and his team devised, seeking to reconcile vertebrate and mammalian neural morphology, the facts of developmental and evolutionary biology, and the theory of natural selection into a detailed model of real-time neural and cognitive function that is biological in its orientation and built from the bottom up, utilizing the variation that shows up in nature, in contrast to computational and algorithmic approaches that view variation as noise in a system of logic circuits with point-to-point connectivity. The book Neural Darwinism – The Theory of Neuronal Group Selection (1987) is the first in a trilogy of books that Edelman wrote to delineate the scope and breadth of his ideas on how a biological theory of consciousness and animal body plan evolution could be developed in a bottom-up fashion. 
The trilogy proceeds in accordance with the principles of population biology and Darwin's theory of natural selection, as opposed to the top-down algorithmic and computational approaches that dominated the nascent cognitive psychology of the time. The other two volumes are Topobiology – An Introduction to Molecular Embryology (1988), with its morpho-regulatory hypothesis of animal body plan development and evolutionary diversification via differential expression of cell surface molecules during development, and The Remembered Present – A Biological Theory of Consciousness (1989), a novel biological approach to understanding the role and function of "consciousness" and its relation to cognition and behavioral physiology. Edelman would write four more books for the lay public, explaining his ideas on how the brain works and how consciousness arises from the physical organization of the brain and body – Bright Air, Brilliant Fire – On the Matter of the Mind (1992), A Universe of Consciousness – How Matter Becomes Imagination (2000) with Giulio Tononi, Wider Than The Sky – The Phenomenal Gift of Consciousness (2004), and Second Nature – Brain Science and Human Knowledge (2006). Neural Darwinism is an exploration of biological thought and philosophy as well as fundamental science, Edelman being well-versed in the history of science, natural philosophy, and medicine, as well as robotics, cybernetics, computing, and artificial intelligence. In the course of laying out the case for neural Darwinism, or more properly TNGS, Edelman delineates a set of concepts for rethinking the problem of nervous system organization and function, all the while demanding rigorously scientific criteria for building the foundation of a properly Darwinian, and therefore biological, explanation of neural function, perception, cognition, and global brain function capable of supporting primary and higher-order consciousness. 
Population thinking – somatic selective systems Edelman was inspired by the successes of fellow Nobel laureate Frank MacFarlane Burnet and his clonal selection theory (CST) of acquired antigen immunity by differential amplification of pre-existing variation within the finite pool of lymphocytes in the immune system. The population of variant lymphocytes within the body mirrored the variant populations of organisms in an ecology. Pre-existing diversity is the engine of adaptation in the evolution of populations. "It is clear from both evolutionary and immunological theory that in facing an unknown future, the fundamental requirement for successful adaptation is preexisting diversity". – Gerald M. Edelman (1978) Edelman recognizes the explanatory range of Burnet's utilization of Darwinian principles in describing the operations of the immune system, and generalizes the process to all cell populations of the organism. He also comes to view the problem as one of recognition and memory from a biological perspective, where the distinction and preservation of self versus non-self is vital to organismal integrity. Neural Darwinism, as TNGS, is a theory of neuronal group selection that retools the fundamental concepts of Darwin's and Burnet's theoretical approaches. Neural Darwinism describes the development and evolution of the mammalian brain and its functioning by extending the Darwinian paradigm into the body and nervous system. Antibodies and NCAM – the emerging understanding of somatic selective systems Edelman was a medical researcher, physical chemist, immunologist, and aspiring neuroscientist when he was awarded the 1972 Nobel Prize in Physiology or Medicine (shared with Rodney Porter of Great Britain). Edelman's part of the prize was for his work revealing the chemical structure of the vertebrate antibody by cleaving the covalent disulfide bridges that join the component chain fragments together, revealing a pair of two-domain light chains and four-domain heavy chains. 
Subsequent analysis revealed the terminal domains of both chains to be variable domains responsible for antigen recognition. The work of Porter and Edelman revealed the molecular and genetic foundations underpinning how antibody diversity is generated within the immune system. Their work supported earlier ideas about pre-existing diversity in the immune system put forward by the pioneering Danish immunologist Niels K. Jerne (December 23, 1911 – October 7, 1994), as well as the work of Frank MacFarlane Burnet describing how lymphocytes capable of binding to specific foreign antigens are differentially amplified by clonal multiplication of the selected preexisting variants following antigen discovery. Edelman would draw inspiration from the mechano-chemical aspects of antigen/antibody/lymphocyte interaction in relation to the recognition of self and non-self; the degenerate population of lymphocytes in their physiological context; and the bio-theoretical foundations of this work in Darwinian terms. By 1974, Edelman felt that immunology was firmly established on solid theoretical grounds descriptively, was ready for quantitative experimentation, and could be an ideal model for exploring evolutionary selection processes within an observable time period. His studies of immune system interactions developed in him an awareness of the importance of the cell surface and the membrane-embedded molecular mechanisms of interaction with other cells and substrates. Edelman would go on to develop his ideas of topobiology around these mechanisms and their genetic and epigenetic regulation under environmental conditions. In 1975, during a foray into molecular embryology and neuroscience, Edelman and his team isolated the first neural cell-adhesion molecule (N-CAM), one of the many molecules that hold the animal nervous system together. 
N-CAM turned out to be an important molecule in guiding the development and differentiation of neuronal groups in the nervous system and brain during embryogenesis. To the amazement of Edelman, genetic sequencing revealed that N-CAM was the ancestor of the vertebrate antibody, produced in the aftermath of a set of whole-genome duplication events at the origin of vertebrates that gave rise to the entire superfamily of immunoglobulin genes. Edelman reasoned that the N-CAM molecule, which is used for self-self recognition and adherence between neurons in the nervous system, gave rise to its evolutionary descendants, the antibodies, which evolved self-nonself recognition via antigen adherence at the origins of the vertebrate antibody-based immune system. If clonal selection was the way the immune system worked, perhaps it was ancestral and more general, operating in the embryo and nervous system as well. Variation in biological systems – degeneracy, complexity, robustness, and evolvability Degeneracy, and its relationship to variation, is a key concept in neural Darwinism. The more we deviate from an ideal form, the more we are tempted to describe the deviations as imperfections. Edelman, on the other hand, explicitly acknowledges the structural and dynamic variability of the nervous system. He likes to contrast the differences between redundancy in an engineered system and degeneracy in a biological system. He proceeds to demonstrate how the "noise" of the computational and algorithmic approach is actually beneficial to a somatic selective system, providing a wide, and degenerate, array of potential recognition elements. Edelman's argument is that in an engineered system: a known problem is confronted; a logical solution is devised; an artifice is constructed to implement the resolution to the problem. To ensure the robustness of the solution, critical components are replicated as exact copies. 
Redundancy provides a fail-safe backup in the event of catastrophic failure of an essential component, but it is the same response to the same problem once the substitution has been made. If the problem is predictable and known ahead of time, redundancy works optimally. But biological systems face an open and unpredictable arena of spacetime events of which they have no foreknowledge. It is here that redundancy fails – when the designed answer is to the wrong problem. Variation fuels degeneracy, and degeneracy provides somatic selective systems with more than one way to solve a problem, as well as the ability to solve more than one problem the same way. This property of degeneracy has the effect of making the system more adaptively robust in the face of unforeseen contingencies: when one particular solution fails unexpectedly, there are still other unaffected pathways that can be engaged to reach a comparable final outcome. Early on, Edelman spends considerable time contrasting degeneracy versus redundancy, bottom-up versus top-down processes, and selectionist versus instructionist explanations of biological phenomena. Rejection of computational models, codes, and point-to-point wiring Edelman was well aware of the earlier debate in immunology between the instructionists, who believed that the lymphocytes of the immune system learned, or were instructed, about the antigen and then devised a response, and the selectionists, who believed that the lymphocytes already contained the response to the antigen within the existing population, which was differentially amplified upon contact with the antigen. And he was well aware that the selectionists had the evidence on their side. Edelman's theoretical approach in Neural Darwinism was conceived in opposition to top-down algorithmic, computational, and instructionist approaches to explaining neural function. 
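Edelman's contrast between engineered redundancy and biological degeneracy can be loosely illustrated in code. This sketch and its function names are invented for illustration, not drawn from Edelman: redundant components are one solution repeated, while degenerate components differ in structure yet converge on the same outcome, so knocking one out leaves other routes to the result.

```python
# Redundancy: exact copies of a single engineered solution. Every copy
# embodies the same answer to the same anticipated problem.
def by_comparison(xs):
    return max(xs)

redundant = [by_comparison, by_comparison, by_comparison]

# Degeneracy: structurally different mechanisms that nonetheless yield
# the same outcome -- "more than one way to solve a problem".
def by_sorting(xs):
    return sorted(xs)[-1]

def by_negation(xs):
    return -min(-x for x in xs)

degenerate = [by_comparison, by_sorting, by_negation]

known_problem = [3, 1, 4, 1, 5]
assert all(f(known_problem) == 5 for f in redundant)
assert all(f(known_problem) == 5 for f in degenerate)

# Knock out one pathway: the degenerate system still reaches the same
# outcome through its remaining, structurally distinct routes.
surviving = [f for f in degenerate if f is not by_sorting]
assert all(f(known_problem) == 5 for f in surviving)
```

The asymmetry Edelman emphasizes does not show in the happy path: it appears when the environment poses a problem the designer never anticipated, where identical copies share a single failure mode while structurally distinct mechanisms need not.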
Edelman seeks to turn the problems of that paradigm to advantage instead, thereby highlighting the difference between bottom-up processes, as we see in biology, and top-down processes, as we see in engineering algorithms. He sees neurons as living organisms working in cooperative and competitive ways within their local ecology, and rejects models that cast the brain in terms of computer chips or logic gates in an algorithmically organized machine. Edelman's commitment to the Darwinian underpinnings of biology, his emerging understanding of the evolutionary relationships between the two molecules he had worked with, and his background in immunology led him to become increasingly critical and dissatisfied with attempts to describe the operation of the nervous system and brain in computational or algorithmic terms. Edelman explicitly rejects computational approaches to explaining biology as non-biological. Edelman acknowledges that there is a conservation of phylogenetic organization and structure within the vertebrate nervous system, but also points out that, locally, natural diversity, variation, and degeneracy abound. This variation within the nervous system is disruptive for theories based upon strict point-to-point connectivity, computation, or logical circuits based upon codes. Attempts to understand this noise present difficulties for top-down algorithmic approaches and deny the fundamental facts of the biological nature of the problem. Edelman perceived that the problematic and annoying noise of the computational circuit-logic paradigm could be reinterpreted from a population biology perspective, in which variation in the signal or architecture is actually, from a selectionist standpoint, the engine of ingenuity and robustness. 
Completing Darwin's program – the problems of evolutionary and developmental morphology In Topobiology, Edelman reflects upon Darwin's search for the connections between morphology and embryology in his theory of natural selection. He identifies four unresolved problems in the development and evolution of morphology that Darwin thought important: explaining the finite number of body plans manifested since the Precambrian; explaining large-scale morphological changes over relatively short periods of geological time; understanding body size and the basis of allometry; and explaining how adaptive fitness can account for the selection that leads to the emergence of complex body structures. Later, in Bright Air, Brilliant Fire, Edelman describes what he calls Darwin's program for obtaining a complete understanding of the rules of behavior and form in evolutionary biology. He identifies four necessary requirements: an account of the effects of heredity on behavior, and of behavior on heredity; an account of how selection influences behavior, and how behavior influences selection; an account of how behavior is enabled and constrained by morphology; and an account of how morphogenesis occurs in development and evolution. It is important to note that these requirements are stated not directly in terms of genes but of heredity. This is understandable, considering that Darwin himself appears not to have been directly aware of the importance of Mendelian genetics. Things had changed by the early 1900s: the neo-Darwinian synthesis had unified the population biology of Mendelian inheritance with Darwinian natural selection. By the 1940s, the theories had been shown to be mutually consistent and coherent with paleontology and comparative morphology. The theory came to be known as the modern synthesis, on the basis of the title of the 1942 book Evolution: The Modern Synthesis by Julian Huxley. The modern synthesis really took off with the discovery of the structural basis of heredity in the form of DNA. 
The modern synthesis was greatly accelerated and expanded with the rise of the genomic sciences and molecular biology, as well as advances in computational techniques and the power to model population dynamics. But for evolutionary-developmental biologists, something very important was missing: the incorporation of one of the founding branches of biology, embryology. A clear understanding of the pathway from germ to zygote to embryo to juvenile and adult was the missing component of the synthesis. Edelman and his team were positioned in time and space to capitalize fully on these technical developments and scientific challenges as his research progressed deeper and deeper into the cellular and molecular underpinnings of the neurophysiological aspects of behavior and cognition from a Darwinian perspective. Edelman reinterprets the goals of "Darwin's program" in terms of the modern understanding of genes, molecular biology, and other sciences that weren't available to Darwin. One of his goals is reconciling the
The Great Frog on Carnaby Street. While in London, he came across the writings of novelist and Objectivist Ayn Rand. Rand's writings became a significant early philosophical influence on Peart, as he found many of her writings on individualism and Objectivism inspiring. References to Rand's philosophy can be found in his early lyrics, most notably "Anthem" from 1975's Fly by Night and "2112" from 1976's 2112. After eighteen months Peart became disillusioned by his lack of progress in the music business; he placed his aspiration of becoming a professional musician on hold and returned to Canada. Upon returning to St. Catharines, he worked for his father selling tractor parts at Dalziel Equipment. Joining Rush After returning to Canada, Peart was recruited to play drums for a St. Catharines band known as J R Flood, who played on the Southern Ontario bar circuit. Soon after, a mutual acquaintance convinced Peart to audition for the Toronto-based band Rush, which needed a replacement for its original drummer John Rutsey. Geddy Lee and Alex Lifeson oversaw the audition. His future bandmates describe his arrival that day as somewhat humorous, as he arrived in shorts, driving a battered old Ford Pinto with his drums stored in trashcans. Peart felt the entire audition was a complete disaster. While Lee and Peart hit it off on a personal level (both sharing similar tastes in books and music), Lifeson had a less favourable impression of Peart. After some discussion between Lee and Lifeson, Peart officially joined the band on July 29, 1974, two weeks before the group's first US tour. Peart procured a silver Slingerland kit which he played at his first gig with the band, opening for Uriah Heep and Manfred Mann's Earth Band in front of over 11,000 people at the Civic Arena in Pittsburgh on August 14, 1974. Peart soon settled into his new position, also becoming the band's primary lyricist. 
Before joining Rush he had written a few songs, but, with the other members largely uninterested in writing lyrics, Peart's previously underutilized writing became as noticed as his musicianship. The band was working hard to establish itself as a recording act, and Peart, along with the rest of the band, began to undertake extensive touring. His first recording with the band, 1975's Fly by Night, was fairly successful, winning the Juno Award for most promising new act, but the follow-up, Caress of Steel, for which the band had high hopes, was greeted with hostility by both fans and critics. Peart answered this negative reception, most of which was aimed at the side-spanning epic "The Fountain of Lamneth", by penning "2112" for the 1976 album of the same name. The album, despite record company indifference, became their breakthrough and gained the band a following in the United States. The supporting tour culminated in a three-night stand at Massey Hall in Toronto, a venue Peart had dreamed of playing in his days on the Southern Ontario bar circuit and where he was introduced by Lee as "The Professor on the drum kit". Peart returned to England for Rush's Northern European Tour, and the band stayed in the United Kingdom to record the next album, 1977's A Farewell to Kings, at Rockfield Studios in Wales. They returned to Rockfield to record the follow-up, Hemispheres, in 1978, which they wrote entirely in the studio. The recording of five studio albums in four years, coupled with as many as 300 gigs a year, convinced the band to take a different approach thereafter. Peart described his time in the band up to this point as "a dark tunnel". Playing style reinvention In 1991, Peart was invited by Buddy Rich's daughter, Cathy Rich, to play at the Buddy Rich Memorial Scholarship Concert in New York City. Peart accepted and performed for the first time with the Buddy Rich Big Band. 
Peart remarked that he had little time to rehearse, and noted that he was embarrassed to find the band played a different arrangement of the song than the one he had learned. Feeling that his performance left much to be desired, Peart decided to produce and play on two Buddy Rich tribute albums, titled Burning for Buddy: A Tribute to the Music of Buddy Rich, in 1994 and 1997 in order to regain his aplomb. While producing the first Buddy Rich tribute album, Peart was struck by the tremendous improvement in ex-Journey drummer Steve Smith's playing, and asked him his "secret". Smith responded that he had been studying with drum teacher Freddie Gruber. In early 2007, Peart and Cathy Rich again began discussing another Buddy Rich tribute concert. At the recommendation of bassist Jeff Berlin, Peart decided to once again augment his swing style with formal drum lessons, this time under the tutelage of another pupil of Freddie Gruber, Peter Erskine, himself an instructor of Steve Smith. On October 18, 2008, Peart once again performed at the Buddy Rich Memorial Concert at New York's Hammerstein Ballroom. The concert has since been released on DVD. Family deaths and recovery On August 10, 1997, soon after the conclusion of Rush's Test for Echo Tour, Peart's daughter (and at the time his only child) Selena Taylor, 19, was killed in a single-car crash on Highway 401 near the town of Brighton, Ontario. His wife of 23 years, Jacqueline Taylor, subsequently died of cancer on June 20, 1998. Peart attributed her death to the result of a "broken heart" and called it "a slow suicide by apathy. She just didn't care." In his book Ghost Rider: Travels on the Healing Road, Peart wrote that he told his bandmates at Selena's funeral, "consider me retired". Peart took a long sabbatical to mourn and reflect, and travelled extensively throughout North and Central America on his motorcycle. After his journey, Peart decided to return to the band. 
Peart wrote the book as a chronicle of his geographical and emotional journey. Peart was introduced to photographer Carrie Nuttall in Los Angeles by long-time Rush photographer Andrew MacNaughtan. They married on September 9, 2000. In early 2001, Peart announced to his bandmates that he was ready to return to recording and performing. The product of the band's return was the 2002 album Vapor Trails. At the start of the ensuing tour in support of the album, the band members decided that Peart would not take part in the daily grind of press interviews and "meet and greet" sessions that typically monopolize a touring band's schedule upon its arrival in a new city. Peart had always shied away from these types of in-person encounters, and it was decided that exposing him to a lengthy stream of questions about the tragic events of his life was not necessary. After the release of Vapor Trails and his reunion with bandmates, Peart returned to work as a full-time musician. In the June 2009 edition of his website's News, Weather, and Sports column, titled "Under the Marine Layer", Peart announced that he and Nuttall were expecting their first child. Olivia Louise Peart was born later that year. In the mid-2010s, Peart acquired U.S. citizenship. Retirement from touring Peart described himself as a "retired drummer" in a December 2015 interview. Geddy Lee, however, clarified that his bandmate had been quoted out of context, and suggested Peart was simply taking a break, "explaining his reasons for not wanting to tour, with the toll that it's taking on his body." Peart had been suffering from chronic tendinitis and shoulder problems. In January 2018, Alex Lifeson confirmed that Rush was "basically done". Peart remained friends with his former bandmates. Death Peart died from glioblastoma, an aggressive form of brain cancer, on January 7, 2020, in Santa Monica, California. 
He had been diagnosed three and a half years earlier, and the illness was a closely guarded secret within Peart's inner circle until his death. His family made the announcement on January 10. Peart's death was widely lamented by fans and fellow musicians alike, who considered it a substantial loss for popular music. Peart's father, Glen, also died of cancer, on June 12, 2021. Musicianship Style and influences Peart's drumming skill and technique were well regarded by fans, fellow musicians, and music journalists. His influences were eclectic, ranging from Pete Thomas, John Bonham, Michael Giles, Ginger Baker, Phil Collins, Chris Sharrock, Steve Gadd, Stewart Copeland, Michael Shrieve and Keith Moon to fusion and jazz drummers Billy Cobham, Buddy Rich, Bill Bruford and Gene Krupa. The Who was the first group that inspired him to write songs and play the drums. Peart had long played matched grip but shifted to traditional grip as part of his style reinvention in the mid-1990s under the tutelage of jazz coach Freddie Gruber. He played traditional grip throughout his first instructional DVD, A Work in Progress, and on Rush's Test for Echo studio album. Peart later went back to using primarily matched grip, though he continued to switch to traditional at times when playing songs from Test for Echo and during moments when traditional grip felt more appropriate, such as during the rudimental snare drum section of his drum solo. He discussed the details of these switches in the DVD Anatomy of a Drum Solo. Variety wrote: "Widely considered one of the most innovative drummers in rock history, Peart was famous for his state-of-the-art drum kits – more than 40 different drums were not out of the norm – precise playing style and on stage showmanship." USA Today's writers compared him favorably with other top-shelf rock drummers. 
He was "considered one of the best rock drummers of all time, alongside John Bonham of Led Zeppelin; Ringo Starr of The Beatles; Keith Moon of The Who; Ginger Baker of Cream and Stewart Copeland of The Police." Known for his technical proficiency, he was inducted into the Modern Drummer Hall of Fame in 1983. Music critic Amanda Petrusich in The New Yorker wrote: "Watching Peart play the drums gave the impression that he might possess several phantom limbs. The sound was merciless." Equipment With Rush, Peart played Slingerland, Tama, Ludwig, and Drum Workshop drums, in that order. Fly by Night and Caress of Steel were recorded with a 5x14 Rogers Dynasonic snare, chrome over brass with 10 lugs. From 2112 to Counterparts, he used a 5 1/2 x 14 inch Slingerland "Artist" snare model (3-ply shell with 8 lugs). For the recording of Presto, he used Ludwig and Solid Percussion piccolo snare drums. Peart played Zildjian A-series cymbals and Wuhan china cymbals until the early 2000s, when he switched to Paragon, a line created for him by Sabian. In concert, starting in 1984 on the Grace Under Pressure Tour, Peart used an elaborate 360-degree drum kit that would rotate as he played different sections of the kit. During the late 1970s, Peart augmented his acoustic setup with diverse percussion instruments, including orchestra bells, tubular bells, wind chimes, crotales, timbales, timpani, gong, temple blocks, bell tree, triangle, and melodic cowbells. From the mid-1980s, Peart replaced several of these pieces with MIDI trigger pads, used to trigger sounds sampled from various pieces of acoustic percussion that would otherwise have consumed far too much stage area. Some purely electronic, non-instrumental sounds were also used. One classic MIDI pad he used was the Malletkat Express, a two-octave electronic MIDI device that resembles a xylophone or piano, with rubber pads for the "keys" so that any stick can be used. 
Beginning with 1984's Grace Under Pressure, he used Simmons electronic drums in conjunction with Akai digital samplers. Peart performed several songs primarily using the electronic portion of his drum kit (e.g. "Red Sector A", "Closer to the Heart" on A Show of Hands, and "Mystic Rhythms" on R30). Shortly after making the choice to include electronic drums and triggers, Peart added what became another trademark of his kit: a rotating drum riser. During live Rush shows, the riser allowed Peart to swap the prominent portions of the kit (traditional acoustic in front, electronic in back). A staple of Peart's live drum solos was the in-performance rotation-and-swap of the front and back kits as part of the solo, a special effect that provided a symbolic transition of drum styles within the solo. In the early 2000s, Peart began taking full advantage of advances in electronic drum technology, primarily incorporating Roland V-Drums alongside continued use of samplers with his existing set of acoustic percussion. His digitally sampled library of both traditional and exotic sounds expanded over the years with his music. In April 2006, Peart took delivery of his third DW set, configured similarly to the R30 set, in a Tobacco Sunburst finish over curly maple exterior ply, with chrome hardware. He referred to this set, which he used primarily in Los Angeles, as the "West Coast kit". Besides using it on recordings with Vertical Horizon, he played it while composing parts for Rush's album Snakes & Arrows. It featured a custom 23-inch bass drum; all other sizes remained the same as the R30 kit. On March 20, 2007, Peart revealed that Drum Workshop had prepared a new set of red-painted DW maple shells with black hardware and gold "Snakes & Arrows" logos for him to play on the Snakes & Arrows Tour. Peart also designed his own signature drumstick with Pro-Mark: the Promark PW747W Neil Peart Signature drumstick, made of Japanese white oak. 
During the 2010–11 Time Machine Tour, Peart used a new DW kit outfitted with copper-plated hardware and time machine designs to match the tour's steampunk theme. Matching Paragon cymbals with clock imagery were also used. Solos Peart was noted for his distinctive in-concert drum solos, characterized by exotic percussion instruments and long, intricate passages in odd time signatures. His complex arrangements sometimes resulted in complete separation of upper- and lower-limb patterns; an ostinato dubbed "The Waltz" was a typical example. His solos were featured on every live album released by the band. On the early live albums (All the World's a Stage & Exit... Stage Left), the drum solo was included as part of a song. On all subsequent live albums through Time Machine 2011: Live in Cleveland, the drum solo was included as a separate track. The Clockwork Angels Tour album includes three short solos instead of a single long one: two interludes played during other songs and one standalone. Similarly, the R40 Live album includes two short solos performed as interludes. Peart's instructional DVD Anatomy of a Drum Solo is an in-depth examination of how he constructed a solo that is musical rather than indulgent, using his solo from the 2004 R30 30th anniversary tour as an example. Lyricism Peart was the main lyricist for Rush. Literature heavily influenced his writings. In his early days with Rush, much of his lyrical output was influenced by fantasy, science fiction, mythology, and philosophy. The 1980 album Permanent Waves saw Peart cease to use fantasy and mythological themes. 1981's Moving Pictures showed that Peart was still interested in heroic, mythological figures, but now placed firmly in a modern, realistic context. The song "Limelight" from the same album is an autobiographical account of Peart's reservations regarding his own popularity and the pressures associated with fame. 
From Permanent Waves onward, most of Peart's lyrics began to revolve around social, emotional, and humanitarian issues, usually from an objective standpoint and employing metaphors and symbolic representation. 1984's Grace Under Pressure strung together such despondent topics as the Holocaust ("Red Sector A") and the death of close friends ("Afterimage"). 
Starting with 1987's Hold Your Fire and continuing through 1989's Presto, 1991's Roll the Bones, and 1993's Counterparts, Peart continued to explore diverse lyrical motifs, even addressing love and relationships ("Open Secrets", "Ghost of a Chance", "Speed of Love", "Cold Fire", "Alien Shore"), a subject he had purposefully avoided in the past out of fear of using clichés. 2002's Vapor Trails was heavily devoted to Peart's personal issues, along with other humanitarian topics such as the 9/11 terrorist attacks ("Peaceable Kingdom"). The album Snakes & Arrows dealt primarily and vociferously with Peart's opinions regarding faith and religion. The song "2112" focuses on the struggle of an individual against the collectivist forces of a totalitarian state. This became the band's breakthrough release, but also brought unexpected criticism, mainly because of the credit of inspiration Peart gave to Ayn Rand in the liner notes. "There was a remarkable backlash, especially from the English press, this being the late seventies, when collectivism was still in style, especially among journalists", Peart said. "They were calling us 'Junior fascists' and 'Hitler lovers'. It was a total shock to me". Regarding his seeming ideological fealty to Rand's philosophy of Objectivism, Peart said, "For a start, the extent of my influence by the writings of Ayn Rand should not be overstated. I am no one's disciple." The lyrics of "Faithless" exhibit a life stance that has been closely identified with secular humanism. Peart explicitly discussed his religious views in The Masked Rider: Cycling in West Africa, in which he wrote: "I'm a linear thinking agnostic, but not an atheist, folks." In 2007, Peart was ranked No. 2 (after Sting) on the now-defunct magazine Blender's list of "worst lyricists in rock". In contrast, AllMusic called him "one of rock's most accomplished lyricists". 
Political views For most of his career, Peart never publicly identified with any political party or organization in Canada or the United States. Even so, his political and philosophical views were often analyzed through his work with Rush and through other sources. In October 1993, shortly before that year's Canadian federal election, Peart appeared with then-Liberal Party leader Jean Chrétien in an interview broadcast in Canada on MuchMusic, but stated in that interview that he was an undecided voter. Peart was often categorized as an Objectivist and an admirer of Ayn Rand, largely on the basis of his work with Rush in the 1970s, particularly the song "Anthem" and the album 2112; the latter specifically credited Rand's work. However, in his 1994 Rush Backstage Club Newsletter, while contending that the "individual is paramount in matters of justice and liberty," Peart specifically distanced himself from a strictly Objectivist line. In a June 2012 Rolling Stone interview, when asked if Rand's words still spoke to him, Peart replied, "Oh, no. That was forty years ago. But it was important to me at the time in a transition of finding myself and having faith that what I believed was worthwhile." Although Peart was sometimes assumed to be a conservative or Republican rock star, he criticized the US Republican Party, stating that the philosophy of the party is "absolutely opposed to Christ's teachings." In 2005, he described himself as a "left-leaning libertarian", and he was often cited as a libertarian celebrity. In a 2015 interview with Rolling Stone, Peart stated that he saw the US Democratic Party as the lesser evil: "For a person of my sensibility, you’re only left with the Democratic party." Peart was a member of the Canadian charity Artists Against Racism and worked with them on a radio public service announcement. Bibliography Nonfiction Peart authored seven non-fiction books, the latest released in September 2016. 
Peart's first book, The Masked Rider: Cycling in West Africa, was written in 1996 about a month-long bicycling tour through Cameroon in November 1988. The book details Peart's travels through towns and villages with four fellow riders. The original had a limited print run, but after the critical and commercial success of Peart's second book, The Masked Rider was re-issued by ECW Press and remains in print. After losing his wife and (at the time) only daughter, Peart embarked on a lengthy motorcycle road trip spanning North America. His experiences were penned in Ghost Rider: Travels on the Healing Road. Peart and the rest of the band had always been able to keep his private life at a distance from his public image in Rush; Ghost Rider, however, is a first-person narrative of Peart on the road on a BMW R1100GS motorcycle, in an effort to put his life back together as he embarked on an extensive journey. Years later, after his marriage to Nuttall, Peart took another road trip, this time by car. In his third book, Traveling Music: Playing Back the Soundtrack to My Life and Times, he reflects on his life, his career, his family, and music. As with his previous two books, it is a first-person narrative. Three decades after Peart joined Rush, the band found itself on its 30th anniversary tour. Released in September 2006, Roadshow: Landscape with Drums – A Concert Tour by Motorcycle details the tour both from behind Peart's drum kit and on his BMW R1150GS and R1200GS motorcycles. Peart's next book, Far and Away: A Prize Every Time, was published by ECW Press in May 2011. This book, which he worked on for two years, is built around his travels in North and South America, including an account of how he found in a Brazilian town a unique combination of West African and Brazilian music. In 2014, a follow-up book, Far and Near: On Days like These, was published by ECW. It covers travels in North America and Europe. 
Another book, Far and Wide: Bring That Horizon to Me!, was published in 2016 and is based on his travels between stops on the R40 Live Tour of 2015. Nonfiction works include: The Masked Rider: Cycling in West Africa (1996, Pottersfield Press, ) Ghost Rider: Travels on the Healing Road (2002, ECW Press, ) Traveling Music: Playing Back the Soundtrack to My Life and Times (2004, ECW Press, ) Roadshow: Landscape with Drums – A Concert Tour by Motorcycle (2006, Rounder Books, ) Far and Away: A Prize Every Time (2011, ECW Press, ) Far and Near: On Days like These (2014, ECW Press, ) Far and Wide: Bring That Horizon to Me! (2016, ECW Press, ) Fiction Peart worked with science fiction author Kevin J. Anderson to develop a novelization of Rush's 2012 album Clockwork Angels; the book was published by ECW Press and debuted at No. 18 on the New York Times hardcover fiction best-seller list. The two collaborated again on a loose sequel, Clockwork Lives, published in 2015, which won the 2016 Colorado Book Award in the science fiction category. Snippets of the band's lyrics can be found throughout both stories. Graphic-novel adaptations of the first two Clockwork books were published in 2015 and 2019, respectively. During the years before his death, Peart worked with Anderson on Clockwork Destiny, published in 2022 through ECW Press. Fiction works include: "Drumbeats" with Kevin J. Anderson, short story published in Shock Rock II edited by Jeff Gelb (1994, Pocket Books, ). Drumbeats (September 2020, WordFire Press, , illustrated and expanded edition) Clockwork series: Clockwork Angels, written by Kevin J. Anderson, based on the story and lyrics by Neil Peart (2012, ECW Press, ) Clockwork Angels – The Graphic Novel, written by Kevin J. Anderson and Neil Peart, artwork by Nick Robles (2015, Boom! Studios, ) Clockwork Lives with Kevin J. Anderson (2015, ECW Press, ) Clockwork Lives – The Graphic Novel with Kevin J. Anderson (2019, Insight Editions, ) Clockwork Destiny with Kevin J. 
Anderson (April 2022, ECW Press, ) Side projects Jeff Berlin's 1985 album Champion – played drums on two songs, the title track and "Marabi". Vertical Horizon's 2009 album Burning the Days – drums on three songs, "Save Me from Myself", "Welcome to the Bottom", and "Even Now"; co-wrote "Even Now" with Matt Scannell. Vertical Horizon's 2013 album Echoes from the Underground – drums on two songs, "Instamatic" and "South for the Winter". Burning for Buddy: A Tribute to the Music of Buddy Rich (ASIN: B001208NUQ) Burning for Buddy: A Tribute to the Music of Buddy Rich, Vol. 2 (ASIN: B000002JD4) Peart had a brief cameo in the 2007 film Aqua Teen Hunger Force Colon Movie Film for Theaters, in which samples of his drumming were played. Peart also had a brief cameo in the 2008 film Adventures of Power and, in a DVD extra, takes part in a drum-off competition. Peart appeared in concert with Rush in the 2009 film I Love You, Man, as well as in a Funny or Die web short in which the film's main characters sneak into the band's dressing room. DVDs Apart from Rush's video
area through preservation of peace and security in accordance with the Charter of the United Nations. Article 4 Article 4 is generally considered the starting point for major NATO operations, and is therefore intended for emergencies or situations of urgency. It officially calls for consultation over military matters when "the territorial integrity, political independence or security of any of the parties is threatened." Upon its invocation, the issue is discussed in the NAC, and can formally lead to a joint decision or action (logistic, military, or otherwise) on behalf of the Alliance. It has been invoked seven times since the alliance's creation. There have also been instances where Article 4 was not formally invoked, but instead threatened. Indeed, this was viewed as one of the original intentions of Article 4: a means of elevating issues and providing member nations a form of deterrence. For example, in November 2021, the Polish foreign ministry briefly considered triggering Article 4 over the Belarusian migrant crisis, but no formal request was made. Article 5 The key section of the treaty is Article 5. Its commitment clause defines the casus foederis. It commits each member state to consider an armed attack against one member state, in Europe or North America, to be an armed attack against them all. It has been invoked only once in NATO history: by the United States after the September 11 attacks in 2001. The invocation was confirmed on 4 October 2001, when NATO determined that the attacks were indeed eligible under the terms of the North Atlantic Treaty. The eight official actions taken by NATO in response to the 9/11 attacks included Operation Eagle Assist and Operation Active Endeavour, a naval operation in the Mediterranean designed to prevent the movement of terrorists or weapons of mass destruction and to enhance the security of shipping in general. Active Endeavour began on 4 October 2001. 
In April 2012, Turkish Prime Minister Tayyip Erdoğan considered invoking Article 5 of the NATO treaty to protect Turkish national security in a dispute over the Syrian Civil War. The alliance responded quickly and a spokesperson said the alliance was "monitoring the situation very closely and will continue to do so" and "takes it very seriously protecting its members." On 17 April, Turkey said it would raise the issue quietly in the next NATO ministerial meeting. On 29 April, the Syrian foreign ministry wrote that it had received Erdoğan's message, which he had repeated a few days before, loud and clear. On 25 June, the Turkish Deputy Prime Minister said that he intended to raise Article 5 at a specially-convened NATO meeting because of the downing of an "unarmed" Turkish military jet which was "13 sea miles" from Syria over "international waters" on a "solo mission to test domestic radar systems". A Syrian Foreign Ministry spokesman insisted that the plane was "flying at an altitude of 100 meters inside the Syrian airspace in a clear breach of Syrian sovereignty" and that the "jet was shot down by anti-aircraft fire," the bullets of which "only have a range of 2.5 kilometers (1.5 miles)" rather than by radar-guided missile. On 5 August, Erdoğan stated, "The tomb of Suleyman Shah [in Syria] and the land surrounding it is our territory. We cannot ignore any unfavorable act against that monument, as it would be an attack on our territory, as well as an attack on NATO land... Everyone knows his duty, and will continue to do what is necessary." NATO Secretary-General Rasmussen later said in advance of the October 2012 ministerial meeting that the alliance was prepared to defend Turkey, and acknowledged that this border dispute concerned the alliance, but underlined the alliance's hesitancy over a possible intervention: "A military intervention can have unpredicted repercussions. Let me be very clear. 
We have no intention to interfere militarily [at present with Syria]." On 27 March 2014, recordings were released on YouTube of a conversation purportedly involving then Turkish foreign minister Ahmet Davutoğlu, Foreign Ministry Undersecretary Feridun Sinirlioğlu, then National Intelligence Organization (MİT) head Hakan Fidan, and Deputy Chief of General Staff General Yaşar Güler. The conversation was reportedly recorded at Davutoğlu's office at the Foreign Ministry on 13 March. Transcripts reveal that, as well as exploring options for Turkish forces to engage in false flag operations inside Syria, the meeting involved a discussion about using the threat to the tomb as an excuse for Turkey to intervene militarily inside Syria. Davutoğlu stated that Erdoğan told him he saw the threat to the tomb as an "opportunity". Prior to the meeting of Defence Ministers and recently appointed Secretary-General Jens Stoltenberg at Brussels in late June 2015, a journalist, citing an off-the-record interview with an official source, reported that "Entirely legal activities, such as running a pro-Moscow TV station, could become a broader assault on a country that would require a NATO response under Article Five of the Treaty... A final strategy is expected in October 2015." In another report, the journalist wrote that "as part of the hardened stance, the UK has committed £750,000 of its money to support a counter-propaganda unit at NATO's headquarters in Brussels." Article 6 Article 6 states that the treaty covers only member states' territories in Europe and North America, Turkey, and islands in the North Atlantic north of the Tropic of Cancer, plus French Algeria. It was the opinion in August 1965 of the US State Department, the US Defense Department and the legal division of NATO that an attack on the U.S. state of Hawaii would not trigger the treaty, but an attack on the other 49 states would. 
The Spanish cities of Ceuta and Melilla on the North African shore are thus not under NATO protection in spite of Moroccan claims to them. Legal experts have interpreted that other articles could cover the Spanish North African cities but this take has not been tested in practice. On 16 April 2003, NATO agreed to take command of the International
June, the Turkish Deputy Prime Minister said that he intended to raise Article 5 at a specially-convened NATO meeting because of the downing of an "unarmed" Turkish military jet which was "13 sea miles" from Syria over "international waters" on a "solo mission to test domestic radar systems". A Syrian Foreign Ministry spokesman insisted that the plane was "flying at an altitude of 100 meters inside the Syrian airspace in a clear breach of Syrian sovereignty" and that the "jet was shot down by anti-aircraft fire," the bullets of which "only have a range of 2.5 kilometers (1.5 miles)" rather than by radar-guided missile. On 5 August, Erdoğan stated, "The tomb of Suleyman Shah [in Syria] and the land surrounding it is our territory. We cannot ignore any unfavorable act against that monument, as it would be an attack on our territory, as well as an attack on NATO land... Everyone knows his duty, and will continue to do what is necessary." NATO Secretary-General Rasmussen later said in advance of the October 2012 ministerial meeting that the alliance was prepared to defend Turkey, and acknowledged that this border dispute concerned the alliance, but underlined the alliance's hesitancy over a possible intervention: "A military intervention can have unpredicted repercussions. Let me be very clear. We have no intention to interfere militarily [at present with Syria]." On 27 March 2014, recordings were released on YouTube of a conversation purportedly involving then Turkish foreign minister Ahmet Davutoğlu, Foreign Ministry Undersecretary Feridun Sinirlioğlu, then National Intelligence Organization (MİT) head Hakan Fidan, and Deputy Chief of General Staff General Yaşar Güler. The recording has been reported as being probably recorded at Davutoğlu's office at the Foreign Ministry on 13 March. 
Transcripts of the conversation reveal that, as well as exploring the options for Turkish forces engaging in false flag operations inside Syria, the meeting involved a discussion about using the threat to the tomb as an excuse for Turkey to intervene militarily inside Syria. Davutoğlu stated that Erdoğan told him that he saw the threat to the tomb as an "opportunity". Prior to the meeting of Defence Ministers and recently appointed Secretary-General Jens Stoltenberg at Brussels in late June 2015, a journalist, citing an off-the-record interview with an official source, reported that "Entirely legal activities, such as running a pro-Moscow TV station, could become a broader assault on a country that would require a NATO response under Article Five of the Treaty... A final strategy is expected in October 2015." In another report, the journalist stated that "as part of the hardened stance, the UK has committed £750,000 of its money to support a counter-propaganda unit at NATO's headquarters in Brussels." Article 6 Article 6 states that the treaty covers only member states' territories in Europe and North America, Turkey, and islands in the North Atlantic north of the Tropic of Cancer, plus (at the time of signing) French Algeria. It was the opinion in August 1965 of the US State Department, the US Defense Department and the legal division of NATO that an attack on the U.S. state of Hawaii would not trigger the treaty, but an attack on the other 49 states would. The Spanish cities of Ceuta and Melilla on the North African shore are thus not under NATO protection, in spite of Moroccan claims to them. Legal experts have argued that other articles could cover the Spanish North African cities, but this interpretation has not been tested in practice. On 16 April 2003, NATO agreed to take command of the International Security Assistance Force (ISAF) in Afghanistan, which includes troops from 42 countries. 
The decision came at the request of Germany and the Netherlands, the two states leading ISAF at the time of the agreement, and all nineteen NATO ambassadors approved it unanimously. The handover of control to NATO took place on 11 August, and marked the first time in NATO's history that it took charge of a mission outside the north Atlantic area. Changes since signing Three official footnotes have been released to reflect the changes made since the treaty was written: The definition of the territories to which Article 5 applies was revised by Article 2 of the Protocol to the North Atlantic Treaty on the accession of Greece and Turkey signed on 22 October 1951. Regarding Article 6: On 16 January 1963, the North Atlantic Council noted that insofar
very hygroscopic compounds. The solid form of dinitrogen pentoxide, N2O5, actually consists of nitronium and nitrate ions, so it is an ionic compound, [NO2]+[NO3]-, not a molecular solid. However, dinitrogen pentoxide in the liquid or gaseous state is molecular and does not contain nitronium ions. Related species The compounds nitryl fluoride, NO2F, and nitryl chloride, NO2Cl, are not nitronium salts but molecular compounds, as shown by their low boiling points (−72 °C and −6 °C respectively) and short nitrogen–halogen bond lengths (N–F 135 pm, N–Cl 184 pm). Addition of one electron forms the neutral nitryl radical, NO2; in fact, this is fairly
, or the protonation of nitric acid (with removal of H2O). It is stable enough to exist under normal conditions, but it is generally reactive and is used extensively as an electrophile in the nitration of other substances. The ion is generated in situ for this purpose by mixing concentrated sulfuric acid and concentrated nitric acid according to the equilibrium: HNO3 + 2 H2SO4 ⇌ NO2+ + H3O+ + 2 HSO4− Structure The nitronium ion is isoelectronic with carbon dioxide and nitrous oxide, and has the same linear structure and 180° bond angle. For this reason it has a vibrational spectrum similar to that of carbon dioxide. Historically, the nitronium ion was detected by Raman spectroscopy, because its symmetric stretch is Raman-active but infrared-inactive. The Raman-active symmetric stretch was first used to identify the ion in nitrating mixtures. Salts A few stable nitronium salts with anions of weak nucleophilicity can be isolated. These include nitronium perchlorate, (NO2)ClO4,
high demand and it came into the market as a luxury console. As of 2013 it was the most expensive home video game console ever released, costing US$649.99. The AES had the same raw specs as the MVS and had full compatibility, allowing home users to play the games exactly as they were in the arcades. The Neo Geo was revived along with the brand overall in December 2012 through the introduction of the Neo Geo X handheld and home system. The Neo Geo was a very powerful system when released, more powerful than any video game console of the time and than many arcade systems, such as rival Capcom's CPS, which did not surpass it until the CP System II in 1993. The Neo Geo MVS was a success during the 1990s due to the cabinet's low cost, multiple cartridge slots, and compact size. Several successful video game series were released for the platform, such as Fatal Fury, Art of Fighting, Samurai Shodown, World Heroes, The King of Fighters and Metal Slug. The AES had a very niche market in Japan, and sales in the U.S. were very low due to the high price of both the hardware and the software, but the console has since gained a cult following and is now considered a collectable. Neo Geo hardware production lasted seven years, being discontinued in 1997, whereas game software production lasted until 2004, making the Neo Geo the longest-supported arcade system of all time. The AES console was succeeded by the Neo Geo CD and the MVS arcade by the Hyper Neo Geo 64. As of March 1997, the Neo Geo and the Neo Geo CD combined had sold 980,000 units worldwide. In 2009, the Neo Geo was ranked 19th out of the 25 best video game consoles of all time by video game website IGN. History The Neo Geo hardware was an evolution of an older SNK/Alpha Denshi M68000 arcade platform that was used in Time Soldiers in 1987, and further developed into the SNK M68000 hardware platform used for P.O.W.: Prisoners of War in 1988. 
Contrary to other popular arcade hardware of the time, the SNK/Alpha Denshi hardware used sprite strips instead of the more common tilemap-based backgrounds. The Neo Geo hardware was essentially developed by Alpha Denshi's Eiji Fukatsu, who added sprite scaling through the use of scaling tables stored in ROM, as well as support for a much larger amount of data on cartridges and better sound hardware. The system's hardware specifications were finalized in December 1989. Takashi Nishiyama, who had created the fighting game Street Fighter (1987) at Capcom, left to join SNK after the company invited him. There, he was involved in developing the Neo Geo. He proposed the concept of an arcade system that uses ROM cartridges like a game console, and also proposed a home console version of the system. His reasons for these proposals were to make the system cheaper for markets such as China, Hong Kong, Taiwan, Southeast Asia, Central America, and South America, where it was difficult to sell dedicated arcade games due to piracy. Nishiyama also created the Fatal Fury fighting game franchise as a spiritual successor to the original Street Fighter, and worked on the fighting game franchises Art of Fighting and The King of Fighters, as well as the run-and-gun shooter series Metal Slug. The Neo Geo was announced and demonstrated on January 31, 1990, in Osaka, Japan. SNK exhibited several Neo Geo games at Japan's Amusement Machine Operators' Union (AOU) show in February 1990, including NAM-1975, Magician Lord, Baseball Stars Professional, Top Player's Golf and Riding Hero. The Neo Geo then made its overseas debut at Chicago's American Coin Machine Exposition (ACME) in March 1990, with several games demonstrated. The system was then released in Japan on April 26, 1990. Initially, the AES home system was only available for rent to commercial establishments, such as hotel chains, bars and restaurants. 
When customer response indicated that some gamers were willing to buy a console, SNK expanded sales and marketing into the home console market in 1991. The Neo Geo's graphics and sound were largely superior to those of other contemporary home consoles, computers (such as the Sharp X68000) and even some arcade systems. Unlike earlier systems, the Neo Geo AES was intended to reproduce the same quality of game as the arcade MVS system. The MVS was one of the most powerful arcade units at the time, allowing the game ROM to be loaded from interchangeable cartridges instead of using custom, dedicated hardware cabinets for each game. In the United States, the console's planned debut price included two joystick controllers and a game: either Baseball Stars Professional or NAM-1975. However, the price was raised, and the console's American launch debuted as the Gold System. Later, the Gold System was bundled with Magician Lord and Fatal Fury. The Silver System package included one joystick controller and no pack-in game. At double or quadruple the price of the competition, the console and its games were accessible only to a niche market. However, its full compatibility meant that no additional money was being spent on porting or marketing for the AES, since the MVS's success automatically fed the AES, making the console profitable for SNK. In January 1991, Romstar released an arcade conversion kit version of the Neo Geo in the United States, allowing the conversion of an arcade cabinet into a Neo Geo system. The same month, the Neo Geo home console version made its North American debut at the Consumer Electronics Show (CES). SNK also announced that there would generally be a roughly six-month gap between the arcade and home releases of Neo Geo games. When real-time 3D graphics became the norm in the arcade industry, the Neo Geo's 2D hardware could not follow suit. 
Despite this, Neo Geo arcade games retained profitability through the mid-1990s, and the system was one of three 1995 recipients of the American Amusement Machine Association's Diamond Awards (which are based strictly on sales achievements). SNK developed a new console in 1994, the Neo Geo CD, and a new arcade system in 1997, the Hyper Neo Geo 64. However, these two systems had low popularity and only a few games. Although SNK ceased manufacturing home consoles by the end of 1997, it continued making software for the original 2D Neo Geo. Despite the platform's age by the end of the decade, the Neo Geo continued to receive popular releases, such as the critically acclaimed The King of Fighters 2002. The last official SNK game for the Neo Geo system, Samurai Shodown V Special, was released in 2004, 14 years after the system's introduction. On August 31, 2007, SNK stopped offering maintenance and repairs for Neo Geo home consoles, handhelds, and games. The Neo Geo X, an officially licensed device with a collection of Neo Geo games pre-installed, was first released in 2012 by TOMMO Inc. After just one year, amid a lukewarm reception due to its price and the poor quality of its emulation, SNK Playmore terminated the license agreement on October 2, 2013, and demanded an immediate cease and desist of distribution and sales of all licensed products. Reception The Neo Geo MVS was a worldwide commercial success upon release in arcades, becoming one of the highest-earning machines at various arcades across markets such as North America and Australia in 1990. In North America, three Neo Geo games were later among the ten top-grossing arcade software conversion kits in December 1992: Art of Fighting at number one, World Heroes at number two, and King of the Monsters 2 at number ten. 
The Neo Geo MVS received Diamond awards from the American Amusement Machine Association (AAMA) two years in a row, for being among America's top four best-selling arcade machines of 1992 (with Street Fighter II: Champion Edition, Mortal
CD had met with limited success, as it was plagued by slow loading times that could vary from 30 to 60 seconds between loads, depending on the game. In response to criticism of the Neo Geo CD's long load times, SNK planned to produce a model with a double-speed CD-ROM drive for North America, compared to the single-speed drive of the Japanese and European models. However, the system missed its planned North American launch date of October 1995, and while SNK declined to give a specific reason for the delay, in their announcement of the new January 1996 launch date they stated that they had decided against using a double-speed drive. Their Japanese division had produced an excess number of single-speed units and found that modifying these units to double speed was more expensive than they had initially thought, so SNK opted to sell them as they were, postponing production of a double-speed model until they had sold off the stock of single-speed units. The CDZ was only officially sold in Japan during its production. However, its faster loading times, lack of a region lock, and the fact that it could play older CD software made it a popular import item for enthusiasts in both Europe and North America. The system's technical specs are identical to those of the previous models except that it includes a double-speed CD-ROM drive. In response to reader inquiries about Neo Geo CD software, GamePro reported in an issue cover-dated May 1997 that SNK had quietly discontinued the console by this time. Reception Criticism of the system's generally long loading times began even before launch; a report in Electronic Gaming Monthly on the Neo Geo CD's unveiling noted, "At the show, they were showing a demo of Fatal Fury 2. The prototype of the machine that they showed was single speed, and the load time was 14-28 seconds between rounds. You can see that the screen[shot] on the right is a load screen." 
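These load times are consistent with nominal CD-ROM throughput (150 KB/s at single speed, 300 KB/s at double speed). A minimal back-of-the-envelope sketch, assuming an illustrative 4.5 MB payload (not a published figure):

```python
# Rough sanity check of Neo Geo CD load times against nominal CD-ROM
# transfer rates: 150 KB/s for a single-speed (1x) drive, 300 KB/s for
# a double-speed (2x) drive. The payload size below is an assumption
# chosen for illustration, not a figure from the source.

SINGLE_SPEED_KBPS = 150  # kilobytes per second, 1x CD-ROM
DOUBLE_SPEED_KBPS = 300  # kilobytes per second, 2x CD-ROM

def load_seconds(payload_kb: int, rate_kbps: int) -> float:
    """Seconds needed to stream payload_kb at the given sustained rate."""
    return payload_kb / rate_kbps

# A hypothetical ~4.5 MB load (e.g. graphics data for one stage):
single = load_seconds(4500, SINGLE_SPEED_KBPS)  # 30.0 seconds
double = load_seconds(4500, DOUBLE_SPEED_KBPS)  # 15.0 seconds
print(single, double)
```

Halving the wait by doubling the drive speed is exactly the trade-off SNK weighed for the postponed North American model.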
Approximately one month after launch, SNK reported that they had sold the Neo Geo CD's entire initial shipment of 50,000 units. Reviewing the Neo Geo CD in late 1995, Next Generation noted SNK's reputation for fun games but argued that their failure to upgrade the Neo Geo system with 3D capabilities would keep them from producing any truly "cutting edge" games, and would limit the console to the same small cult following as the Neo Geo AES system, although with less expensive games. They gave it 1 1/2 out of 5 stars. Technical specifications
Main processor: Motorola 68000 running at 12 MHz (although the original 68000 was designed by Motorola, many clones of this CPU are found in Neo Geo hardware; the most common is the TMP68HC000 manufactured by Toshiba)
Coprocessor: Zilog Z80 running at 4 MHz
Colors on screen: 4,096
Colors available: 65,536
Resolution: 304 x 224
Max sprites: 384
Max sprite size: 16 x 512
Number of planes: 3 (128 sprites per plane, as the Neo Geo does not use bitmaps for its planes like most game systems of the time)
The system is also capable of reading Red Book standard compact disc audio. In addition to the multi-AV port (nearly
September 9, 1994, four years after its cartridge-based equivalent. This is the same platform, converted to the cheaper CD format retailing at $49 to $79 per title, compared to the $300 cartridges. The system was originally priced at US$399, or £399 in the UK. The system can also play Audio CDs. All three versions of the system have no region-lock. The Neo Geo CD was launched bundled with a control pad instead of a joystick like the AES version. However, the original AES joystick can be used with all three Neo Geo CD models. As of March 1997, there had been 570,000 Neo Geo CD units sold worldwide. History The Neo Geo CD was first unveiled at the 1994 Tokyo Toy Show. The console uses the same CPU set-up as the arcade and cartridge-based Neo Geo systems, facilitating conversions. SNK planned to release Neo Geo CD versions of every Neo Geo game still in the arcades. Three versions of the Neo Geo CD were released: A front-loading version, only distributed in Japan, with 25,000 total units built. A top-loading version, marketed worldwide, as the most common model. The Neo Geo CDZ, an upgraded, faster-loading version, released in Japan only. The front-loading version is the original console design, with the top-loading version having been developed shortly before the Neo Geo CD launch as a scaled-down, cheaper alternative model. The CDZ was released on December 29, 1995 as the Japanese market replacement for SNK's previous efforts (the "front loader" and the "top loader"). 
them. To comply with COPPA, users under 13 years of age cannot access any of the site's communication features without sending in parental consent via fax. The main features include:
NeoMail, a personal in-game communication system like regular email. Users can write messages to other users and restrict who can contact them through NeoMail.
Neoboards, public discussion boards for on-topic discussions. Users can enter their own "neoHTML", a restricted form of BBCode, to customise their posts and signatures.
Guilds, groups of users with similar interests and their own message board.
Discussions through these features are restricted and may not involve topics such as dating and romance or controversial topics like politics and religion. Continuous moderation is performed by paid Neopets staff members, and users can help moderate the site by reporting messages they believe are inappropriate or offensive. Messages are also automatically filtered to prevent users from posting messages with profanity or lewd content. History and development Creation and growth Neopets was conceived in 1997 by Adam Powell, a British student at the University of Nottingham at the time. He shared this idea with Donna Williams, and the two started work on the site in September 1999, with Powell responsible for the programming and the database and Williams for the web design and art. Their original office was located in Guildford. With the help of two friends, the site launched on 15 November 1999. Powell stated that the original goal was to "keep university students entertained, and possibly make some cash from banner advertising". The site contained popular culture references, such as a Neopet that was simply a picture of Bruce Forsyth. The user base grew by word of mouth, and by Christmas 1999 Neopets was logging 600,000 page views daily; its creators sought investors to cover the high cost of running the site. 
Later in the month, American businessman Doug Dohring was introduced to the creators of the site and, along with other investors, bought a majority share in January of the following year. Dohring founded Neopets, Inc. in February 2000 and began business on 28 April. Dohring used Scientology's Org Board to manage the company. Powell and Williams were unaware of the Scientology connections until they googled the employees of the newly formed company six months later, and did not address the issue until the company hired a woman to introduce Scientology to Neopets. Powell and Williams stopped the addition of any Scientology education to Neopets and ensured such content never made it into anything site-related. With the new company, intellectual property that did not belong to Neopets was removed, but the site kept the British spellings. The website made money from its first paying customers using an advertising method trademarked as "immersive advertising" and touted as "an evolutionary step forward in the traditional marketing practice of product placement" in television and film. In 2004, Neopets released a premium version and started showing advertisements on the basic site that were not shown to premium members. Viacom era Viacom, the American conglomerate that owns Nickelodeon, purchased Neopets, Inc. on 20 June 2005 for $160 million and announced plans to focus more on the use of banner ads over the site's existing immersive advertising. Adam Powell and Donna Williams left Neopets, Inc. shortly after the purchase due to creative differences. The website was redesigned on 27 April 2007 and included changes to the user interface and the ability to customise Neopets. In June, Viacom promoted Neopets through minishows on its Nickelodeon channel. Promotions included the second Altador Cup and led to an increase in traffic through the site. 
The first Altador Cup was released as an international online gaming event to coincide with the 2006 FIFA World Cup and to improve interactivity between users, and had 10.4 million participants in its first year. On 17 July, the NC Mall was launched in a partnership with Korean gaming company Nexon Corporation. It allowed users to use real money to purchase Neocash to buy exclusive virtual items. On 17 June 2008, Viacom formed the Nickelodeon Kids & Family Virtual Worlds Group to "encompass all paid and subscription gaming initiatives across all relevant platforms", including Neopets. In June 2011, Neopets announced that the website had logged 1 trillion page views since its creation. In July 2009, the Neopets site was the target of an identity theft hacking scheme that attempted to trick users into clicking a link that would allow them to gain items or Neopoints. Clicking the link installed malware onto the user's computer. According to reports, the hack was aimed not at child players' Neopets accounts, but at using the malware to steal the financial data and identities of their parents. Viacom stated that it was investigating the issue, and that the hack was a version of social engineering rather than an "indictment of Neopets security practices". In an on-site newsletter for players, Neopets denied the report and claimed that the site's security measures prevented the posting of such links. JumpStart era JumpStart Games acquired Neopets from Viacom in March 2014. Server migration began in September. JumpStart-owned Neopets was immediately characterized by glitches and site lag. On 6 March 2015, much of the Neopets Team remaining from Viacom was laid off. Then-JumpStart CEO David Lord assured the community that there were no plans to shut down Neopets; instead, resources were allocated to develop new content and to address lag and site stability, with plans to expand to other platforms including mobile and Facebook. 
The Neopets team started developing in-universe plots again in 2017, for the first time since the JumpStart acquisition. On 3 July 2017, Chinese company NetDragon acquired JumpStart Games. With support for Adobe Flash ending in 2020, the Neopets Team announced in 2019 that it planned to transition Flash elements of the site to HTML5 by the end of 2020. The team prioritized popular features, and some parts of the site remain non-functional. The Neopets Team also started working on developing a mobile-friendly version of the site as well as a mobile app. During the weekend of 27–28 June 2015, the site's chat filters stopped working, and the site's forums were flooded with age-inappropriate messages. In a statement on Facebook, JumpStart apologized, explaining that the issue was due to a "facility move," and that during that move the moderation team was not able to access the Neopets community. In 2016, Motherboard reported that the account information of an alleged 70 million Neopets accounts had been compromised. The hack contained usernames, passwords, email addresses, birth dates, gender, and country from 2012 (prior to JumpStart's acquisition), but did not contain credit card information or physical addresses. Neopets responded by posting about the leak on their official Facebook page and sent emails to all affected players telling them to change their passwords. Metaverse On September 22, 2021, the Neopets Metaverse Collection NFT was revealed in collaboration with JumpStart, Cherrypicks, and Moonvault. Users could purchase NMC tokens on the Neopets Metaverse Collection website from November 12 to November 15, 2021, which could then be exchanged for a randomly generated Neopet NFT on the Solana blockchain from November 15 to November 18, 2021. 4,233 NMC tokens were sold for a total of 8,708 SOL, which resulted in 4,225 pieces minted for the genesis collection. 
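The sale figures above imply an average mint price per token; a quick sketch (the per-token average is derived here, not a figure from the source):

```python
# Figures from the genesis sale described above:
# 4,233 NMC tokens sold for a total of 8,708 SOL.
tokens_sold = 4233
total_sol = 8708

# Average price paid per token (derived, not a published figure).
avg_price_sol = total_sol / tokens_sold
print(round(avg_price_sol, 2))  # roughly 2.06 SOL per token
```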
Shortly after the project was announced, a unique visual glitch (certain attributes were layered and colored incorrectly) revealed that at least one of the images used for promotion on the Neopets Metaverse Collection website had been generated using the Dress to Impress fan site. The Neopets Metaverse team replaced the image in question shortly after it was noticed, and has not responded to allegations that it used Dress to Impress to generate the entire NFT collection. The Neopets Metaverse project has received significant criticism from within the Neopets community, citing both general concerns about NFTs and details specific to the Neopets Metaverse. Reception Described as an online cross of Pokémon and Tamagotchi, Neopets has received both praise and criticism. It has been praised for having educational content. Children can learn HTML to edit their own pages. They can also learn how to handle money by participating in the economy. Reviews from About.com and MMO Hut considered the multitude of possible activities a positive aspect. A majority of the users are female, a higher share than in other massively multiplayer online games (MMOGs) but equivalent to social-networking-driven communities. Lucy Bradshaw, a vice president of Electronic Arts, attributes the popularity among girls to the openness of the site and said, "Games that have a tendency to satisfy on more than one dimension have a tendency to have a broader appeal and attract girls". Luck & chance games draw criticism from parents as they introduce children
2016, Motherboard reported that the account information of an alleged 70 million of Neopets accounts had been compromised. The hack contained usernames, passwords, email addresses, birth dates, gender, and country from 2012 (prior to JumpStart's acquisition), but did not contain credit card information or physical addresses. Neopets responded by posting about the leak on their official Facebook page and sent emails out to all affected players telling them to change their passwords. Metaverse On September 22, 2021, the Neopets Metaverse Collection NFT was revealed in collaboration with JumpStart, Cherrypicks, and Moonvault. Users could purchase NMC tokens on the Neopets Metaverse Collection website from November 12, 2021 - November 15, 2021 that could then be exchanged for a randomly generated Neopet NFT on the Solana blockchain from November 15, 2021 - November 18, 2021. 4,233 NMC tokens were sold for a total of 8,708 SOLs which resulted in 4225 pieces minted for the genesis collection. Shortly after the project was announced a unique visual glitch (certain attributes were layered and colored incorrectly) revealed that at least one of images used for promotion on the Neopets Metaverse Collection website was generated using the Dress to Impress fan site, the Neopets Metaverse team replaced the image in question shortly after it was noticed and haven't responded to allegations they used Dress to Impress to generate the entire NFT collection. The Neopets Metaverse project has received a significant amount of criticism from within the Neopets community, citing general concerns about NFTs, as well as details specific to the Neopets Metaverse. Reception Described as an online cross of Pokémon and Tamagotchi, Neopets has received both praise and criticism. It has been praised for having educational content. Children can learn HTML to edit their own pages. They can also learn how to handle money by participating in the economy. 
Reviews from About.com and MMO Hut considered the multitude of possible activities a positive aspect. Most of the users are female, a higher proportion than in other massively multiplayer online games (MMOGs) but comparable to social-networking-driven communities. Lucy Bradshaw, a vice president of Electronic Arts, attributes the popularity among girls to the openness of the site and said, "Games that have a tendency to satisfy on more than one dimension have a tendency to have a broader appeal and attract girls". Luck and chance games draw criticism from parents because they introduce children to gambling. In Australia, a cross-promotion with McDonald's led to controversy over Neopets' luck and chance games in October 2004. Australian tabloid television show Today Tonight featured a nine-year-old boy who claimed the site required users to gamble in order to earn enough Neopoints to feed their Neopets, or else the pets would be sent to the pound. While gambling is not required, nor are pets sent to the pound if unfed, the website includes games of chance based on real games such as blackjack and lottery scratchcards. After this incident, Neopets prohibited users under the age of 13 from playing its casino-style games. Rise in popularity and decline In the 2000s, Neopets was consistently noted as one of the "stickiest" sites for children's entertainment. Stickiness is a measure of the average amount of time spent on a website. A press release from Neopets in 2001 stated that Neopets.com led in site "stickiness" in May and June, with the average user spending 117 minutes a week. Neopets also led in the average number of hours spent per user per month in December 2003, with an average of 4 hours and 47 minutes. A 2004 article stated that Nielsen//NetRatings reported that people were spending around three hours a month on Neopets, more than on any other site in its Nielsen category.
By May 2005, a Neopets-affiliated video game producer cited about 35 million unique users, 11 million unique IP addresses per month, and 4 billion web page views per month. This producer also described 20% of the users as 18 or older, with the median age of the remaining 80% at about 14. Neopets was consistently ranked among the top ten "stickiest" sites by both Nielsen//NetRatings and comScore Media Metrix in 2005 and 2006. According to Nielsen//NetRatings, in 2007 Neopets lost about 15% of its audience over the previous year. In February 2008, comScore ranked it as the stickiest kids' entertainment site, with the average user spending 2 hours and 45 minutes per month. In January 2017, then-JumpStart CEO David Lord estimated Neopets had 100,000 active daily users. In January 2020, Neopets logged only 3.4 million views per month, a significant decline from its peak. In June 2020, JumpStart CEO Jim Czulewicz estimated Neopets had 100,000 daily active users and 1.5 million monthly active players. Immersive advertising Immersive advertising is a trademarked term for the way Neopets displayed advertisements to generate profit after Doug Dohring bought the site. Unlike pop-up and banner ads, immersive ads integrate advertisements into the site's content in interactive forms, including games and items. Players could earn Neopoints by playing advergames and taking part in online marketing surveys. Prior to the arrival of the NC Mall, immersive advertising contributed 60% of the site's revenue, with paying Fortune 1000 clients including Disney, General Mills, and McDonald's. The practice was contentious with regard to the ethics of marketing to children. It drew criticism from parents, psychologists, and consumer advocates who argued that children may not know that they are being advertised to, as it blurred the line between site content and advertisement.
Children under eight had difficulty recognizing ads and half a million of the 25 million users were under the age of eight in 2005. Dohring responded to such criticism stating that of the 40 percent of
American man, was lynched in a spectacle murder in front of a European American mob of 10,000 in Nashville. His lynching was described by journalist Ida B. Wells as "a naked, bloody example of the blood-thirstiness of the nineteenth century civilization of the Athens of the South." His brother, Henry Grizzard, had been lynched by hanging on April 24, 1892, in nearby Goodlettsville as a suspect in the same assault incident. From 1877 to 1950, a total of six lynchings of Blacks were carried out in Davidson County, four before the turn of the century. Earlier 20th century By the turn of the century, Nashville had become the cradle of the Lost Cause of the Confederacy. The first chapter of the United Daughters of the Confederacy was founded here, and the Confederate Veteran magazine was published here. Most "guardians of the Lost Cause" lived downtown or in the West End, near Centennial Park. At the same time, Jefferson Street became the historic center of the African American community, with similar districts developing in the Black neighborhoods of East and North Nashville. In 1912, the Tennessee Agricultural and Industrial and Normal School was moved to Jefferson Street. The first Prince's Hot Chicken Shack originated at the corner of Jefferson Street and 28th Avenue in 1945. Jefferson Street became a destination for jazz and blues musicians, and remained so until the federal government split the area with the construction of Interstate 40 in the late 1960s. In 1950, the state legislature approved a new city charter that provided for the election of city council members from single-member districts, rather than at-large voting. This change was supported because at-large voting required candidates to gain a majority of votes from across the city.
The previous system prevented the minority population, which then tended to support Republican candidates, from being represented by candidates of their choice; apportionment under single-member districts meant that some districts in Nashville had Black majorities. In 1951, after passage of the new charter, African American attorneys Z. Alexander Looby and Robert E. Lillard were elected to the city council. With the United States Supreme Court ruling in 1954 that public schools had to desegregate with "all deliberate speed", the family of student Robert Kelley filed a lawsuit in 1956, arguing that Nashville administrators should open all-White East High School to him. A similar case was filed by Reverend Henry Maxwell due to his children having to take a 45-minute bus ride from South Nashville to the north end of the city. These suits caused the courts to announce what became known as the "Nashville Plan", where the city's public schools would desegregate one grade per year beginning in the fall of 1957. Urban redevelopment accelerated over the next several decades, and the city grew increasingly segregated. An interstate was placed on the edge of East Nashville while another highway was built through Edgehill, a lower-income, predominantly minority community. Postwar development to present Rapid suburbanization occurred during the years immediately after World War II, as new housing was being built outside city limits. This resulted in a demand for many new schools and other support facilities, which the county found difficult to provide. At the same time, suburbanization led to a declining tax base in the city, although many suburban residents used unique city amenities and services that were supported financially only by city taxpayers. After years of discussion, a referendum was held in 1958 on the issue of consolidating city and county government. 
It failed to gain approval although it was supported by the then-elected leaders of both jurisdictions, County Judge Beverly Briley and Mayor Ben West. Following the referendum's failure, Nashville annexed some 42 square miles of suburban jurisdictions to expand its tax base. This increased uncertainty among residents, and created resentment among many suburban communities. Under the second charter for metropolitan government, which was approved in 1962, two levels of service provision were proposed: the General Services District and the Urban Services District, to provide for a differential in tax levels. Residents of the Urban Services District had a full range of city services. The areas that made up the General Services District, however, had a lower tax rate until full services were provided. This helped reconcile aspects of services and taxation among the differing jurisdictions within the large metro region. In the early 1960s, Tennessee still had racial segregation of public facilities, including lunch counters and department store fitting rooms. Hotels and restaurants were also segregated. Between February 13 and May 10, 1960, a series of sit-ins were organized at lunch counters in downtown Nashville by the Nashville Student Movement and Nashville Christian Leadership Council, and were part of a broader sit-in movement in the southeastern United States as part of an effort to end racial segregation of public facilities. On April 19, 1960, the house of Z. Alexander Looby, an African American attorney and council member, was bombed by segregationists. Protesters marched to the city hall the next day. Mayor Ben West said he supported the desegregation of lunch counters, which civil rights activists had called for. In 1963, Nashville consolidated its government with Davidson County, forming a metropolitan government. The membership on the Metro Council, the legislative body, was increased from 21 to 40 seats. 
Of these, five members are elected at-large and 35 are elected from single-member districts, each to serve a term of four years. In 1957, Nashville desegregated its school system using an innovative grade-a-year plan, in response to the class action suit Kelley v. Board of Education of Nashville. By 1966, the Metro Council abandoned the grade-a-year plan and desegregated the entire school system at once. Congress passed civil rights legislation in 1964 and 1965, but tensions continued as society was slow to change. On April 8, 1967, a riot broke out on the college campuses of Fisk University and Tennessee State University, historically Black colleges, after Stokely Carmichael spoke about Black Power at Vanderbilt University. Although it was viewed as a "race riot", it had classist characteristics. In 1979, the Ku Klux Klan burnt crosses outside two African American sites in Nashville, including the city headquarters of the NAACP. Since the 1970s, the city and county have undergone tremendous growth, particularly during the economic boom of the 1990s under the leadership of then-Mayor and later Tennessee Governor Phil Bredesen. Making urban renewal a priority, Bredesen fostered the construction or renovation of several city landmarks, including the Country Music Hall of Fame and Museum, the downtown Nashville Public Library, the Bridgestone Arena, and Nissan Stadium. Nissan Stadium (formerly Adelphia Coliseum and LP Field) was built after the National Football League's (NFL) Houston Oilers agreed to move to the city in 1995. The NFL team debuted in Nashville in 1998 at Vanderbilt Stadium, and Nissan Stadium opened in the summer of 1999. The Oilers changed their name to the Tennessee Titans and finished the season with the Music City Miracle and a close Super Bowl loss, the St. Louis Rams winning on the final play of the game. In 1997, Nashville was awarded a National Hockey League expansion team; this was named the Nashville Predators.
Since the 2003–04 season, the Predators have made the playoffs in all but three seasons. In 2017, they made the Stanley Cup Finals for the first time in franchise history, but ultimately fell to the Pittsburgh Penguins, 4 games to 2, in the best-of-seven series. 21st century On January 22, 2009, residents rejected Nashville Charter Amendment 1, which sought to make English the official language of the city. Between May 1 and 7, 2010, much of Nashville was extensively flooded as part of a series of 1,000-year floods throughout Middle and West Tennessee. Much of the flooding took place in areas along the Cumberland and Harpeth Rivers and Mill Creek, and caused extensive damage to many buildings and structures in the city, including the Grand Ole Opry House, Gaylord Opryland Resort & Convention Center, Opry Mills Mall, Schermerhorn Symphony Center, Bridgestone Arena, and Nissan Stadium. Sections of Interstate 24 and Briley Parkway were also flooded. Eleven people died in the Nashville area as a result of the flooding, and damages were estimated at over $2 billion. The city recovered after the Great Recession. In March 2012, a Gallup poll ranked Nashville in the top five regions for job growth. In 2013, Nashville was described as "Nowville" and "It City" by GQ, Forbes, and The New York Times. Nashville elected its first female mayor, Megan Barry, on September 25, 2015. As a council member, Barry had officiated at the city's first same-sex wedding on June 26, 2015. In 2017, Nashville's economy was deemed the third fastest-growing in the nation, and the city was named the "hottest housing market in the US" by Freddie Mac realtors. In May 2017, census estimates showed Nashville had passed Memphis to become the most populous city in Tennessee. Nashville has also made national headlines for its "homelessness crisis". Rising housing prices and the opioid crisis have resulted in more people living on the streets; estimates of the number of homeless Nashvillians range from 2,300 to 20,000.
On March 6, 2018, Mayor Barry resigned before the end of her term due to felony charges relating to the misuse of public funds. A special election was called. Following a ruling by the Tennessee Supreme Court, the Davidson County Election Commission set the special election for May 24, 2018, to meet the requirement of 75 to 80 days from the date of resignation. David Briley, who was Vice Mayor during the Barry administration and Acting Mayor after her resignation, won the special election with just over 54% of the vote, becoming the 70th mayor of Nashville. On May 1, 2018, voters rejected Let's Move Nashville, a referendum which would have funded construction of an $8.9 billion mass transit system under the Nashville Metropolitan Transit Authority, by a 2-to-1 margin. On September 28, 2019, John Cooper became the 9th mayor of the Metropolitan Government of Nashville and Davidson County. On March 3, 2020, a tornado tracked west to east just north of the downtown Nashville area, killing at least 25 people and leaving tens of thousands without electricity. Neighborhoods impacted included North Nashville, Germantown, East Nashville, Donelson, and Hermitage. On December 25, 2020, a vehicle exploded on Second Avenue, killing the perpetrator and injuring eight others. Geography Topography Nashville lies on the Cumberland River in the northwestern portion of the Nashville Basin. Nashville's elevation ranges from its lowest point, above sea level at the Cumberland River, to its highest point, above sea level in the Radnor Lake State Natural Area. Nashville also sits at the start of the Highland Rim, a geophysical region of very hilly land; as a result, the city itself is hilly. Nashville also has some stand-alone hills, such as the hill on which the Tennessee State Capitol building sits. According to the United States Census Bureau, the city has a total area of , of which is land and (4.53%) is water.
Cityscape Nashville's downtown area features a diverse assortment of entertainment, dining, cultural and architectural attractions. The Broadway and 2nd Avenue areas feature entertainment venues, night clubs and an assortment of restaurants. North of Broadway lie Nashville's central business district, Legislative Plaza, Capitol Hill and the Tennessee Bicentennial Mall. Cultural and architectural attractions can be found throughout the city. Three major interstate highways (I-40, I-65 and I-24) converge near the core area of downtown, and many regional cities are within a day's driving distance. Nashville's first skyscraper, the Life & Casualty Tower, was completed in 1957 and launched the construction of other high rises in downtown Nashville. After the construction of the AT&T Building (commonly referred to by locals as the "Batman Building") in 1994, the downtown area saw little construction until the mid-2000s. The Pinnacle, a high rise office building, opened in 2010, the first Nashville skyscraper completed in more than 15 years. Ten more skyscrapers have since been constructed or are under construction. Since 2000, Nashville has seen two urban construction booms (one prior to the Great Recession and the other after) that have yielded multiple high-rises (defined by Emporis as buildings of a minimum of 115 feet tall). Of the city's 37 towers of 280 feet tall or taller, 24 have been completed since 2000. Many civic and infrastructure projects are being planned, in progress, or recently completed. A new MTA bus hub was recently completed in downtown Nashville, as was the Music City Star pilot project. Several public parks have been constructed, such as the Public Square. Riverfront Park is scheduled to be extensively updated. The Music City Center opened in May 2013. It is a 1,200,000 square foot (110,000 m2) convention center with 370,000 square feet (34,000 m2) of exhibit space. 
Neighborhoods Flora The nearby city of Lebanon is notable for, and even named after, its so-called "cedar glades", which occur on soils too poor to support most trees and are instead dominated by Virginian juniper. Blackberry bushes, Virginia pine, loblolly pine, sassafras, red maple, river birch, American beech, river cane, mountain laurel and sycamore are all common native trees, along with many others. In addition to the native forests, the combination of hot summers, abundant rainfall and mild winters permits a wide variety of both temperate and subtropical plants to be cultivated easily. Southern magnolia and cherry blossom trees are commonly cultivated here, and the city holds an annual cherry blossom festival. Crepe myrtles and yew bushes are also commonly grown throughout Metro Nashville, and the winters are mild enough that sweetbay magnolia is evergreen wherever it is cultivated. The pansy is popular to plant during the autumn, and some varieties will flower through the winter in Nashville's subtropical climate. Many hot-weather plants like petunia and even papyrus thrive as annuals, and Japanese banana will die aboveground during winter but re-sprout after the danger of frost is over. Though unfamiliar to most Tennesseans, cold-hardy palms, particularly needle palm and dwarf palmetto, are uncommonly but often successfully grown. High-desert plants like Colorado spruce and prickly pear cactus are also grown somewhat commonly, as is Yucca filamentosa. Climate Nashville International Airport in Donelson has a humid subtropical climate (Köppen Cfa, Trewartha Cf), with hot, humid summers and generally cool winters typical of the Upper South. Snowfall occurs during the winter months, but it is usually not heavy. Average annual snowfall is about , falling mostly in January and February and occasionally in March, November and December.
The largest snow event since 2003 was on January 22, 2016, when Nashville received of snow in a single storm; the largest overall was , received on March 17, 1892, during the St. Patrick's Day Snowstorm. Rainfall is typically greater in solar spring (Feb–Apr) and summer (May–Jul), while the solar autumn months (Aug–Oct) are the driest on average. Spring and fall are prone to severe thunderstorms, which may bring tornadoes, large hail, flash floods and damaging wind, with recent major events on April 16, 1998; April 7, 2006; February 5, 2008; April 10, 2009; May 1–2, 2010; and March 3, 2020. Relative humidity in Nashville averages 83% in the mornings and 60% in the afternoons, which is considered moderate for the Southeastern United States. In recent decades, due to urban development, Nashville has developed an urban heat island; especially on cool, clear nights, temperatures are up to warmer in the heart of the city than in rural outlying areas. The Nashville region lies within USDA Plant Hardiness Zone 7a. Nashville's long springs and autumns, combined with a diverse array of trees and grasses, can often make the city uncomfortable for allergy sufferers. In 2008, Nashville was ranked as the 18th-worst spring allergy city in the U.S. by the Asthma and Allergy Foundation of America. The coldest temperature ever officially recorded in Nashville was on January 21, 1985, and the hottest was on June 29, 2012. Nashville allegedly had a low of on January 26, 1832, but this was decades before record-keeping began and is not counted as the official record low. Donelson The mean annual temperature at Nashville International Airport is . Monthly averages range from in January to in July, with a diurnal temperature variation of . Diurnal temperature variation is highest in April and lowest in December, but it is also relatively high in October and relatively low in January.
Donelson's climate classifications are Köppen Cfa and Trewartha Cfak, thanks to its very hot summers (average over ), mild winters (average over ) and long (8+ months) growing seasons (average over ). Precipitation is abundant year-round without any major difference, but there is still slight variation. The wet season runs from February through July, reaching its zenith in May with 128 mm of rain. The dry season runs from August through January, with an October nadir of 85 mm and a secondary December peak of 113 mm. Old Hickory The mean annual temperature at Old Hickory Dam is . Monthly averages range from in January to in August, with a diurnal temperature variation of . Diurnal temperature variation is highest in April and lowest in January. Old Hickory's climate classifications are Köppen Cfa and Trewartha Doak, thanks to its very hot summers (average over ), mild winters (average over ) and mediocre (4–7 months) growing seasons (average over ). Precipitation is abundant year-round without any major difference, but there is still slight variation. The wet season runs from February through July, reaching its zenith in April with 120 mm of rain. The dry season runs from August through January, with an October/November nadir of 85 mm and a secondary December peak of 113 mm. Data for record temperatures is spotty before June 2007, but temperatures in Old Hickory have been known to range from in January 1966 to in June and July 2012. Demographics As of the 2020 United States census, there were 689,447 people, 279,545 households, and 146,241 families residing in the city. The population increase of 88,225, or 14.67%, over the 2010 figure of 601,222 residents represented the largest net population increase in the city's history. The population density was . In 2010, there were 254,651 households and 141,469 families (55.6% of households).
Of family households, 37.2% were married couples living together, 14.1% had a female householder with no husband present, and 4.2% had a male householder with no wife present. 27.9% of all households had children under the age of 18, and 18.8% had at least one member 65 years of age or older. Of the 44.4% of households that were non-families, 36.2% were individuals, and 8.2% had someone living alone who was 65 years of age or older. The average household size was 2.38 and the average family size was 3.16. The age distribution was 22.2% under 18, 10.3% from 18 to 24, 32.8% from 25 to 44, 23.9% from 45 to 64, and 10.7% who were 65 or older. The median age was 34.2 years. For every 100 females, there were 94.1 males. For every 100 females age 18 and over, there were 91.7 males. The median income for a household in the city was $46,141, and the median income for a family was $56,377. Males with a year-round, full-time job had a median income of $41,017, versus $36,292 for females. The per capita income for the city was $27,372. About 13.9% of families and 18.2% of the population were below the poverty line, including 29.5% of those under age 18 and 9.9% of those age 65 or over. Of residents 25 or older, 33.4% have a bachelor's degree or higher. Because of its relatively low cost of living and large job market, Nashville has become a popular city for immigrants. Nashville's foreign-born population more than tripled in size between 1990 and 2000, increasing from 12,662 to 39,596. The city's largest immigrant groups include Mexicans, Kurds, Vietnamese, Laotians, Arabs, and Somalis. There are also smaller communities of Pashtuns from Afghanistan and Pakistan, concentrated primarily in Antioch. Nashville has the largest Kurdish community in the United States, numbering approximately 15,000. In 2009, about 60,000 Bhutanese refugees were being admitted to the U.S., and some were expected to resettle in Nashville.
During the Iraqi election of 2005, Nashville was one of the few international locations where Iraqi expatriates could vote. The American Jewish community in Nashville dates back over 150 years and numbered about 8,000 in 2015, plus 2,000 Jewish college students. Metropolitan area Nashville has the largest metropolitan area in the state of Tennessee, with a population of 1,989,519. The Nashville metropolitan area encompasses 13 of the 41 Middle Tennessee counties: Cannon, Cheatham, Davidson, Dickson, Macon, Maury, Robertson, Rutherford, Smith, Sumner, Trousdale, Williamson, and Wilson. The 2020 population of the Nashville-Davidson–Murfreesboro–Columbia combined statistical area was 2,118,233. Religion 59.6% of people in Nashville claim religious affiliation, according to information compiled by Sperling's BestPlaces. The dominant religion in Nashville is Christianity, comprising 57.7% of the population. The Christian population is broken down into 20.6% Baptists, 6.2% Catholics, 5.6% Methodists, 3.4% Pentecostals, 3.4% Presbyterians, 0.8% Mormons, and 0.5% Lutherans. 15.7% identify with other forms of Christianity, including the Orthodox Church and Disciples of Christ. Islam is the second largest religion, comprising 0.8% of the population. 0.6% of the population adhere to eastern religions such as Buddhism, Sikhism, Jainism and Hinduism, and 0.3% follow Judaism. Economy In the 21st century's second decade, Nashville was described as a "southern boomtown" by numerous publications. In 2017, it had the third-fastest-growing metropolitan economy in the United States and "adds an average of 100 people a day to its net population increase". The Nashville region was also said to be the "Number One" metro area for professional and business service jobs in America; Zillow said it had the "hottest housing market in America". In 2013, the city ranked No. 5 on Forbes' list of the Best Places for Business and Careers.
In 2015, Forbes ranked Nashville the 4th-best city for white-collar jobs. In 2015, Business Facilities' 11th Annual Rankings report named Nashville the number one city for economic growth potential. Fortune 500 companies with offices within Nashville include BNY Mellon, Bridgestone Americas, Ernst & Young, Community Health Systems, Dell, Deloitte, Dollar General, Hospital Corporation of America, Nissan North America, Philips, Tractor Supply Company, and UBS. Of these, Community Health Systems, Dollar General, SmileDirectClub, Hospital Corporation of America, and Tractor Supply Company are headquartered in the city. Many popular food companies are based in Nashville, including Captain D's, Hunt Brothers Pizza, O'Charley's, Logan's Roadhouse, J. Alexander's, and Stoney River Legendary Steaks. As the "home of country music", Nashville has become a major music recording and production center. The Big Three record labels, as well as numerous independent labels, have offices in Nashville, mostly in the Music Row area. Nashville has been the headquarters of guitar company Gibson since 1984. Since the 1960s, Nashville has been the second-largest music production center (after New York City) in the United States. Nashville's music industry is estimated to have a total economic impact of about $10 billion per year and to contribute about 56,000 jobs to the Nashville area. The area's largest industry is health care. Nashville is home to more than 300 health care companies, including Hospital Corporation of America (HCA), the world's largest private operator of hospitals. The health care industry is estimated to contribute per year and 200,000 jobs to the Nashville-area economy. CoreCivic, formerly known as Corrections Corporation of America and one of the largest private corrections companies in the United States, was founded in Nashville in 1983 but moved out of the city in 2019. Vanderbilt University was one of its investors before the company's initial public offering.
The City of Nashville's pension fund included "a $921,000 stake" in the company in 2017. The Nashville Scene notes that "a drop in CoreCivic stock value, however minor, would have a direct impact on the pension fund that represents nearly 25,000 current and former Metro employees." The automotive industry is also becoming important for the Middle Tennessee region. Nissan North America moved its corporate headquarters in 2006 from Gardena, California (Los Angeles County) to Franklin, a suburb south of Nashville. Nissan's largest North American manufacturing plant is in Smyrna, another suburb of Nashville. Largely as a result of the increased development of Nissan and other Japanese economic interests in the region, Japan moved its former New Orleans consulate-general to Nashville's Palmer Plaza. General Motors operates an assembly plant in Spring Hill, about south of Nashville. Automotive parts manufacturer Bridgestone has its North American headquarters in Nashville and manufacturing plants and a distribution center in nearby counties. Other major industries in Nashville include insurance, finance, and publishing (especially religious publishing). The city hosts headquarters operations for several Protestant denominations, including the United Methodist Church, Southern Baptist Convention, National Baptist Convention USA, and the National Association of Free Will Baptists. Nashville is known for Southern confections, including Goo Goo Clusters, which have been made in Nashville since 1912. In May 2018, AllianceBernstein pledged to build a private client office in the city by mid-2019 and to move its headquarters from New York City to Nashville by 2024. The technology sector is an important and growing aspect of Nashville's economy. In November 2018, Amazon announced its plans to build an operations center in the Nashville Yards development to serve as the hub for its Retail Operations division.
In April 2021, Oracle Corporation announced that it would construct a $1.2 billion campus in Nashville, which is expected to employ 8,500 by 2031. In December 2019, iHeartMedia selected Nashville as the site of its second digital headquarters. Real estate is becoming a driver for the city's economy. Based on a survey of nearly 1,500 real estate industry professionals conducted by PricewaterhouseCoopers and the Urban Land Institute, Nashville ranked 7th nationally in terms of attractiveness to real estate investors for 2016. According to city figures, there is more than $2 billion in real estate projects underway or projected to start in 2016. Due to high yields available to investors, Nashville has been attracting substantial out-of-state capital. A key factor attributed to the increase in investment is the adjustment to the city's zoning code: developers can easily include a combination of residential, office, retail and entertainment space in their projects. Additionally, the city has invested heavily in public parks; Centennial Park is undergoing extensive renovations. The change in the zoning code and the investment in public space are consistent with the millennial generation's preference for walkable urban neighborhoods. Top employers According to the Nashville Business Journal,
Nashville, Donelson, and Hermitage. On December 25, 2020, a vehicle exploded on Second Avenue, killing the perpetrator and injuring eight others. Geography Topography Nashville lies on the Cumberland River in the northwestern portion of the Nashville Basin. Nashville's elevation ranges from its lowest point, above sea level at the Cumberland River, to its highest point, above sea level in the Radnor Lake State Natural Area. Nashville sits at the start of the Highland Rim, a geophysical region of very hilly land, and the city itself is correspondingly hilly, with several stand-alone hills such as the one on which the Tennessee State Capitol building sits. According to the United States Census Bureau, the city has a total area of , of which of it is land and of it (4.53%) is water. Cityscape Nashville's downtown area features a diverse assortment of entertainment, dining, cultural and architectural attractions. The Broadway and 2nd Avenue areas feature entertainment venues, night clubs and an assortment of restaurants. North of Broadway lie Nashville's central business district, Legislative Plaza, Capitol Hill and the Tennessee Bicentennial Mall. Cultural and architectural attractions can be found throughout the city. Three major interstate highways (I-40, I-65 and I-24) converge near the core area of downtown, and many regional cities are within a day's driving distance. Nashville's first skyscraper, the Life & Casualty Tower, was completed in 1957 and launched the construction of other high rises in downtown Nashville. After the construction of the AT&T Building (commonly referred to by locals as the "Batman Building") in 1994, the downtown area saw little construction until the mid-2000s. The Pinnacle, a high rise office building, opened in 2010, the first Nashville skyscraper completed in more than 15 years. Ten more skyscrapers have since been constructed or are under construction. 
Since 2000, Nashville has seen two urban construction booms (one prior to the Great Recession and the other after) that have yielded multiple high-rises (defined by Emporis as buildings at least 115 feet tall). Of the city's 37 towers 280 feet or taller, 24 have been completed since 2000. Many civic and infrastructure projects are planned, in progress, or recently completed. A new MTA bus hub was recently completed in downtown Nashville, as was the Music City Star pilot project. Several public parks have been constructed, such as the Public Square. Riverfront Park is scheduled to be extensively updated. The Music City Center opened in May 2013. It is a 1,200,000 square foot (110,000 m2) convention center with 370,000 square feet (34,000 m2) of exhibit space. Neighborhoods Flora The nearby city of Lebanon is notable for, and even named for, its so-called "cedar glades", which occur on soils too poor to support most trees and are instead dominated by Virginian juniper. Blackberry bushes, Virginia pine, loblolly pine, sassafras, red maple, river birch, American beech, river cane, mountain laurel and sycamore are all common native plants, along with many others. In addition to the native forests, the combination of hot summers, abundant rainfall and mild winters permits a wide variety of both temperate and subtropical plants to be cultivated easily. Southern magnolia and cherry blossom trees are commonly cultivated here, and the city holds an annual cherry blossom festival. Crepe myrtles and yew bushes are also commonly grown throughout Metro Nashville, and the winters are mild enough that sweetbay magnolia is evergreen wherever it is cultivated. The pansy is popular to plant during the autumn, and some varieties will flower through the winter in Nashville's subtropical climate. 
However, many hot-weather plants like petunia and even papyrus thrive as annuals, and Japanese banana will die back aboveground during winter but re-sprout after the danger of frost is over. Little known to most Tennesseans, even cold-hardy palms, particularly needle palm and dwarf palmetto, are uncommonly but often successfully grown. High desert plants like Colorado spruce and prickly pear cactus are also grown somewhat commonly, as is Yucca filamentosa. Climate Nashville International Airport in Donelson has a humid subtropical climate (Köppen Cfa, Trewartha Cf), with hot, humid summers and generally cool winters typical of the Upper South. Snowfall occurs during the winter months, but it is usually not heavy. Average annual snowfall is about , falling mostly in January and February and occasionally in March, November and December. The largest snow event since 2003 was on January 22, 2016, when Nashville received of snow in a single storm; the largest overall was , received on March 17, 1892, during the St. Patrick's Day Snowstorm. Rainfall is typically greater in solar spring (Feb–Apr) and summer (May–Jul), while the solar autumn months (Aug–Oct) are the driest on average. Spring and fall are prone to severe thunderstorms, which may bring tornadoes, large hail, flash floods and damaging wind, with recent major events on April 16, 1998; April 7, 2006; February 5, 2008; April 10, 2009; May 1–2, 2010; and March 3, 2020. Relative humidity in Nashville averages 83% in the mornings and 60% in the afternoons, which is considered moderate for the Southeastern United States. In recent decades, due to urban development, Nashville has developed an urban heat island; especially on cool, clear nights, temperatures are up to warmer in the heart of the city than in rural outlying areas. The Nashville region lies within USDA Plant Hardiness Zone 7a. 
Nashville's long springs and autumns, combined with a diverse array of trees and grasses, can often make it uncomfortable for allergy sufferers. In 2008, Nashville was ranked the 18th-worst spring allergy city in the U.S. by the Asthma and Allergy Foundation of America. The coldest temperature ever officially recorded in Nashville was on January 21, 1985, and the hottest was on June 29, 2012. Nashville allegedly had a low of on January 26, 1832, but this was decades before record-keeping began and is not counted as the official record low. Donelson The mean annual temperature at Nashville International Airport is . Monthly averages range from in January to in July, with a diurnal temperature variation of . Diurnal temperature variation is highest in April and lowest in December, though it is also relatively high in October and relatively low in January. Donelson's climate classifications are Köppen Cfa and Trewartha CFak thanks to its very hot summers (average over ), mild winters (average over ) and long (8+ months) growing seasons (average over ). Precipitation is abundant year-round without any major difference, but there is still slight variation. The wet season runs from February through July, reaching its zenith in May with 128 mm of rain. The dry season runs from August through January, with an October nadir of 85 mm and a secondary December peak of 113 mm. Old Hickory The mean annual temperature at Old Hickory Dam is . Monthly averages range from in January to in August, with a diurnal temperature variation of . Diurnal temperature variation is highest in April and lowest in January. Old Hickory's climate classifications are Köppen Cfa and Trewartha DOak thanks to its very hot summers (average over ), mild winters (average over ) and moderate (4–7 months) growing seasons (average over ). Precipitation is abundant year-round without any major difference, but there is still slight variation. 
The wet season runs from February through July, reaching its zenith in April with 120 mm of rain. The dry season runs from August through January with an October/November nadir of 85 mm and secondary December peak of 113 mm. Data for record temperatures is spotty before June 2007, but temperatures in Old Hickory have been known to range from in January 1966 to in June and July 2012. Demographics As of the 2020 United States census, there were 689,447 people, 279,545 households, and 146,241 families residing in the city. The population increase of 88,225, or 14.67% over the 2010 figure of 601,222 residents, represented the largest net population increase in the city's history. The population density was . In 2010, there were 254,651 households and 141,469 families (55.6% of households). Of households with families, 37.2% had married couples living together, 14.1% had a female householder with no husband present, and 4.2% had a male householder with no wife present. 27.9% of all households had children under the age of 18, and 18.8% had at least one member 65 years of age or older. Of the 44.4% of households that are non-families, 36.2% were individuals, and 8.2% had someone living alone who was 65 years of age or older. The average household size was 2.38 and the average family size was 3.16. The age distribution was 22.2% under 18, 10.3% from 18 to 24, 32.8% from 25 to 44, 23.9% from 45 to 64, and 10.7% who were 65 or older. The median age was 34.2 years. For every 100 females, there were 94.1 males. For every 100 females age 18 and over, there were 91.7 males. The median income for a household in the city was $46,141, and the median income for a family was $56,377. Males with a year-round, full-time job had a median income of $41,017 versus $36,292 for females. The per capita income for the city was $27,372. About 13.9% of families and 18.2% of the population were below the poverty line, including 29.5% of those under age 18 and 9.9% of those age 65 or over. 
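The census figures quoted above are internally consistent; a quick arithmetic check, using only the numbers stated in this section:

```python
pop_2020 = 689_447   # 2020 census population
pop_2010 = 601_222   # 2010 census population

increase = pop_2020 - pop_2010
percent = increase / pop_2010 * 100

print(increase)           # 88225, the stated net increase
print(round(percent, 2))  # 14.67, the stated percentage growth
```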
Of residents 25 or older, 33.4% have a bachelor's degree or higher. Because of its relatively low cost of living and large job market, Nashville has become a popular city for immigrants. Nashville's foreign-born population more than tripled in size between 1990 and 2000, increasing from 12,662 to 39,596. The city's largest immigrant groups include Mexicans, Kurds, Vietnamese, Laotians, Arabs, and Somalis. There are also smaller communities of Pashtuns from Afghanistan and Pakistan concentrated primarily in Antioch. Nashville has the largest Kurdish community in the United States, numbering approximately 15,000. In 2009, about 60,000 Bhutanese refugees were being admitted to the U.S., and some were expected to resettle in Nashville. During the Iraqi election of 2005, Nashville was one of the few international locations where Iraqi expatriates could vote. The American Jewish community in Nashville dates back over 150 years, and numbered about 8,000 in 2015, plus 2,000 Jewish college students. Metropolitan area Nashville has the largest metropolitan area in the state of Tennessee, with a population of 1,989,519. The Nashville metropolitan area encompasses 13 of Middle Tennessee's 41 counties: Cannon, Cheatham, Davidson, Dickson, Macon, Maury, Robertson, Rutherford, Smith, Sumner, Trousdale, Williamson, and Wilson. The 2020 population of the Nashville-Davidson–Murfreesboro–Columbia combined statistical area was 2,118,233. Religion 59.6% of people in Nashville claim religious affiliation, according to information compiled by Sperling's BestPlaces. The dominant religion in Nashville is Christianity, comprising 57.7% of the population. The Christian population is broken down into 20.6% Baptists, 6.2% Catholics, 5.6% Methodists, 3.4% Pentecostals, 3.4% Presbyterians, 0.8% Mormons, and 0.5% Lutherans. 15.7% identify with other forms of Christianity, including the Orthodox Church and Disciples of Christ. Islam is the second largest religion, comprising 0.8% of the population. 
0.6% of the population adhere to eastern religions such as Buddhism, Sikhism, Jainism and Hinduism, and 0.3% follow Judaism. Economy In the 21st century's second decade, Nashville was described as a "southern boomtown" by numerous publications. In 2017, it had the third-fastest-growing metropolitan economy in the United States and "adds an average of 100 people a day to its net population increase". The Nashville region was also said to be the "Number One" Metro Area for Professional and Business Service Jobs in America; Zillow said it had the "hottest housing market in America". In 2013, the city ranked No. 5 on Forbes' list of the Best Places for Business and Careers. In 2015, Forbes ranked Nashville the 4th Best City for White Collar Jobs. In 2015, Business Facilities' 11th Annual Rankings report named Nashville the number one city for Economic Growth Potential. Fortune 500 companies with offices within Nashville include BNY Mellon, Bridgestone Americas, Ernst & Young, Community Health Systems, Dell, Deloitte, Dollar General, Hospital Corporation of America, Nissan North America, Philips, Tractor Supply Company, and UBS. Of these, Community Health Systems, Dollar General, SmileDirectClub, Hospital Corporation of America, and Tractor Supply Company are headquartered in the city. Many popular food companies are based in Nashville, including Captain D's, Hunt Brothers Pizza, O'Charley's, Logan's Roadhouse, J. Alexander's, and Stoney River Legendary Steaks. As the "home of country music", Nashville has become a major music recording and production center. The Big Three record labels, as well as numerous independent labels, have offices in Nashville, mostly in the Music Row area. Nashville has been the headquarters of guitar company Gibson since 1984. Since the 1960s, Nashville has been the second-largest music production center (after New York City) in the United States. 
Nashville's music industry is estimated to have a total economic impact of about $10 billion per year and to contribute about 56,000 jobs to the Nashville area. The area's largest industry is health care. Nashville is home to more than 300 health care companies, including Hospital Corporation of America (HCA), the world's largest private operator of hospitals. It was estimated that the health care industry contributes per year and 200,000 jobs to the Nashville-area economy. CoreCivic, formerly known as Corrections Corporation of America and one of the largest private corrections companies in the United States, was founded in Nashville in 1983, but moved out of the city in 2019. Vanderbilt University was one of its investors before the company's initial public offering. The City of Nashville's pension fund included "a $921,000 stake" in the company in 2017. The Nashville Scene notes that, "A drop in CoreCivic stock value, however minor, would have a direct impact on the pension fund that represents nearly 25,000 current and former Metro employees." The automotive industry is also becoming important for the Middle Tennessee region. Nissan North America moved its corporate headquarters in 2006 from Gardena, California (Los Angeles County) to Franklin, a suburb south of Nashville. Nissan's largest North American manufacturing plant is in Smyrna, another suburb of Nashville. Largely as a result of the increased development of Nissan and other Japanese economic interests in the region, Japan moved its former New Orleans consulate-general to Nashville's Palmer Plaza. General Motors operates an assembly plant in Spring Hill, about south of Nashville. Automotive parts manufacturer Bridgestone has its North American headquarters in Nashville and manufacturing plants and a distribution center in nearby counties. Other major industries in Nashville include insurance, finance, and publishing (especially religious publishing). 
The city hosts headquarters operations for several Protestant denominations, including the United Methodist Church, Southern Baptist Convention, National Baptist Convention USA, and the National Association of Free Will Baptists. Nashville is known for Southern confections, including Goo Goo Clusters, which have been made in Nashville since 1912. In May 2018, AllianceBernstein pledged to build a private client office in the city by mid-2019 and to move its headquarters from New York City to Nashville by 2024. The technology sector is an important and growing aspect of Nashville's economy. In November 2018, Amazon announced its plans to build an operations center in the Nashville Yards development to serve as the hub for its Retail Operations division. In April 2021, Oracle Corporation announced that it would construct a $1.2 billion campus in Nashville, which is expected to employ 8,500 by 2031. In December 2019, iHeartMedia selected Nashville as the site of its second digital headquarters. Real estate is becoming a driver of the city's economy. Based on a survey of nearly 1,500 real estate industry professionals conducted by PricewaterhouseCoopers and the Urban Land Institute, Nashville ranked 7th nationally in attractiveness to real estate investors for 2016. According to city figures, there is more than $2 billion in real estate projects underway or projected to start in 2016. Due to the high yields available to investors, Nashville has been attracting a great deal of capital from out of state. A key factor in the increase in investment has been the adjustment of the city's zoning code, which lets developers easily include a combination of residential, office, retail and entertainment space in their projects. Additionally, the city has invested heavily in public parks. Centennial Park is undergoing extensive renovations. 
The change in the zoning code and the investment in public space are consistent with the millennial generation's preference for walkable urban neighborhoods. Top employers According to the Nashville Business Journal, the top employers in the city are: Culture Much of the city's cultural life has revolved around its large university community. Particularly significant in this respect were two groups of critics and writers who were associated with Vanderbilt University in the early 20th century: the Fugitives and the Agrarians. Popular destinations include Fort Nashborough and Fort Negley, the former a reconstruction of the original settlement, the latter a semi-restored Civil War battle fort; the Tennessee State Museum; and The Parthenon, a full-scale replica of the original Parthenon in Athens. The Tennessee State Capitol is one of the oldest working state capitol buildings in the nation. The Hermitage, the former home of President Andrew Jackson, is one of the largest presidential homes open to the public, and is also one of the most visited. Dining Some of the more popular types of local cuisine include hot chicken, hot fish, barbecue, and meat and three. Entertainment and performing arts Nashville has a vibrant music and entertainment scene spanning a variety of genres; with its long history in the music scene, it is no surprise that the city was nicknamed "Music City." The Tennessee Performing Arts Center is the major performing arts center of the city. It is the home of the Nashville Repertory Theatre, the Nashville Opera, the Music City Drum and Bugle Corps, and the Nashville Ballet. In September 2006, the Schermerhorn Symphony Center opened as the home of the Nashville Symphony. As the city's name itself is a metonym for the country music industry, many popular attractions involve country music, including the Country Music Hall of Fame and Museum, Belcourt Theatre, and Ryman Auditorium. Hence, the city became known as America's "Country Music Capital." 
The Ryman was home to the Grand Ole Opry until 1974, when the show moved to the Grand Ole Opry House, east of downtown. The Opry plays there several times a week, except for an annual winter run at the Ryman. Many music clubs and honky-tonk bars are in downtown Nashville, particularly the area encompassing Lower Broadway, Second Avenue, and Printer's Alley, which is often referred to as "the District". Each June, the CMA Music Festival (formerly known as Fan Fair) brings thousands of country fans to the city. The Tennessee State Fair is also held annually in September. Nashville was once home to television shows such as Hee Haw and Pop! Goes the Country, as well as The Nashville Network and later, RFD-TV. Country Music Television and Great American Country currently operate from Nashville. The city was also home to the Opryland USA theme park, which operated from 1972 to 1997 before being closed by its owners (Gaylord Entertainment Company) and soon after demolished to make room for the Opry Mills mega-shopping mall. The Contemporary Christian music industry is based along Nashville's Music Row, with a great influence in neighboring Williamson County. The Christian record companies include EMI Christian Music Group, Provident Label Group and Word Records. Music Row houses many gospel music and Contemporary Christian music companies, centered around 16th and 17th Avenues South. On River Road, off Charlotte Pike in West Nashville, the CabaRay opened its doors on January 18, 2018. The performing venue of Ray Stevens, it offers a Vegas-style dinner-and-show atmosphere, along with a piano bar and a gift shop. Although Nashville was never known as a major jazz town, it did have many great jazz bands, including The Nashville Jazz Machine, led by Dave Converse, and its current version, the Nashville Jazz Orchestra, led by Jim Williamson, as well as The Establishment, led by Billy Adair. 
The Francis Craig Orchestra entertained Nashvillians from 1929 to 1945 from the Oak Bar and Grille Room in the Hermitage Hotel. Craig's orchestra was also the first to broadcast over local radio station WSM-AM and enjoyed phenomenal success with a 12-year show on the NBC Radio Network. In the late 1930s, he introduced a newcomer, Dinah Shore, a local graduate of Hume Fogg High School and Vanderbilt University. Radio station WMOT-FM in nearby Murfreesboro, which formerly programmed jazz, aided significantly in the recent revival of the city's jazz scene, as has the non-profit Nashville Jazz Workshop, which holds concerts and classes in a renovated building in the north Nashville neighborhood of Germantown.
Personal pronouns, subject and object The standard word order in Novial is subject-verb-object, as in English. Therefore, the object need not be marked to distinguish it from the subject, and nominative (I, he, she and so on) and oblique (me, him, her) pronouns are identical: The accusative (direct object) is therefore most often identical to the nominative (subject). However, in case of ambiguity, an optional accusative ending, -m (-em after a consonant), is available but is rarely used. The preposition em is equivalent to this ending. The personal possessive adjectives are formed from the pronouns by adding -n (-en after a consonant). This is in fact the genitive (possessive) of the pronoun, so men means both "my" and "mine" ("of me"): The possessive pronouns are thus men, vun, len etc., lun and nusen, vusen, lesen etc. and lusen. Possession may also be expressed with the preposition de: de me, de vu, and so on. The reflexive pronoun is se: lo admira se – he admires himself. The impersonal pronoun one (one/they/you) is on, with the possessive form onen. Verbs Verb forms never change with person or number. Most verb tenses, moods and voices are expressed with auxiliary verbs preceding the root form of the main verb. The auxiliaries follow the same word order as their English equivalents. The following phrases give examples of the verb forms: Present active participle: protektent – "protecting" Past passive participle: protektet – "protected" Novial clearly distinguishes the passive of becoming and the passive of being. In English the forms are often the same, using the auxiliary verb to be followed by the past participle. However, the passive of becoming is also often expressed with the verb to get, which is used in the examples below. The passive voice of becoming is formed with the auxiliary bli followed by the root verb form. 
It can then be conjugated into the previously mentioned forms, for example: The passive voice of being is formed with the auxiliary es followed by the past passive participle (stem + -t). For example: Articles The definite article is li which is invariant. It is used as in English. There is no indefinite article, although un (one) can be used. Nouns The plural noun is formed by adding –s to the singular (-es after a consonant). The accusative case is generally identical to the nominative but can optionally be marked with the ending -m (-em after a consonant) with the plural being -sem (-esem after a consonant) or with the preposition em. The genitive is formed with the ending -n (-en after a consonant) with the plural being -sen (-esen after a consonant) or with the preposition de. Other cases are formed with prepositions. Adjectives All adjectives end in -i, but this may be dropped if it is easy enough to pronounce and no confusion will be caused. Adjectives precede the noun qualified. Adjectives do not agree with the noun but may be given noun endings if there is no noun present to receive them. Comparative adjectives are formed by placing various particles (plu, tam, and min) in front of the adjective receiving the comparison. Likewise, the superlative particles (maxim and minim) precede the adjective. The adjective does not receive an inflection to its ending. Adverbs An adjective is converted to a corresponding adverb by adding -m after the -i ending of the adjective. Comparative and superlative adverbs are formed in the same manner as comparative and superlative adjectives: by placing a specific particle before the adverb receiving the comparison. Vocabulary Affixes See the Table of Prefixes and Table of Suffixes at the Novial Wikibook. Novial compared to Esperanto and Ido Jespersen was a professional linguist, unlike Esperanto's creator. He disliked the arbitrary and artificial character that he found in Esperanto and Ido. 
Additionally, he objected to those languages' inflectional systems, which he found needlessly complex. He sought to make Novial at
With renewed interest in constructed languages brought on by the Internet, some people rediscovered Novial. Phonology Consonants Vowels Stress The basic rule is: stress the vowel before the last consonant. However, consonantal flexional endings (i.e. -d, -m, -n, -s) do not count for this (e.g. "bóni" but "bónim", not "boním"; "apérta" but "apértad", not "apertád"), so perhaps it is better to say that the vowel before the final consonant of the stem takes the stress. Orthography The digraphs ch and sh represent or , depending on the speaker. For example, chokolate would be pronounced either or . Grammar Like many constructed IALs, Novial has a simple and regular grammar. The main word order is SVO, which removes the need for marking the object of a sentence with the accusative (since the position normally tells which word is the object). There is, however, a way to mark the accusative. There is no grammatical gender (but the sex or gender of referents can be marked). Verbs are not conjugated according to person or number, and have a regular conjugation. Nouns mainly end in e, a, o, u or um in the singular. There are definite forms of nouns, marked with an article, and singular and plural forms, the plural marked with the suffix -s after vowels or -es after consonants. There is also a form for indefinite number (as in Mandarin Chinese and Japanese, for example), expressed by removing the ending of the noun in the singular (leone – lion, leon es kruel – a/the lion is cruel, or lions are cruel). If a noun refers to a living being, the form ending in -e is neutral with regard to sex, the one ending in -a female, and the one ending in -o male. If the noun is based on an adjective, nouns referring to living beings can be made with the previously mentioned rule; furthermore, nouns referring to concrete objects are formed with -u, and abstractions with -um. The third person pronouns follow the same rule, together with the definite article. 
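The ending rules in this section (plural -s/-es, optional accusative -m/-em, genitive -n/-en) and the stress rule under Phonology are regular enough to state mechanically. The sketch below is an illustration only, with function names of my own invention; it ignores digraphs and other edge cases:

```python
VOWELS = "aeiou"

def add_ending(word, after_vowel, after_consonant):
    # Novial uses the short ending after a vowel, the longer one after a consonant
    return word + (after_vowel if word[-1] in VOWELS else after_consonant)

def plural(noun):
    return add_ending(noun, "s", "es")

def accusative(word):          # optional marking, rarely used
    return add_ending(word, "m", "em")

def genitive(word):            # also forms the possessive pronouns
    return add_ending(word, "n", "en")

print(plural("leone"))              # leones
print(genitive("me"))               # men ("my"/"mine")
print(accusative(plural("leone")))  # leonesem (the plural accusative -sem)

FLEXIONAL = "dmns"  # consonantal flexional endings do not count for stress

def stressed_index(word):
    # strip a flexional ending so it does not attract stress
    stem = word[:-1] if word[-1] in FLEXIONAL else word
    # find the last consonant of the stem...
    last_cons = max(i for i, c in enumerate(stem) if c not in VOWELS)
    # ...and the nearest vowel before it carries the stress
    return max(i for i in range(last_cons) if stem[i] in VOWELS)

for w in ("boni", "bonim", "aperta", "apertad"):
    i = stressed_index(w)
    print(w, "-> stress on", w[i], "at index", i)
```

Running the loop reproduces the examples from the text: "boni" and "bonim" both stress the o, while "aperta" and "apertad" both stress the e.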
In the case of a noun that refers to an instrument – a tool or a means – the word that ends in -e is the tool or the means itself, the one in -a the verb describing its use, and the one in -o the noun describing the act of using it.
have the same meaning as in English, although they are called B, Bes, and Beses instead of B, B flat and B double flat. Denmark also uses H, but uses Bes instead of Heses for B double flat. 12-tone chromatic scale The following chart lists the names used in different countries for the 12 notes of a chromatic scale built on C. The corresponding symbols are shown within parentheses. Differences between German and English notation are highlighted in bold typeface. Although the English and Dutch names are different, the corresponding symbols are identical. Note designation in accordance with octave name The table below shows each octave and the frequencies for every note of pitch class A. The traditional (Helmholtz) system centers on the great octave (with capital letters) and small octave (with lower case letters). Lower octaves are named "contra" (with primes before), higher ones "lined" (with primes after). Another system (scientific) suffixes a number (starting with 0, or sometimes −1). In this system A4 is nowadays standardised at 440 Hz, lying in the octave containing notes from C4 (middle C) to B4. The lowest note on most pianos is A0, the highest C8. The MIDI system for electronic musical instruments and computers uses a straight count starting with note 0 for C−1 at 8.1758 Hz up to note 127 for G9 at 12,544 Hz. Written notes A written note can also have a note value, a code that determines the note's relative duration. In order of halving duration, they are: double note (breve); whole note (semibreve); half note (minim); quarter note (crotchet); eighth note (quaver); sixteenth note (semiquaver); thirty-second note (demisemiquaver); sixty-fourth note (hemidemisemiquaver); and hundred twenty-eighth note. In a score, each note is assigned a specific vertical position (a line or space) on the staff, as determined by the clef. Each line or space is assigned a note name. 
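Each note value above lasts half as long as the one before it, so the durations relative to a whole note are powers of two. A small illustrative table, using the names from the list in the text:

```python
from fractions import Fraction

NOTE_VALUES = [
    "double note (breve)",
    "whole note (semibreve)",
    "half note (minim)",
    "quarter note (crotchet)",
    "eighth note (quaver)",
    "sixteenth note (semiquaver)",
    "thirty-second note (demisemiquaver)",
    "sixty-fourth note (hemidemisemiquaver)",
    "hundred twenty-eighth note",
]

# each step halves the duration; the whole note (index 1) is the unit
durations = [Fraction(2, 2 ** i) for i in range(len(NOTE_VALUES))]

for name, dur in zip(NOTE_VALUES, durations):
    print(f"{name}: {dur} of a whole note")
# e.g. "quarter note (crotchet): 1/4 of a whole note"
```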
These names are memorized by musicians and allow them to know at a glance the proper pitch to play on their instruments. The staff above shows the notes C, D, E, F, G, A, B, C and then in reverse order, with no key signature or accidentals. Note frequency (in hertz) Music can be composed of notes at any arbitrary physical frequency. Since the physical causes of music are vibrations, they are often measured in hertz (Hz), with 1 Hz meaning one vibration per second. For historical and other reasons, especially in Western music, only twelve notes of fixed frequencies are used. These fixed frequencies are mathematically related to each other, and are defined around the central note, A4. The current "standard pitch" or modern "concert pitch" for this note is 440 Hz, although this varies in actual practice (see History of pitch standards). The note-naming convention specifies a letter, any accidentals, and an octave number. Each note is an integer number of half-steps away from concert A (A4). Let this distance be denoted n. If the note is above A4, then n is positive; if it is below A4, then n is negative. The frequency of the note, f(n) (assuming equal temperament), is then: f(n) = 2^(n/12) × 440 Hz. For example, one can find the frequency of C5, the first C above A4. There are 3 half-steps between A4 and C5 (A4 → A♯4 → B4 → C5), and the note is above A4, so n = 3. The note's frequency is: f(3) = 2^(3/12) × 440 Hz ≈ 523.25 Hz. To find the frequency of a note below A4, the value of n is negative. For example, the F below A4 is F4. There are 4 half-steps (A4 → A♭4 → G4 → G♭4 → F4), and the note is below A4, so n = −4. The note's frequency is: f(−4) = 2^(−4/12) × 440 Hz ≈ 349.23 Hz. Finally, it can be seen from this formula that octaves automatically yield powers of two times the original frequency, since n is a multiple of 12 (n = 12v, where v is the number of octaves up or down), and so the formula reduces to: f = 2^(12v/12) × 440 Hz = 2^v × 440 Hz, yielding a factor of 2^v. In fact, this is the means by which this formula is derived, combined with the notion of equally-spaced intervals. 
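The worked examples above (C5 three half-steps above A4, F4 four half-steps below) follow directly from the equal-temperament relation f(n) = 2^(n/12) × 440 Hz:

```python
def note_frequency(n, a4=440.0):
    """Frequency of the note n half-steps above (n > 0) or below (n < 0) A4,
    assuming twelve-tone equal temperament."""
    return a4 * 2 ** (n / 12)

print(round(note_frequency(3), 2))   # C5 -> 523.25 Hz
print(round(note_frequency(-4), 2))  # F4 -> 349.23 Hz
print(note_frequency(12))            # one octave up -> 880.0 Hz, a factor of 2
```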
The distance of an equally tempered semitone is divided into 100 cents. So 1200 cents are equal to one octave – a frequency ratio of 2:1. This means that a cent is precisely equal to 2^(1/1200), which is approximately 1.000578. For use with the MIDI (Musical Instrument Digital Interface) standard, a frequency mapping is defined by: m = 69 + 12 × log2(f/440), where m is the MIDI note number (and 69 is the number of semitones between C−1 (note 0) and A4). And in the opposite direction, to obtain the frequency from a MIDI note m, the formula is defined as: f = 440 × 2^((m − 69)/12). For notes in an A440 equal temperament, this formula delivers the standard MIDI note number (m). Any other frequencies fill the space between the whole numbers evenly. This lets MIDI instruments be tuned accurately in any microtuning scale, including non-western traditional tunings. Note names and their history Music notation systems have used letters of the alphabet for centuries. The 6th-century philosopher Boethius is known to have used the first fourteen letters of the classical Latin alphabet (the letter J did not exist until the 16th century), A B C D E F G H I K L M N O, to signify the notes of the two-octave range that was in use at the time and in modern scientific pitch notation are represented as A2 B2 C3 D3 E3 F3 G3 A3 B3 C4 D4 E4 F4 G4. Though it is not known whether this was his devising or common usage at the time, this is nonetheless called Boethian notation. Although Boethius is the first author known to use this nomenclature in the literature, Ptolemy wrote of the two-octave range five centuries before, calling it the perfect system or complete system – as opposed to other, smaller-range note systems that did not contain all possible species of octave (i.e., the seven octaves starting from A, B, C, D, E, F, and G).
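Returning to the frequency formulas earlier in this section, the MIDI mapping m = 69 + 12 log2(f/440) and its inverse can be sketched in Python (an illustrative snippet; the function names are this example's own):

```python
import math

# MIDI note number from frequency: m = 69 + 12 * log2(f / 440),
# and the inverse mapping f = 440 * 2**((m - 69) / 12).
def freq_to_midi(f: float) -> float:
    return 69 + 12 * math.log2(f / 440.0)

def midi_to_freq(m: float) -> float:
    return 440.0 * 2 ** ((m - 69) / 12)

print(freq_to_midi(440.0))          # 69.0 (A4)
print(round(midi_to_freq(60), 2))   # 261.63 (C4, middle C)
print(round(midi_to_freq(0), 4))    # 8.1758 (C-1, MIDI note 0)
```

Frequencies between equal-temperament pitches map to fractional note numbers, which is what allows the microtuning mentioned above.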
Following this, the range (or compass) of used notes was extended to three octaves, and the system of repeating letters A–G in each octave was introduced, these being written as lower-case for the second octave (a–g) and double lower-case letters for the third (aa–gg). When the range was extended down by one note, to a G, that note was denoted using the Greek letter gamma (Γ). (It is from this that the French word for scale, gamme, derives, and the English word gamut, from "Gamma-Ut", the lowest note in Medieval music notation.) The remaining five notes of the chromatic scale (the black keys on a piano keyboard) were added gradually; the first being B♭, since B was flattened in certain modes to avoid the dissonant tritone interval. This change was not always shown in notation, but when written, B♭ (B-flat) was written as a Latin, round "b", and B♮ (B-natural) a Gothic script (known as Blackletter) or "hard-edged" b. These evolved into the modern flat (♭) and natural (♮) symbols respectively. The sharp symbol arose from a barred b, called the "cancelled b". In parts of Europe, including Germany, the Czech Republic, Slovakia, Poland, Hungary, Norway, Denmark, Serbia, Croatia, Slovenia, Finland and Iceland (and Sweden before the 1990s), the Gothic b transformed into the letter H (possibly for hart, German for hard, or just because the Gothic b resembled an H). Therefore, in German music notation, H is used instead of B♮ (B-natural), and B instead of B♭ (B-flat). Occasionally, music written in German for international use will use H for B-natural and Bb for B-flat (with a modern-script lower-case b instead of a flat sign). Since a Bes or B♭ in Northern Europe (i.e., a B♭♭ elsewhere) is both rare and unorthodox (more likely to be expressed as Heses), it is generally clear what this notation means. In Italian, Portuguese, Spanish, French, Romanian, Greek, Albanian,
Notes are the building blocks of much written music: discretizations of musical phenomena that facilitate performance, comprehension, and analysis. The term note can be used in both generic and specific senses: one might say either "the piece 'Happy Birthday to You' begins with two notes having the same pitch", or "the piece begins with two repetitions of the same note". In the former case, one uses note to refer to a specific musical event; in the latter, one uses the term to refer to a class of events sharing the same pitch. (See also: Key signature names and translations.) Two notes with fundamental frequencies in a ratio equal to any integer power of two (e.g., half, twice, or four times) are perceived as very similar. Because of that, all notes with these kinds of relations can be grouped under the same pitch class. In European music theory, most countries use the solfège naming convention do–re–mi–fa–sol–la–si, including for instance Italy, Portugal, Spain, France, Romania, most Latin American countries, Greece, Albania, Bulgaria, Turkey, Russia, Arabic-speaking and Persian-speaking countries. However, in English- and Dutch-speaking regions, pitch classes are typically represented by the first seven letters of the Latin alphabet (A, B, C, D, E, F and G). Several European countries, including Germany, adopt an almost identical notation, in which H is substituted for B (see below for details). Byzantium used the names Pa–Vu–Ga–Di–Ke–Zo–Ni (Πα–Βου–Γα–Δι–Κε–Ζω–Νη). In traditional Indian music, musical notes are called svaras and commonly represented using the seven notes, Sa, Re, Ga, Ma, Pa, Dha and Ni. The eighth note, or octave, is given the same name as the first, but has double its frequency. The name octave is also used to indicate the span between a note and another with double frequency. 
To differentiate two notes that have the same pitch class but fall into different octaves, the system of scientific pitch notation combines a letter name with an Arabic numeral designating a specific octave. For example, the now-standard tuning pitch for most Western music, 440 Hz, is named a′ or A4. There are two formal systems to define each note and octave, the Helmholtz pitch notation and the scientific pitch notation. Accidentals Letter names are modified by the accidentals. The sharp sign (♯) raises a note by a semitone or half-step, and a flat (♭) lowers it by the same amount. In modern tuning a half step has a frequency ratio of 2^(1/12) (the twelfth root of two), approximately 1.0595. The accidentals are written after the note name: so, for example, F♯ represents F-sharp, B♭ is B-flat, and C is C natural (or C♮). Additional accidentals are the double-sharp (𝄪), raising the frequency by two semitones, and double-flat (𝄫), lowering it by that amount. In musical notation, accidentals are placed before the note symbols. Systematic alterations to the seven lettered pitches in the scale can be indicated by placing the symbols in the key signature, which then apply implicitly to all occurrences of corresponding notes. Explicitly noted accidentals can be used to override this effect for the remainder of a bar. A special accidental, the natural symbol (♮), is used to indicate a pitch unmodified by the alterations in the key signature. Effects of key signature and local accidentals do not accumulate. If the key signature indicates G♯, a local flat before a G makes it G♭ (not G♮), though often this type of rare accidental is expressed as a natural, followed by a flat (♮♭) to make this clear. Likewise (and more commonly), a double sharp sign on a key signature with a single sharp indicates only a double sharp, not a triple sharp. Assuming enharmonicity, many accidentals will create equivalences between pitches that are written differently. For instance, raising the note B to B♯ is equal to the note C.
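How a letter name, accidentals, and octave number combine to determine a frequency (including the enharmonic equivalence of B♯ and C noted above) can be sketched in Python; this is an illustrative snippet, the lookup tables and function name are this example's own, and it handles only ASCII "#" and "b" accidentals:

```python
# Half-steps of each natural letter above C, within an octave.
LETTER_STEPS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
# Each accidental shifts by one semitone: '#' up, 'b' down.
ACCIDENTAL = {"#": 1, "b": -1}

def name_to_frequency(name: str) -> float:
    """Frequency of a note written like 'A4', 'F#4', or 'Bb3'."""
    letter, rest = name[0], name[1:]
    shift = 0
    while rest and rest[0] in ACCIDENTAL:
        shift += ACCIDENTAL[rest[0]]
        rest = rest[1:]
    octave = int(rest)
    # Half-steps from A4: offset within the octave plus 12 per octave,
    # minus the 9 half-steps from C4 up to A4.
    n = LETTER_STEPS[letter] + shift + 12 * (octave - 4) - 9
    return 440.0 * 2 ** (n / 12)

print(round(name_to_frequency("A4"), 2))   # 440.0
print(round(name_to_frequency("C4"), 2))   # 261.63
print(round(name_to_frequency("B#3"), 2))  # 261.63 (enharmonic with C4)
```

Note that B♯3, not B♯4, coincides with C4, because the octave number increments at C in scientific pitch notation.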
Assuming all such equivalences, the complete chromatic scale adds five additional pitch classes to the original seven lettered notes for a total of 12 (the 13th note completing the octave), each separated by a half-step. Notes that belong to the diatonic scale relevant in the context are sometimes called diatonic notes; notes
Finally, all Australian and New Zealand nephrologists participate in career-long professional and personal development through the Royal Australasian College of Physicians and other bodies such as the Australian and New Zealand Society of Nephrology and the Transplant Society of Australia and New Zealand. United Kingdom In the United Kingdom, nephrology (often called renal medicine) is a subspecialty of general medicine. A nephrologist has completed medical school, foundation year posts (FY1 and FY2) and core medical training (CMT), and has passed the Membership of the Royal College of Physicians (MRCP) exam before competing for a National Training Number (NTN) in renal medicine. The typical Specialty Training (when they are called a registrar, or an ST) is five years and leads to a Certificate of Completion of Training (CCT) in both renal medicine and general (internal) medicine. In those five years, they usually rotate yearly between hospitals in a region (known as a deanery). They are then accepted on to the Specialist Register of the General Medical Council (GMC). Specialty trainees often interrupt their clinical training to obtain research degrees (MD/PhD). After achieving CCT, the registrar (ST) may apply for a permanent post as Consultant in Renal Medicine. Subsequently, some Consultants practice nephrology alone; others combine it with Intensive Care (ICU), General (Internal) or Acute Medicine. United States Nephrology training can be accomplished through one of two routes. The first pathway is through internal medicine, leading to an Internal Medicine/Nephrology specialty, sometimes known as "adult nephrology". The second pathway is through Pediatrics, leading to a specialty in Pediatric Nephrology. In the United States, after medical school adult nephrologists complete a three-year residency in internal medicine followed by a two-year (or longer) fellowship in nephrology.
Complementary to an adult nephrologist, a pediatric nephrologist will complete a three-year pediatric residency after medical school, or a four-year Combined Internal Medicine and Pediatrics residency. This is followed by a three-year fellowship in Pediatric Nephrology. Once training is satisfactorily completed, the physician is eligible to take the American Board of Internal Medicine (ABIM) or American Osteopathic Board of Internal Medicine (AOBIM) nephrology examination. Nephrologists must be approved by one of these boards. To be approved, the physician must fulfill the requirements for education and training in nephrology in order to qualify to take the board's examination. If a physician passes the examination, then he or she can become a nephrology specialist. Typically, nephrologists also need two to three years of training in an ACGME- or AOA-accredited fellowship in nephrology. Nearly all programs train nephrologists in continuous renal replacement therapy; fewer than half in the United States train in the provision of plasmapheresis. Only pediatric-trained physicians are able to train in pediatric nephrology, and internal medicine (adult) trained physicians may enter general (adult) nephrology fellowships. Diagnosis History and physical examination are central to the diagnostic workup in nephrology. The history typically includes the present illness, family history, general medical history, diet, medication use, drug use and occupation. The physical examination typically includes an assessment of volume state, blood pressure, heart, lungs, peripheral arteries, joints, abdomen and flank. A rash may be relevant too, especially as an indicator of autoimmune disease. Examination of the urine (urinalysis) allows a direct assessment for possible kidney problems, which may be suggested by the appearance of blood in the urine (hematuria), protein in the urine (proteinuria), pus cells in the urine (pyuria) or cancer cells in the urine.
A 24-hour urine collection was formerly used to quantify daily protein loss (see proteinuria), urine output, creatinine clearance or electrolyte handling by the renal tubules. It is now more common to measure protein loss from a small random sample of urine. Basic blood tests can be used to check the concentration of hemoglobin, white cell count, platelets, sodium, potassium, chloride, bicarbonate, urea, creatinine, albumin, calcium, magnesium, phosphate, alkaline phosphatase and parathyroid hormone (PTH) in the blood. All of these may be affected by kidney problems. The serum creatinine concentration is the most important blood test, as it is used to estimate the function of the kidney, called the creatinine clearance or estimated glomerular filtration rate (GFR). It is advisable for patients with long-term kidney disease to keep an up-to-date list of their medications and their latest blood tests, especially the blood creatinine level. In the United Kingdom, blood tests can be monitored online by the patient, through a website called RenalPatientView. More specialized tests can be ordered to discover or link certain systemic diseases to kidney failure, such as infections (hepatitis B, hepatitis C), autoimmune conditions (systemic lupus erythematosus, ANCA vasculitis), paraproteinemias (amyloidosis, multiple myeloma) and metabolic diseases (diabetes, cystinosis). Structural abnormalities of the kidneys are identified with imaging tests. These may include medical ultrasonography/ultrasound, computed axial tomography (CT), scintigraphy (nuclear
advocated preserving the use of renal and nephro as appropriate, including in "nephrology" and "renal replacement therapy", respectively. Nephrology also studies systemic conditions that affect the kidneys, such as diabetes and autoimmune disease, and systemic diseases that occur as a result of kidney disease, such as renal osteodystrophy and hypertension. A physician who has undertaken additional training and become certified in nephrology is called a nephrologist. The term "nephrology" was first used in about 1960, after the French "néphrologie" proposed by Pr. Jean Hamburger in 1953, from the Greek νεφρός / nephrós (kidney). Before then, the specialty was usually referred to as "kidney medicine". Scope Nephrology concerns the diagnosis and treatment of kidney diseases, including electrolyte disturbances and hypertension, and the care of those requiring renal replacement therapy, including dialysis and renal transplant patients. The word "dialysis" dates from the mid 19th century: via Latin from the Greek "dialusis", from "dialuein" (split, separate), from "dia" (apart) and "luein" (set free). In other words, dialysis replaces the primary (excretory) function of the kidney: it separates (and removes) excess toxins and water from the blood, placing them in the urine. Many diseases affecting the kidney are systemic disorders not limited to the organ itself, and may require special treatment. Examples include acquired conditions such as systemic vasculitides (e.g. ANCA vasculitis) and autoimmune diseases (e.g. lupus), as well as congenital or genetic conditions such as polycystic kidney disease. Patients are referred to nephrology specialists after a urinalysis, for various reasons, such as acute kidney injury, chronic kidney disease, hematuria, proteinuria, kidney stones, hypertension, and disorders of acid/base or electrolytes. Nephrologist A nephrologist is a physician who specializes in the care and treatment of kidney disease.
Nephrology requires additional training to become an expert with advanced skills. Nephrologists may provide care to people without kidney problems and may work in general/internal medicine, transplant medicine, immunosuppression management, intensive care medicine, clinical pharmacology, perioperative medicine, or pediatric nephrology. Nephrologists may further sub-specialise in dialysis, kidney transplantation, chronic kidney disease, cancer-related kidney diseases (Onconephrology), procedural nephrology or other non-nephrology areas as described above. Procedures a nephrologist may perform include native kidney and transplant kidney biopsy, dialysis access insertion (temporary vascular access lines, tunnelled vascular access lines, peritoneal dialysis access lines), fistula management (angiographic or surgical fistulogram and plasty), and bone biopsy. Bone biopsies are now unusual. Training India To become a nephrologist in India, one has to complete an MBBS degree (5½ years), followed by an MD/DNB (3 years) either in medicine or paediatrics, followed by a DM/DNB (3 years) course in either nephrology or paediatric nephrology. Australia and New Zealand Nephrology training in Australia and New Zealand typically includes completion of a medical degree (Bachelor of Medicine, Bachelor of Surgery: 4–6 years), internship (1 year), Basic Physician Training (3 years minimum), successful completion of the Royal Australasian College of Physicians written and clinical examinations, and Advanced Physician Training in Nephrology (2–3 years). The training pathway is overseen and accredited by the Royal Australasian College of Physicians. Increasingly, nephrologists may additionally complete a post-graduate degree (usually a PhD) in a nephrology research interest (3–4 years).
The closest equivalents the children used were fini + verb 'stop doing something', komenci + verb 'start doing something', ankoraŭ 'still', and kaj poste 'and then'; but even then, usage was not as common as equivalents in the adstrate language. -Iĝi was, however, used on adjectival roots: Malheliĝas kaj ili ankoraŭ estas ĉe la plaĝo. – It's becoming dark and they are still on the beach. The word order was mostly SVO. OSV order was also attested, but half of all instances were with the Swiss German-speaking child, and Swiss German allows preposing the object. Related to the fixed word order, there is evidence that the accusative case has become redundant. Usage closely reflects the role of case in the adstrate language, being used only where consistent with the other language, but not always even there. Usage ranged from ≈100% with the Slovak-speaking children, to 0% with the French-speaking child, despite the fact that the French mother consistently used the accusative case in her own speech. Slovak has an accusative case on nouns; French does not. Other children used the accusative in only some of the contexts required by standard Esperanto, largely reflecting usage in their other language. Other patterns emerged as well. The Croatian child, for example, used the accusative only on personal pronouns immediately following a verb (underlined): En la sepa, unu infano prenis lian ŝtrumpo. (Standard: lian ŝtrumpon) – At seven o'clock, a child took his sock. But: Poste li iris kaj poste li prenis en unu mano lia simio. (Standard: lian simion) – Then he went and then he took in one hand his monkey. Among children that do use the accusative, its usage may be regularized from adult usage, at least at young ages. For example, when a screw dropped out of a lock, a young (≤ 5-year-old) child said it malvenis la pordon. Besides the novel use of mal- with veni 'to come' to mean 'come away from', the accusative is not used in adult speech for motion away, but only motion towards.
However, in this case the child generalized the usage of the accusative for direct objects. Lindstedt, on the other hand, referencing Bergen's study, contends that "it is difficult to find convincing examples of changes introduced by the process of nativisation. All examples proposed seem rather to be due to (1) transfers from the children's other native languages, (2) differences between the spoken and written register of Esperanto and, in some cases, (3) incomplete acquisition." Some of the features, such as phonological reduction, can be found in the speech of some fluent non-native speakers, while others, such as the attrition of the accusative, are completely absent from the speech of some native-speaking children. Word derivation Native-speaking children, especially at a young age, may coin words that do not exist in the speech of their parents, often for concepts for which Esperanto has a word they do not yet know, by exploiting the morphology of the language. This is analogous to what adult speakers do for concepts where Esperanto lacks a word, and indicates that some of the grammatical alterations that adult learners may find difficult come easily to native-speaking children.
For example:
Antonyms in mal-. The prefix mal- is extremely productive, and children extend it beyond the usage they hear:
malmiksi 'to separate' (miksi to mix)
malpluvi 'to stop raining' (pluvi to rain)
malscias 'is ignorant of' (scias knows)
malnuna 'past' (nuna present)
malfari 'to break (un-make)' (fari to make)
maltie 'here' (tie there)
malstartas 'turn off (an engine)' (startas 'starts', standard Esperanto ŝaltas 'switches on')
malĝustigis 'broke' (ĝustigis repaired, made right)
malsandviĉiĝis 'became (a shape) which isn't a sandwich anymore' (sandviĉ-iĝis 'became a sandwich', of a brother playing with cushions)
malstelita 'not surrounded by stars' (of the moon; from stelita 'starred')
malmateno 'evening' (mateno morning)
malio 'nothing' (io 'something'; standard Esperanto nenio 'nothing')
malinterne 'externally' (interne internally)
malgraveda 'no longer pregnant' (graveda pregnant)
Containers in -ujo:
elektrujo 'a battery' (elektro electricity)
Tendencies in -ema:
ventrema 'fat' (tending to belly-ness, from ventro 'belly')
Places in -ejo:
triciklejo 'a place for tricycles'
Feminine in -ino:
penisino 'vagina' (peniso penis)
Instrument in -ilo:
maltajpilo 'delete key' (maltajpi to delete, un-type, from tajpi to type)
Verbs from nouns:
nazas 'rubs noses' (nazo nose)
buŝas 'kisses on the mouth' (buŝo mouth)
langeti 'to give a little lick' (diminutive, from lango tongue)
dentumado 'activity with teeth' (dento tooth, -umi doing something undefined with, -ado noun of action)
kuvi 'to have a bath' (kuvo 'tub'; standard Esperanto bani sin 'to bathe oneself')
mukis '(my nose) was running' (muko 'snot', by analogy with sangis 'bled', from sango 'blood')
literiĝas 'the letters are changing' (middle voice, from litero 'letter (of the alphabet)')
ne seĝu sur la divano 'don't sit on the couch' (seĝo 'chair'; standard Esperanto sidu 'sit')
muzi 'to museum' (from muzeo
speakers have limited opportunity to meet one another except where meetings are specially arranged. For that reason, many parents consider it important to bring their children regularly to Esperanto conventions such as the annual "Renkontiĝo de Esperanto-familioj" (or "Esperantistaj familioj"; REF, since 1979). Similarly, the annual happens alongside the largest Esperanto convention, the World Congress of Esperanto (Universala Kongreso). List of noted native speakers Below is a list of noted native Esperanto speakers. The billionaire George Soros has often appeared on such lists, but Humphrey Tonkin, the translator of Soros's father's memoir Maskerado ĉirkaŭ la morto into English (under the title Masquerade: The Incredible True Story of How George Soros' Father Outsmarted the Gestapo), has disputed this. He has made no statements either way concerning Soros's brother. Daniel Bovet Petr Ginz Kim J. Henriksen Ino Kolbe Carlo Minnaja Paul Soros Grammatical characteristics The Esperanto of native-speaking children differs from the standard Esperanto spoken by their parents. In some cases this is due to interference from their other native language (the adstrate), but in others it appears to be an effect of acquisition. Bergen (2001) found the following patterns in a study of eight native-speaking children, aged 6 to 14, who were bilingual in Hebrew (two siblings), Slovak (two siblings), French, Swiss German, Russian, and Croatian. Phonological reduction (usually to schwa) of vowels in common grammatical suffixes and one-syllable grammatical words. This occurred about 5% of the time. The reduced grammatical suffixes were mostly the -o of nouns and -as of present-tense verbs, but occasionally also the -a of adjectives. Reduced grammatical words included personal pronouns (which all end in i), the article la 'the', and prepositions such as al 'to' and je (a generic preposition).
The article la was sometimes omitted with the Slavic speakers, as might be expected as a contact effect. Proper nouns were generally unassimilated, either to Esperanto grammatical suffixes or to stress patterns. Proper nouns are common exceptions to grammatical rules in many languages, and this pattern is common among L2-speakers of Esperanto as well. However, stress was also observed to vary in native words, for example nómiĝas 'is/am called' and ámikoj 'friends' (stress expected on the i in both cases). Children were not observed to use compound tenses (esti + a participle) or aspectual affixes (ek-, -iĝi, -adi, re-, el-) on verbal roots. Except for simple passives, the parents were not observed to use compound tenses either. However, they did use aspectual affixes (at least in the formal context of Bergen's interviews), but nonetheless the children did not use such affixes even when their other language was Slavic, where aspectual affixes are important.
in earlier attacks, the initial objective of this offensive was to capture the border town of Jalapa to install a provisional government, which the CIA informed the contras would be immediately recognized by the United States Government. But this contra offensive was also repulsed by Nicaraguan government forces. At the beginning of 1984, the contras made a major effort to prevent the harvesting of the coffee crop, one of Nicaragua's most important export products. Coffee plantations and state farms where coffee is grown were attacked, vehicles were destroyed, and coffee farmers were killed. Commander Carrion testified that the ability of the contras to carry out military operations was completely dependent upon United States funding, training and logistical support. Carrion stated that the U.S. Government supplied the contras with uniforms, weapons, communications equipment, intelligence, training, and coordination in using this material aid. In September 1983, CIA operatives blew up Nicaragua's only oil pipeline, which was used to transport oil from off-loading facilities to storage tanks on shore. The United States was also directly involved in a large-scale sabotage operation directed against Nicaragua's oil storage facilities. This last attack was carried out by CIA contract employees termed by that organization "Unilaterally Controlled Latin Assets" (UCLAs). CIA personnel were also directly involved in a helicopter attack on a Nicaraguan army training camp. One of the helicopters was shot down by Nicaraguan ground fire, resulting in the death of two U.S. citizens. Commander Carrion testified that the United States was involved in the mining of Nicaragua's ports between February and April 1984. The mining operation was carried out by CIA ships directing the operation from international waters, while the actual mining was carried out by CIA employees on board speedboats operating inshore.
After the mine-laying was completed the speedboats returned to the mother vessel. Carrion stated that 3,886 people had been killed and 4,731 wounded in the four years since the contras began their attacks. Carrion estimated property damage at $375 million. Commander Carrion stated that if the United States stopped aid, support and training, this would result in the end of the contras' military activities within three months. Asked why he was so sure of this, Commander Carrion answered, "Well, because the contras are an artificial force, artificially set up by the United States, that exists only because it counts on United States direction, on United States training, on United States assistance, on United States weapons, on United States everything...Without that kind of support and direction the contras would simply disband, disorganize, and thus lose their military capacity in a very short time". Second witness: Dr. David MacMichael David MacMichael was an expert on counter-insurgency, guerrilla warfare, and Latin American affairs; he was also a witness because he had been closely involved with U.S. intelligence activities as a contract employee from March 1981 to April 1983. MacMichael worked for Stanford Research Institute, which was contracted by the U.S. Department of Defense. After this he worked two years for the CIA as a "senior estimates officer", preparing the National Intelligence Estimate. Dr. MacMichael's responsibility was centered upon Central America. He had top-secret clearance. He was qualified and authorized to have access to all relevant U.S. intelligence concerning Central America, including intelligence relating to alleged Nicaraguan support for, and arms shipments to, the anti-Government insurgents in El Salvador. He took part in high-level meetings of the Latin American affairs office of the CIA.
These included a fall 1981 meeting at which the initial plan was submitted to set up a 1,500-man covert force on the Nicaraguan border to interdict the shipment of arms from Nicaragua to the El Salvador insurgents. This plan was approved by President Reagan. "The overall purpose (for the creation of the contras) was to weaken, even destabilize the Nicaraguan Government and thus reduce the menace it allegedly posed to the United States' interests in Central America..." Contra paramilitary actions would "hopefully provoke cross-border attacks by Nicaraguan forces and thus serve to demonstrate Nicaragua's aggressive nature and possibly call into play the Organization of American States' provisions (regarding collective self-defense). It was hoped that the Nicaraguan Government would clamp down on civil liberties within Nicaragua itself, arresting its opposition, so demonstrating its allegedly inherent totalitarian nature and thus increase domestic dissent within the country, and further that there would be reaction against United States citizens, particularly against United States diplomatic personnel within Nicaragua and thus to demonstrate the hostility of Nicaragua towards the United States". In response to repeated questions as to whether there was any substantial evidence of the supply of weapons to the guerrilla movement in El Salvador, either directly by the Nicaraguan Government itself, or with the knowledge, approval or authorization of the Nicaraguan Government by non-official Nicaraguan sources, or by third-country nationals inside or outside Nicaragua using Nicaraguan territory for this purpose, Dr. MacMichael answered that there was no such evidence. In the opinion of the witness it would not have been possible for Nicaragua to send arms to the insurgents in El Salvador in significant amounts (as alleged by the U.S. Government) and over a prolonged period without this being detected by the U.S. 
intelligence network in the area. Counsel for Nicaragua asked the witness several times whether any detection of arms shipments by or through Nicaragua had taken place during the period he was employed by the CIA. MacMichael answered repeatedly that there was no such evidence. He also stated that after his employment had terminated, nothing had occurred that would cause him to change his opinion. He termed the evidence that had been publicly disclosed by the U.S. Government concerning Nicaraguan arms deliveries to the El Salvadoran insurgents both "scanty" and "unreliable". The witness did, however, state that based on evidence gathered immediately prior to his employment with the CIA (evidence he had actually seen), there was substantial evidence that arms shipments were reaching El Salvador from Nicaragua, with the probable involvement and complicity of the Nicaraguan Government, from late 1980 up until the spring of 1981. But this evidence, which most importantly had included actual seizures of weapons that could be traced to Nicaragua, as well as documentary evidence and other sources, had completely ceased by early 1981. Since then, no evidence linking Nicaragua to shipments of arms in any substantial quantities had resumed coming in. Third witness: Professor Michael Glennon Professor Glennon testified about a fact-finding mission he had conducted in Nicaragua to investigate alleged human rights violations committed by the contra guerrillas, sponsored by the International Human Rights Law Group and the Washington Office on Latin America. Glennon conducted the investigation with Mr. Donald T. Fox, a New York attorney and a member of the International Commission of Jurists. They traveled to Nicaragua, visiting the northern region where the majority of contra military operations took place. The two lawyers interviewed around 36 northern frontier residents who had direct experience with the contras. 
They also spoke with the U.S. Ambassador to Nicaragua, and with senior officials of the U.S. Department of State in Washington after returning to the United States. No hearsay evidence was accepted. Professor Glennon stated that those interviewed were closely questioned, and their evidence was carefully cross-checked with available documentary evidence. Doubtful "testimonies" were rejected, and the results were published in April 1985. The conclusions of the report were summarized by Glennon in Court: We found that there is substantial credible evidence that the contras were engaged with some frequency in acts of terroristic violence directed at Nicaraguan civilians. These are individuals who have no connection with the war effort, persons with no economic, political or military significance. These are individuals who are not caught in the cross-fire between Government and contra forces, but rather individuals who are deliberately targeted by the contras for acts of terror. "Terror" was used in the same sense as in recently enacted United States law, i.e. "an activity that involves a violent act or an act dangerous to human life that is a violation of the criminal law, and appears to be intended to intimidate or coerce a civilian population, to influence the policy of a government by intimidation or coercion, or to affect the conduct of a government by assassination or kidnapping." In talks with U.S. State Department officials, both at the U.S. Embassy in Managua and in Washington, Professor Glennon had inquired whether the U.S. Government had ever investigated human rights abuses by the contras. Professor Glennon testified that no such investigation had ever been conducted, because, in the words of a ranking State Department official whom he could not name, the U.S. Government maintained a policy of "intentional ignorance" on the matter. 
State Department officials in Washington had admitted to Glennon that "it was clear that the level of atrocities was enormous". Those words, "enormous" and "atrocities", were the ranking State Department official's own. Fourth witness: Father Jean Loison Father Jean Loison was a French priest who worked as a nurse in a hospital in the northern frontier region close to Honduras. Asked whether the contras engaged in acts of violence directed against the civilian population, Father Loison answered: Yes, I could give you several examples. Near Quilali, at about 30 kilometers east of Quilali, there was a little village called El Coco. The contras arrived, they devastated it, they destroyed and burned everything. They arrived in front of a little house and turned their machine-gun fire on it, without bothering to check if there were any people inside. Two children, who had taken fright and hidden under a bed, were hit. I could say the same thing of a man and woman who were hit; this was in the little co-operative of Sacadias Olivas. It was just the same. They too had taken fright and got into bed. Unlike El Coco, the contras had just been on the attack, they had encountered resistance and were now in flight. During their flight they went into a house, and seeing that there were people there, they threw a grenade. The man and the woman were killed and one of the children was injured. About contra kidnappings: I would say that kidnappings are one of the reasons why some of the peasants have formed themselves into groups. Here (indicates a point on the map) is Quilali. Between Quilali and Uilili, in this region to the north, there are hardly any peasants left of any age to bear arms, because they have all been carried off. Father Loison described many examples of violence, mostly indiscriminate, directed at the civilian population in the region where he resided. 
The picture that emerges from his testimony is that the contras engaged in brutal violations of minimum standards of humanity. He described murders of unarmed civilians, including women and children; rape followed in many instances by torture or murder; and indiscriminate terror designed to coerce the civilian population. His testimony was similar to various reports, including those of the International Human Rights Law Group, Amnesty International, and others. Fifth witness: William Hüper William Hüper was Nicaragua's Minister of Finance. He testified about Nicaragua's economic damage, including the loss of fuel as a result of the attack on the oil storage facilities at Corinto, the damage to Nicaragua's commerce as a result of the mining of its ports, and other economic damage. UN voting After five vetoes in the Security Council between 1982 and 1985 of resolutions concerning the situation in Nicaragua, the United States made one final veto on 28 October 1986 (France, Thailand, and the United Kingdom abstaining) of a resolution calling for full and immediate compliance with the judgment. Nicaragua brought the matter to the U.N. Security Council, where the United States vetoed a resolution (11 to 1, 3 abstentions) calling on all states to observe international law. Nicaragua also turned to the General Assembly, which passed a resolution 94 to 3 calling for compliance with the World Court ruling. Two states, Israel and El Salvador, joined the United States in opposition. At that time, El Salvador was receiving substantial funding and military advisement from the U.S., which was aiming to crush a Sandinista-like revolutionary movement by the FMLN. At the same session, Nicaragua called upon the U.N. to send an independent fact-finding mission to the border to secure international monitoring of the borders after a conflict there; the proposal was rejected by Honduras with U.S. backing. A year later, on November 12, the General Assembly again called for full and immediate compliance with the decision.
The Court ruled on November 26, 1984, by 11 votes to one, that it had jurisdiction in the case on the basis of either Article 36 of the Statute of the International Court of Justice (i.e. compulsory jurisdiction) or the 1956 Treaty of Friendship, Commerce and Navigation between the United States and Nicaragua. The Charter provides that, in case of doubt, it is for the Court itself to decide whether it has jurisdiction, and that each member of the United Nations undertakes to comply with the decision of the Court. The Court also ruled unanimously that the present case was admissible. The United States then announced that it had "decided not to participate in further proceedings in this case." About a year after the Court's jurisdictional decision, the United States took the further, radical step of withdrawing its consent to the Court's compulsory jurisdiction, ending its previous 40-year legal commitment to binding international adjudication. The Declaration of acceptance of the general compulsory jurisdiction of the International Court of Justice terminated after a six-month notice of termination delivered by the Secretary of State to the United Nations on October 7, 1985. Although the Court called on the United States to "cease and to refrain" from the unlawful use of force against Nicaragua and stated that the US was "in breach of its obligation under customary international law not to use force against another state" and ordered it to pay reparations, the United States refused to comply. As a permanent member of the Security Council, the U.S. has been able to block any enforcement mechanism attempted by Nicaragua. On November 3, 1986, the United Nations General Assembly passed, by a vote of 94–3 (El Salvador, Israel and the US voted against), a non-binding resolution urging the US to comply. 
The ruling On June 27, 1986, the Court made the following ruling: The Court (1) Decides that in adjudicating the dispute brought before it by the Application filed by the Republic of Nicaragua on 9 April 1984, the Court is required to apply the "multilateral treaty reservation" contained in proviso (c) to the declaration of acceptance of jurisdiction made under Article 36, paragraph 2, of the Statute of the Court by the Government of the United States of America deposited on 26 August 1946; (2) Rejects the justification of collective self-defense maintained by the United States of America in connection with the military and paramilitary activities in and against Nicaragua the subject of this case; (3) Decides that the United States of America, by training, arming, equipping, financing and supplying the contra forces or otherwise encouraging, supporting and aiding military and paramilitary activities in and against Nicaragua, has acted, against the Republic of Nicaragua, in breach of its obligation under customary international law not to intervene in the affairs of another State; (4) Decides that the United States of America, by certain attacks on Nicaraguan territory in 1983–1984, namely attacks on Puerto Sandino on 13 September and 14 October 1983; an attack on Corinto on 10 October 1983; an attack on Potosi Naval Base on 4/5 January 1984; an attack on San Juan del Sur on 7 March 1984; attacks on patrol boats at Puerto Sandino on 28 and 30 March 1984; and an attack on San Juan del Norte on 9 April 1984; and further by those acts of intervention referred to in subparagraph (3) hereof which involve the use of force, has acted, against the Republic of Nicaragua, in breach of its obligation under customary international law not to use force against another State; (5) Decides that the United States of America, by directing or authorizing overflights of Nicaraguan territory, and by the acts imputable to the United States referred to in subparagraph (4) hereof, has acted, against the Republic of Nicaragua, in breach of its obligation under customary international law not to violate the sovereignty of another State; (6) Decides that, by laying mines in the internal or territorial waters of the Republic of Nicaragua during the first months of 1984, the United States of America has acted, against the Republic of Nicaragua, in breach of its obligations under customary international law not to use force against another State, not to intervene in its affairs, not to violate its sovereignty and not to interrupt peaceful maritime commerce; (7) Decides that, by the acts referred to in subparagraph (6) hereof, the United States of America has acted, against the Republic of Nicaragua, in breach of its obligations under Article XIX of the Treaty of Friendship, Commerce and Navigation between the United States of America and the Republic of Nicaragua signed at Managua on 21 January 1956; (8) Decides that the United States of America, by failing to make known the existence and location of the mines laid by it, referred to in subparagraph (6) hereof, has acted in breach of its obligations under customary international law in this respect; (9) Finds that the United States of America, by producing in 1983 a manual entitled 'Operaciones sicológicas en guerra de guerrillas', and disseminating it to contra forces, has encouraged the commission by them of acts contrary to general principles of humanitarian law; but does not find a basis for concluding that any such acts which may have been committed are imputable to the United States of America as acts of the United States of America; (10) Decides that the United States of America, by the attacks on Nicaraguan territory referred to in subparagraph (4) hereof, and by declaring a general embargo on trade with Nicaragua on 1 May 1985, has committed acts calculated to deprive of its object and purpose the Treaty of Friendship, Commerce and Navigation between the Parties signed at Managua on 21 January 1956; (11) Decides that the United States of America, by the attacks on Nicaraguan territory referred to in subparagraph (4) hereof, and by declaring a general embargo on trade with Nicaragua on 1 May 1985, has acted in breach of its obligations under Article XIX of the Treaty of Friendship, Commerce and Navigation between the Parties signed at Managua on 21 January 1956; (12) Decides that the United States of America is under a duty immediately to cease and to refrain from all such acts as may constitute breaches of the foregoing legal obligations; (13) Decides that the United States of America is under an obligation to make reparation to the Republic of Nicaragua for all injury caused to Nicaragua by the breaches of obligations under customary international law enumerated above; (14) Decides that the United States of America is under an obligation to make reparation to the Republic of Nicaragua for all injury caused to Nicaragua by the breaches of the Treaty of Friendship, Commerce and Navigation between the Parties signed at Managua on 21 January 1956; (15) Decides that the form and amount of such reparation, failing agreement between the Parties, will be settled by the Court, and reserves for this purpose the subsequent procedure in the case; (16) Recalls to both Parties their obligation to seek a solution to their disputes by peaceful means in accordance with international law. Legal clarification and importance The ruling in many ways clarified the issues surrounding the prohibition of the use of force and the right of self-defence. Arming and training the contras was found to be in breach of the principles of non-intervention and the prohibition of the use of force, as was laying mines in Nicaraguan territorial waters. Nicaragua's dealings with the armed opposition in El Salvador, although they might be considered a breach of the principle of non-intervention and the prohibition of the use of force, did not constitute "an armed attack", which is the wording in Article 51 justifying the right of self-defence. 
The Court also considered the United States' claim to be acting in collective self-defence of El Salvador and found that the conditions for this were not met, as El Salvador never requested the assistance of the United States on the grounds of self-defence. With regard to laying mines, "...the laying of mines in the waters of another State without any warning or notification is not only an unlawful act but also a breach of the principles of humanitarian law underlying the Hague Convention No. VIII of 1907." How the judges voted Votes of Judges – Nicaragua v. United States Dissent Judge Schwebel's dissent was twice as long as the actual judgment. Judge Schwebel argued that the Sandinista government came to power with support of foreign intervention similar to what it was now complaining about. He argued that the Sandinista government achieved international recognition and received large amounts of foreign aid in exchange for commitments it subsequently violated. He cited evidence that the Sandinista government had indeed supported the rebels in El Salvador and noted that Nicaragua's own CIA witness contradicted its assertions that it had never at any point supported the rebels in El Salvador. The CIA witness said that there was no evidence of weapon shipments since early 1981, but Schwebel argued that the witness could not credibly explain why opponents of Contra aid such as Congressman Boland, who also saw the evidence, believed that weapon shipments were ongoing. He further argued that Daniel Ortega publicly admitted such shipments in statements in 1985 and 1986. Furthermore, there was no dispute that the leadership of the rebels operated in Nicaragua from time to time. He stated that in August 1981 the U.S. offered to resume aid to Nicaragua and to not support regime change in exchange for Nicaraguan commitments to not support the rebels in El Salvador. These proposals were rejected by the Sandinistas, and Judge Schwebel argued that the U.S. 
was entitled to take action in collective self-defense with El Salvador by authorizing Contra aid in December 1981. He stated that further U.S. proposals to resolve the issue made in early 1982 were also ignored by the Sandinistas. The Sandinista government in 1983 began advancing proposals in which it would undertake not to support the rebels, but Schwebel noted that these were coupled with demands that the U.S. cease supporting the lawful government of El Salvador. The judge noted that since early 1985 the U.S. had increasingly made regime change a primary objective but argued this was not inconsistent with self-defense because it was reasonable to believe that Nicaragua would not maintain any commitments unless Sandinista power was diluted. The judge said that both sides of the wars in Nicaragua and El Salvador had committed atrocities. He said the U.S. mining of Nicaraguan harbors was unlawful in regard to third parties, but not Nicaragua. Certain witnesses against the US First witness: Commander Luis Carrión The first witness called by Nicaragua was Nicaragua's first Vice Minister of the Interior, Commander Luis Carrion. Commander Carrion had overall responsibility for state security and was in charge of all government operations in the "principal war zone". He was responsible for monitoring United States involvement in military and paramilitary activities against Nicaragua, directing Nicaragua's military and intelligence efforts against the contra guerrillas. Commander Carrion began by explaining the condition of the contras prior to United States' aid in December 1981. Commander Carrion stated that the contras consisted of insignificant bands of poorly armed and poorly organized members of Somoza's National Guard, who carried out uncoordinated border raids and rustled cattle (presumably for food). In December 1981, the U.S. 
Congress authorized an initial appropriation of $19 million to finance paramilitary operations in Nicaragua and elsewhere in Central America. Because of this aid, Commander Carrion stated, the contras began to become centralized and received both training and weapons from the CIA. During 1982 the contra guerrillas engaged the Sandinista armed forces in a series of hit-and-run border raids and carried out a number of sabotage operations, including: the destruction
idea that such a language will be relatively easier to use passively, in many cases without prior study, by speakers of one or more languages in the group. The term is most commonly applied to planned languages based predominantly on the Romance languages, the best known of which are Interlingue (previously known as Occidental) and Interlingua. Both were designed
there are also languages intended for speakers of a particular language family (zonal constructed languages), including Pan-Germanic, Pan-Slavic and even Pan-Celtic naturalistic planned languages. Since such a language often incorporates shared idiosyncrasies from its source languages, it generally seems more difficult to learn for active use than a schematic planned language, though
"structure". Thus no single universal unitary evolution U can clone an arbitrary quantum state, as the no-cloning theorem states: such a U would have to depend on the (initial) state of the qubit being transformed and thus would not be universal. Generalization In the statement of the theorem, two assumptions were made: the state to be copied is a pure state, and the proposed copier acts via unitary time evolution. These assumptions cause no loss of generality. If the state to be copied is a mixed state, it can be purified. Alternatively, a different proof can be given that works directly with mixed states; in this case, the theorem is often known as the no-broadcast theorem. Similarly, an arbitrary quantum operation can be implemented by introducing an ancilla and performing a suitable unitary evolution. Thus the no-cloning theorem holds in full generality. Consequences The no-cloning theorem prevents the use of certain classical error correction techniques on quantum states. For example, backup copies of a state in the middle of a quantum computation cannot be created and used for correcting subsequent errors. Error correction is vital for practical quantum computing, and for some time it was unclear whether it was possible. In 1995, Shor and Steane showed that it is possible by independently devising the first quantum error correcting codes, which circumvent the no-cloning theorem. Similarly, cloning would violate the no-teleportation theorem, which says that it is impossible to convert a quantum state into a sequence of classical bits (even an infinite sequence of bits), copy those bits to some new location, and recreate a copy of the original quantum state in the new location. This should not be confused with entanglement-assisted teleportation, which does allow a quantum state to be destroyed in one location and an exact copy to be recreated in another location. 
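The linearity obstruction described above can be checked with a toy calculation. The sketch below (Python with NumPy assumed available; the try_clone helper is illustrative, not part of any quantum library) builds the one candidate cloner that does copy both computational basis states, a CNOT writing into a |0⟩ ancilla, and shows that it fails on a superposition, producing an entangled Bell state rather than two independent copies:

```python
import numpy as np

# CNOT with the first qubit as control: it faithfully "clones" the
# computational basis states |0> and |1> into a |0> ancilla.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def try_clone(psi):
    """Apply the CNOT to psi (x) |0> and return the two-qubit output."""
    ancilla = np.array([1, 0], dtype=complex)  # |0>
    return CNOT @ np.kron(psi, ancilla)

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

# Basis states are copied faithfully...
assert np.allclose(try_clone(zero), np.kron(zero, zero))
assert np.allclose(try_clone(one), np.kron(one, one))

# ...but by linearity the superposition |+> maps to the entangled
# Bell state (|00> + |11>)/sqrt(2), not to the product |+> (x) |+>.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
assert np.allclose(try_clone(plus), bell)
assert not np.allclose(try_clone(plus), np.kron(plus, plus))
```

The same failure occurs for any fixed unitary: correctly copying two non-orthogonal input states already forces the contradiction used in the theorem's proof.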
The no-cloning theorem is implied by the no-communication theorem, which states that quantum entanglement cannot be used to transmit classical information (whether superluminally or slower): cloning, together with entanglement, would allow such communication to occur. To see this, consider the EPR thought experiment, and suppose quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: if Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either |z+⟩ or |z−⟩. To transmit "1", Alice does nothing to her qubit. Bob creates many copies of his electron's state and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes |z+⟩ or |z−⟩ with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality). Quantum states cannot be discriminated perfectly. 
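The closing remark, that quantum states cannot be discriminated perfectly, can be quantified by the Helstrom bound on the success probability of a single measurement distinguishing two equally likely states. A minimal numerical sketch (Python with NumPy assumed available; the helstrom helper is illustrative, not a standard API):

```python
import numpy as np

def helstrom(psi0, psi1):
    """Best single-shot success probability for distinguishing two
    equally likely pure states: 1/2 + (1/4) * ||rho0 - rho1||_1."""
    rho0 = np.outer(psi0, psi0.conj())
    rho1 = np.outer(psi1, psi1.conj())
    # trace norm = sum of absolute eigenvalues (the difference is Hermitian)
    eigs = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 + 0.25 * np.abs(eigs).sum()

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

# Orthogonal states can be told apart with certainty...
assert abs(helstrom(zero, one) - 1.0) < 1e-9

# ...but non-orthogonal ones cannot: for |0> vs |+> the optimum is
# (1 + sqrt(1/2)) / 2, about 0.854, strictly less than 1.
assert abs(helstrom(zero, plus) - 0.5 * (1 + np.sqrt(0.5))) < 1e-9
```

If perfect cloning were possible, Bob could defeat this bound by cloning an unknown state many times and measuring each copy, which is exactly the loophole the no-cloning theorem closes.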
The no-cloning theorem prevents an interpretation of the holographic principle for black holes as meaning that there are two copies of information, one lying at the event horizon and the other in the black hole interior. This leads to more radical interpretations, such as black hole complementarity. The no-cloning theorem applies to all dagger compact categories: there is no universal cloning morphism for any non-trivial category of this kind. Although the theorem is inherent in the definition of this category, it is not trivial to see that this is so; the insight is important, as this category includes things that are not finite-dimensional Hilbert spaces, including the category of sets and relations and the category of cobordisms. Imperfect cloning Even though it is impossible to make perfect
1961, and vice chancellor for academic affairs for the University of Texas System in 1963. Hackerman left the University of Texas in 1970 for Rice, where he retired 15 years later. He was named professor emeritus of chemistry at the University of Texas in 1985 and taught classes until the end of his life. He was a member of the National Academy of Sciences and the American Academy of Arts and Sciences. Among his many honors are the Olin Palladium Award of the Electrochemical Society, the Gold Medal of the American Institute of Chemists (1978), the Charles Lathrop Parsons Award, the Vannevar Bush Award and the National Medal of Science. He was awarded the Acheson Award by the Electrochemical Society in 1984. Hackerman served on advisory committees and boards of several technical societies and government agencies, including the National Science Board, the Texas Governor's Task Force on Higher Education and the Scientific Advisory Board of the Welch Foundation. He also served as editor of the Journal of the Electrochemical Society and as president of the Electrochemical Society. Family Hackerman's wife of 61 years, Gene Coulbourn, died in 2002; they had three daughters and one son. Legacy In 1982 The Electrochemical Society created the Norman Hackerman Young Author Award to honor the best paper published in the Journal of the Electrochemical Society for a topic in the field of electrochemical science and technology by a young author or authors. In 2000 the Welch Foundation created the Norman Hackerman Award in
corrosion. Biography Born in Baltimore, Maryland, he was the only son of Jacob Hackerman and Anna Raffel, immigrants from the Baltic regions of the Russian Empire that later became Estonia and Latvia, respectively. Hackerman earned his bachelor's degree in 1932 and his doctor's degree in chemistry in 1935 from Johns Hopkins University. He taught at Johns Hopkins, Loyola College in Baltimore and the Virginia Polytechnic Institute and State University in Blacksburg, Virginia, before working on the Manhattan Project in World War II. He joined the University of Texas in 1945 as an assistant professor of chemistry, became an associate professor in 1946, a full professor in 1950, a department chair in 1952, dean of research in 1960, vice president and provost in 1961, and vice chancellor for academic affairs for the University of Texas System in 1963.
prevailed upon by the British journal Nature to travel to Blondlot's laboratory in France to investigate further. Wood suggested that Rubens should go since he had been the most embarrassed when Kaiser Wilhelm II of Germany asked him to repeat the French experiments, and then after two weeks Rubens had to report his failure to do so. Rubens, however, felt it would look better if Wood went, since Blondlot had been most polite in answering his many questions. In the darkened room during Blondlot's demonstration, Wood surreptitiously removed an essential prism from the experimental apparatus, yet the experimenters still said that they observed N-rays. Wood also stealthily swapped a large file that was supposed to be giving off N-rays with an inert piece of wood, yet the N-rays were still "observed". His report on these investigations was published in Nature, and it suggested that the N-rays were a purely subjective phenomenon, with the scientists involved having recorded data that matched their expectations. There is reason to believe that Blondlot in particular was misled by his laboratory assistant, who confirmed all observations. By 1905, no one outside of Nancy believed in N-rays, but Blondlot himself is reported to have still been convinced of their existence in 1926. Martin Gardner, referencing Wood's biographer William Seabrook's account of the affair, attributed a subsequent decline in mental health and the eventual death of Blondlot to the resulting scandal, but there is evidence that this account is at least somewhat exaggerated. The term "N-ray" was added to dictionaries upon its announcement and was described as a real phenomenon until at least the 1940s. For instance, the 1946 Webster's Dictionary defined it as "An emanation or radiation from certain hot
to repeat the French experiments, and then after two weeks Rubens had to report his failure to do so. Rubens, however, felt it would look better if Wood went, since Blondlot had been most polite in answering his many questions. In the darkened room during Blondlot's demonstration, Wood surreptitiously removed an essential prism from the experimental apparatus, yet the experimenters still said that they observed N-rays. Wood also stealthily swapped a large file that was supposed to be giving off N-rays with an inert piece of wood, yet the N-rays were still "observed". His report on these investigations were published in Nature, and they suggested that the N-rays were a purely subjective phenomenon, with the scientists involved having recorded data that matched their expectations. There is reason to believe that Blondlot in particular was misled by his laboratory assistant, who confirmed all observations. By 1905, no one outside of Nancy believed in N-rays, but Blondlot himself is reported to have still been convinced of their existence in 1926. Martin Gardner, referencing Wood's biographer William Seabrook's account of the affair, attributed a subsequent decline in mental health and eventual death of Blondlot to the resulting scandal, but there is evidence that this is at least some exaggeration of the facts. The term "N-ray" was added to dictionaries upon its announcement and was described as a real phenomenon until at least the 1940s. For instance, the 1946 Webster's Dictionary defined it as "An emanation or radiation from certain hot bodies which increases the luminosity without increasing the temperature: as yet, not fully determined." Significance The incident is used as a cautionary tale among scientists on the dangers of error introduced by experimenter bias. N-rays were cited as an example of pathological science by Irving Langmuir. 
Nearly identical properties of an equally unknown radiation had been recorded about 50 years before in another country by Carl Reichenbach in his
Frunze Higher Naval School in 1926, Kuznetsov served on the cruiser , first as watch officer and then as First Lieutenant. In 1932, he graduated from the Naval College after studying operational tactics. Upon graduation, he was offered two options – a desk job with the general staff or a command post on a ship. Kuznetsov successfully applied for the post of executive officer on the cruiser . Within a year, the young officer earned his next promotion. In 1934, he returned to the Chervona Ukraina, this time as her commander. Under Kuznetsov, the ship became an outstanding example of discipline and organization, quickly drawing attention to her young captain. From 5 September 1936 to 15 August 1937, Kuznetsov served as the Soviet naval attaché and chief naval advisor to Republican Spain. During the early stages of the Spanish Civil War of 1936-1939 he developed a strong dislike of fascism. On returning home, on January 10, 1938, he was promoted to the rank of flag officer, 2nd rank, and given command of the Pacific Fleet. While in this position, he came face to face with Stalin's purge of the military. Kuznetsov himself was never implicated, but many of the officers under his command were. Kuznetsov resisted the purges at every step, and his intervention saved the lives of many Soviet officers. On 28 April 1939, Kuznetsov, still only thirty-four, was appointed the People's Commissar (Minister) of the Navy, a post he would hold throughout the Second World War until 1946. In 1939, despite Stalin's negative attitude to the Nikolaevsky Engineering Academy, Nikolay Gerasimovich Kuznetsov ordered the return of the Naval Engineering faculty from Moscow to Leningrad, and set up the Military Engineering-Technical University to educate engineers for the construction of naval bases. 
The Second World War Kuznetsov played a crucial role during the first hours of the war – at this pivotal moment, his resolve and blatant disregard for orders averted the destruction of the Soviet Navy. By June 21, 1941, Kuznetsov was convinced of the inevitability of war with Nazi Germany. On the same day Semyon Timoshenko and Georgy Zhukov issued a directive prohibiting Soviet commanders from responding to "German provocations". The Navy, however, constituted a distinct ministry (narkomat), and thus Kuznetsov held a position technically outside the direct chain of command. He used this fact in a very bold move. Shortly after midnight on the morning of June 22, Kuznetsov ordered all Soviet fleets to battle readiness. At 3:15 am that same morning, the Wehrmacht began Operation Barbarossa. The Soviet Navy was the only branch of the military in the highest state of combat readiness at the start of the initial German push. In the following two years, Kuznetsov's primary concern was the protection of the Caucasus from a German invasion. Throughout the war, the Black Sea remained the primary theater of operations for the Soviet Navy. During the war years Kuznetsov honed Soviet methods of amphibious assault. A notable subordinate in the Black Sea, in command of the Azov Flotilla, was S.G. Gorshkov, who would later succeed him as Commander-in-Chief of the Navy. In May 1944 he was given the rank of Admiral of the Fleet – a newly created rank initially equated to that of a four-star general. In the same year, Kuznetsov was given the title of Hero of the Soviet Union. On May 31, 1945, his rank was equated to that of Marshal of the Soviet Union, with a similar insignia. In August 1945, he took part in Operation August Storm in the Far East, providing naval support for the Commander-in-Chief of Soviet Forces in the Far East, Marshal Aleksandr Vasilevsky.
The first fall From 1946 to 1947 he was the Deputy Minister of the USSR Armed Forces and Commander-in-Chief of the Naval Forces. In 1947 he was removed from his post on Stalin's orders, and in 1948 he, as well as several other admirals, was put on trial by the Naval Tribunal. Kuznetsov was demoted to vice-admiral, while the other admirals received prison sentences.
to Genesis Microchip in April 2002. By November 2004, there were no Nuon-enabled DVD players shipping and no new Nuon software titles released or in development. Specification 32/128-bit 54 MHz or 108 MHz quad-core VM Labs Nuon MPE (Media Processing Element) hybrid stack processor, supporting 128-bit SIMD floating point and 32-bit integer operations; both share the same IEEE 754 floating-point register stack to store floating-point and integer instructions, similar to Intel's MMX technology, through context switching. Each core contains a 128-byte unified cache, with a 32-kilobyte shared cache (32-bit SRAM block) and a maximum of 2 GB of physical address space. Some reports suggested that a certain model sported a 333+ MHz clock frequency, but it was never widely released. MCS-251 microcontroller for background tasks 32 megabytes of 8-bit Fast Page DRAM at 33 MHz, 512 kilobytes of sound RAM and 24 kilobytes of programmable ROM 2x 3D Media GL MPE with 8 megabytes of 32-bit video RAM at 66 MHz 64–256 MB writable ROM and optional hard drive (up to 137 GB) Optical drive support: DVD or CD-R Peripherals and accessories Peripherals for Nuon-enhanced DVD players included the following: Logitech Gamepad Pro-elite controller AirPlay wireless controller Stealth controller Warrior Digital-D pad controller extension cable port replicator to move the Nuon ports anywhere desired Released movies Only four DVD releases utilized Nuon technology. All of them were released by 20th Century Fox Home Entertainment: The Adventures of Buckaroo Banzai Across the 8th Dimension Bedazzled (2000 remake) Dr. Dolittle 2 Planet of the Apes (2001 remake, Bug Free Version UPC - 2454302898) Released games Only eight games were officially released for the Nuon: Tempest 3000 Freefall 3050 A.D. Merlin Racing (later had a sequel entitled Miracle Space Race for the PlayStation) Space Invaders X.L. Iron Soldier 3 (later recalled due to incompatibility with some players) Ballistic (only available with Samsung players) The Next Tetris (only available with Toshiba players) Crayon Shin-chan 3 (Korean-only release) Collections and samplers Interactive Sampler (three different versions) Nuon Games + Demos (collection from Nuon-Dome) Nuon-Dome PhillyClassic 5 Demo Disc (giveaway collection) Homebrew development During late 2001, VM Labs released a homebrew SDK which allowed people to program apps and games for their Nuon system. Only the Samsung DVD-N501/DVDN504/DVDN505 and RCA DRC300N/DRC480N can load homebrew games. Some homebrew titles have been created for or ported to Nuon. They are not commercially available and require the user to burn the material to a Nuon-compatible CD-R.
References External links NUON's homepage (archived August 2002) Nuon—Dome Page Nuon Alumni Page Entry at Video Game Console Library Entry At Giant Bomb "A Fan’s History – The NUON" blog post at arcryphongames.wordpress.com (Dated February 22, 2015. It has a copious amount of embedded video links and media shots.) DVD Home video game consoles Sixth-generation video game consoles 2000 introductions Discontinued video game consoles
Tennessee. Nashville may also refer to: Places Nashville, Arkansas Nashville, California Nashville, Georgia Nashville, Illinois Nashville, Indiana Nashville, Hancock County, Indiana Nashville, Iowa Nashville, Kansas Nashville Plantation, Maine Nashville, Michigan Nashville Center, Minnesota Nashville Township, Minnesota Nashville, Missouri Nashville, Nebraska Nashville Historic District (Nashua, New Hampshire) Nashville, New York Nashville, North Carolina Nashville, Ohio Nashville, Ontario Nashville, Oregon Nashville, Texas, also known as Nashville-on-the-Brazos Nashville, Wisconsin, a town Nashville (community), Wisconsin, an unincorporated community Music Nashville (Bill Frisell album), 1997 Nashville (Andy Williams album), 1991 Nashville (Josh Rouse album), 2005 "Nashville", a song recorded by Stonewall Jackson and others Nashville!, a commercial music channel on XM Satellite Radio controlled by Clear Channel Communications An E9 tuning sometimes used for ten-string pedal steel guitar Nashville sound, a subgenre of country music Nashville Symphony, an orchestra Nashville tuning (high strung), a tuning for a six-string guitar Movies and television Nashville (film), a 1975 American musical film directed by Robert Altman Nashville (2007 TV series), a Fox reality series Nashville (2012 TV series), an ABC and CMT drama series "Nashville" (Master of None), a 2015 TV episode Other uses Nashville
Adapted from the American Indians, the clambake is a traditional meal in New England in which clams, lobsters and corn are cooked over a firepit. Modern versions of the dish may include mussels, fish, crabs and non-seafood ingredients like chicken, sausage, potatoes and other root vegetables. Seasonings Many herbs were historically uncommon, particularly Mediterranean herbs, which are not hardy in much of New England away from the coast. As a result, most savory New England dishes do not have much strong seasoning aside from salt and ground black pepper, nor are there many particularly spicy staple items. Dishes meant as desserts, by contrast, often contain ingredients such as nutmeg, cinnamon, allspice, cloves, and ground ginger, a legacy of trade with the Caribbean region beginning in the 17th century and lasting well into the 19th. Pizza Much of the pizza in New England is Greek pizza, owing to the strong presence of Greek immigrants and Greek Americans in the food-service industry in New England. Greek pizza (as understood in New England) is typified by its chewy, bready crust similar to focaccia, which is baked in a shallow, round metal pan liberally coated with olive oil. Greek-style pizzerias in New England are often found under the name House of Pizza. Italians emigrated to New England beginning a little over a century ago, and Southern New England pizza tends to be more Italian-influenced. World-famous restaurants such as Pepe's Pizza in New Haven, CT serve a thin, coal-fired, hand-tossed style of pie. New Haven-style pizza is typified by a slightly burnt, crunchy exterior crust and a soft, slightly chewy interior. Southern New England pizza (or apizza) is closely related to Neapolitan-style pizza. List of foods common to New England cuisine Regional specialties Connecticut Irish-American influences are common in the interior portions of the state, including the Hartford area.
During the 18th century the Hartford election cake was a spicy, boozy yeast-leavened cake based on a traditional English holiday cake. During the colonial era, elections were celebrated with drink and a huge celebration cake large enough to feed the entire community, and the recipe as given by Amelia Simmons in 1796 called for butter, sugar, raisins, eggs, wine and spices in enormous quantities. Hasty pudding is sometimes found in rural communities, particularly around Thanksgiving. Italian-inspired cuisine is dominant in the New Haven area, which is known for charred thin-crust New Haven-style pizza baked in coal-fired ovens. The well-known white clam pie is made with fresh clams, olive oil, fresh garlic, oregano and grated Romano cheese. Some pizza places also offer subs on Italian bread ("grinders") and standard Italian fare like eggplant rollatini, manicotti, baked ziti and chicken parmesan. Well-known pizzerias include Pepe's Pizza, Sally's Apizza and Modern Apizza. The cuisine of Southeastern Connecticut is heavily based on the local fishing industry. Typical New England seafood dishes are available at local restaurants like Abbot's "lobster in the rough". Lobster rolls, crab cakes, oysters, clam chowder, steamer clams and mussels are served with sides like potato chips, remoulade sauce and coleslaw. Shad is the state fish and is cooked on planks (usually hickory, oak, or cedar) by the fire in an event called a "shad bake"; deboning the fish requires some skill with a boning knife. Louis' Lunch began as a lunch wagon started by Danish immigrant Louis Lassen in 1895. Their burgers are still cooked in the original antique cast-iron broiler. A local specialty of Meriden, Connecticut, steamed cheeseburgers started as simple steamed-cheese-on-a-roll sandwiches sold off horse-drawn food carts in the 1900s.
Some believe the hamburger originated in New Haven at Louis', and like the butter burger and the deep-fried hamburger, the steamed version may be a remnant of an earlier time before the broiled hamburger on a bun became the standard form. At the UConn Dairy Bar, ice cream is made with milk from local creameries using a century-old recipe to produce 24 different flavors. Ferris Acres Creamery is a 150-year-old dairy farm offering 50 flavors of ice cream. The most popular is "Cow Trax", a base of vanilla with peanut butter swirls and chocolate chips. Maine Maine is known for its access to fresh, local foods and many farms. Northern Maine produces potato crops second only to Idaho's in the United States. Fiddlehead ferns were part of the Native American cuisine and are still prized in Maine, where they are gathered in springtime. Wild blueberries are a common ingredient or garnish, and blueberry pie is the official state dessert (when made with wild Maine blueberries). Buckwheat pancakes called ployes are popular in Maine. Much like grits or potatoes, the ploye was originally a simple carbohydrate filler food for the local population. It was very cheap, easy to make, and, with local toppings such as maple syrup or cretons, could vary in taste. This staple is often eaten with baked beans. Over time, however, it simply became a traditional dish. Ployes are an Acadian pancake-type mix of buckwheat flour, wheat flour, baking powder and water, extremely popular in the Madawaska region in New Brunswick and Maine. The whoopie pie, which is also a staple of Philadelphia and Pennsylvania Dutch cuisine, is the official state treat. Maine is the place of origin for the needham, a dessert bar made from chocolate, coconut, and potato. Wax-wrapped salt water taffy is a popular item sold in tourist areas, although it originally comes from New Jersey.
The city of Portland, Maine, is known for its numerous nationally renowned restaurants; it was ranked as Bon Appétit magazine's "America's Foodiest Small Town" in 2009 and its "Restaurant City of the Year" in 2018. The city has the Portland Farmers Market, founded in 1768, and ranks as a top city for vegans and vegetarians. Maine is known for its lobster. Relatively inexpensive lobster rolls (lobster meat mixed with mayonnaise and other ingredients, served in a grilled hot dog roll) are often available in the summer, particularly on the coast. The Francophone part of northern Maine in the St. John Valley shows many Acadian influences in its cuisine. A popular dish among all Acadians in this region is tourtière, or meat pie. These are especially popular around Christmastime. The Italian sandwich is popular in Portland and southern Maine. Portland restaurant Amato's claims to have invented the Italian sandwich in 1902—specifically, a submarine sandwich made with ham, cheese, tomato, raw peppers, and pickles, served with or without oil, salt, and pepper. Moxie was America's first mass-produced soft drink and is the official state soft drink. It is known for its strong aftertaste and is found throughout New England. Massachusetts Coastal Massachusetts is known for its clams, haddock, and cranberries, and previously cod. Massachusetts had immigrant influences similar to those of the other coastal regions, though historically strong Eastern European populations instilled kielbasa and pierogi as common dishes. Named after the town of Newton, Fig Newtons were first made in 1891 using a machine invented by James Mitchell to fill cookie dough with fig jam. The small round Necco Wafers, made with the first American candy machine, similarly originated in Cambridge. Graham bread was first made in 19th-century Massachusetts by Sylvester Graham. Toll House cookies, the official state cookie of Massachusetts, were created in 1930 at the Toll House Inn, located in Whitman.
Boston is known for baked beans (hence the nickname "Beantown"), bulkie rolls, and various pastries. Boston cream pie is not a pie but a cake with custard filling. The origins are mysterious, but it is likely that antecedent cakes were made with either a sponge cake or pound cake. Parker's Restaurant, located inside the Parker House Hotel, was the premier dining establishment in Boston in the 19th century and remains a fine-dining establishment in Boston's Government Center area. The a-la-carte menu from 1865 included a range of local seafood offerings like oysters, fried clams, mackerel, shad, salmon in anchovy sauce, cod in oyster sauce, and soft-shell crab. Other meat dishes included chicken fricassee, potted pigeons, corned beef and baked beans with pork. Sides included corn, rice, macaroni, potatoes, asparagus, green peas, radishes and fried bananas. Sweet pastry and puddings were also served, such as Indian pudding, custard, apple pie, rhubarb pie, Washington pie, Charlotte Russe, and blancmange. The North Shore area is locally known for its roast beef sandwich shops, typically serving sandwiches of thin-sliced roast beef on a hamburger bun. These may be served with condiments such as lettuce, tomato, onion, cheese, and sauces such as mayo and barbecue. Most pizza and roast beef sandwich shops also serve "steak tips" (marinated cubes of sirloin), a common menu item at pizza establishments and backyard cookouts. Marshmallow Fluff was invented in Somerville, Massachusetts, and manufactured in Lynn, Massachusetts, throughout the 20th century. Fluffernutter sandwiches, combining peanut butter with marshmallow fluff, are popular. The South Shore area maintains a following for bar pizza, with many popular restaurants serving these crisp, thin, often heavily topped creations.
Common plant foods in Massachusetts are similar to those of interior northern New England, because of the landlocked, hilly terrain, including potatoes, maple syrup, and wild blueberries. Dairy production is also prominent in this central and western area. New Hampshire Southern New Hampshire cuisine is similar to that of the Boston area, featuring fish, shellfish, and local apples. As with Maine and Vermont, French-Canadian dishes are popular, including tourtière, which is traditionally served on Christmas Eve, and poutine. Corn chowder is also common; it is similar to clam chowder but with corn and bacon replacing the clams. Portsmouth is known for its orange cake. Rhode Island Rhode Island is known for johnnycakes, doughboys, and clam cakes. Johnnycakes, variously and contentiously known as jonnycakes, journeycakes and Shawnee cakes, can vary in thickness and preparation, and disagreements over whether they should be made with milk or water persist. East of Narragansett Bay, johnnycakes are made with cold milk and a little butter, but around South County the batter is sweetened and made with scalded cornmeal. One attempt by the Rhode Island Legislature to settle on an "authentic" recipe ended in a fistfight. They were traditionally served as a flatbread alongside chipped beef or baked beans, but in modern times they are usually eaten for breakfast with butter and maple syrup. According to The Society for the Propagation of the Johnnycake Tradition in Rhode Island, authentic johnnycakes must be made with whitecap flint corn historically grown in the region around Narragansett Bay. Stone-ground flint corn is not widely available commercially, but can still be found at a few historic gristmills like the Prescott Farm museum in Middletown. Sweetened coffee-flavored dairy products are popular in Rhode Island. Coffee ice cream is popular, and a locally produced coffee gelatin dessert mix can be found at supermarkets.
Coffee milk has been the official state drink since 1993. While its origins may date to the 1930s, when some shopkeepers sweetened leftover coffee grounds with milk and sugar, it is now made with coffee extract syrups like those produced by Autocrat. Also popular in the state are a clear clam chowder known as Rhode Island clam chowder, quahogs, milkshakes (called cabinets), submarine sandwiches (called grinders), pizza strips, the chow mein sandwich, and Del's Frozen Lemonade. Italian cooking is long established in the region. In Rhode Island and other parts of New England with a large Portuguese American population, Portuguese foods are common, including linguiça, chouriço, caldo verde, malasadas, and Portuguese sweet bread. Vermont Vermont produces cheddar cheese and other dairy products. Small cheesemakers recognized for producing hand-crafted cheddar cheeses include the Crowley Cheese Factory, Grafton Village Cheese Company, and Shelburne Farms. The Vermonter sandwich is made with cold cuts (often turkey and ham), apple, sharp Vermont cheddar and maple mustard (a mix of maple syrup and grainy mustard); the toasted sandwich is served warm. Vermont is known in and outside of New England for its maple syrup, which is used as an ingredient in some Vermont dishes, including baked beans. Rhubarb pie is a common dessert and is often combined with strawberries in late spring. Restaurants and pubs The oldest continuously operating restaurant in the United States is the Union Oyster House (1826), located in Boston. The oldest operating restaurant overall is the White Horse Tavern in Newport, Rhode Island, which has at one point since its inception closed for renovations. Legal Sea Foods is a chain restaurant that began by selling fresh fish and fish and chips. The original 1950 shop was located at Cambridge's Inman Square. Woodman's of Essex began selling homemade potato chips in 1914. Their signature dish of fried clams was introduced only a few years later, in 1916.
Their chowder has won prizes at the annual Essex Clamfest. Friendly's was founded in 1935, during the Great Depression, in Springfield, Massachusetts as an ice-cream parlor selling two scoops for a nickel. By 1960, the company offered 63 flavors of ice cream, was producing 25 million gallons per year, and had moved its headquarters to Wilbraham. It only became a full-service chain restaurant after being acquired by Donald Smith in 1988. At local shops along the North Shore of Massachusetts, "three-way" roast beef sandwiches are often served on an onion roll and topped with mayo, barbecue sauce and white American cheese. Kelly's Roast Beef claims to have originated the first roast beef sandwich. Open-faced roast beef sandwiches predate Kelly's version but are typically eaten with a knife and fork. Other well-known North Shore roast beef shops include Londi's and Bill & Bob's. D'Angelo's is a regional chain with locations
popular early television shows. Among the latter were Sid Caesar's Your Show of Shows (where in 1950 he worked alongside other young writers including Carl Reiner, Mel Brooks, Woody Allen, Larry Gelbart and Selma Diamond), and The Phil Silvers Show, which ran from 1955 to 1959. His first produced play was Come Blow Your Horn (1961). It took him three years to complete and ran for 678 performances on Broadway. It was followed by two more successes, Barefoot in the Park (1963) and The Odd Couple (1965). He won a Tony Award for the latter. It made him a national celebrity and "the hottest new playwright on Broadway". From the 1960s to the 1980s he wrote for stage and screen; some of his screenplays were based on his own works for the stage. His style ranged from farce to romantic comedy to more serious dramatic comedy. Overall, he garnered 17 Tony nominations and won three awards. In 1966, he had four successful productions running on Broadway at the same time, and in 1983 he became the only living playwright to have a New York theatre, the Neil Simon Theatre, named in his honor. Early years Neil Simon was born on July 4, 1927, in The Bronx, New York City, to Jewish parents. His father, Irving Simon, was a garment salesman, and his mother, Mamie (Levy) Simon, was mostly a homemaker. Neil had one brother, eight years his senior, television writer and comedy teacher Danny Simon. He grew up in Washington Heights, Manhattan, and graduated from DeWitt Clinton High School when he was sixteen. He was nicknamed 'Doc', and the school yearbook described him as extremely shy. Simon's childhood was marked by his parents' "tempestuous marriage" and the financial hardship caused by the Depression. Sometimes at night he blocked out their arguments by putting a pillow over his ears. His father often abandoned the family for months at a time, causing them further financial and emotional suffering. 
As a result, the family took in boarders, and Simon and his brother Danny were sometimes forced to live with different relatives. During an interview with writer Lawrence Grobel, Simon said: "To this day I never really knew what the reason for all the fights and battles were about between the two of them ... She'd hate him and be very angry, but he would come back and she would take him back. She really loved him." Simon has said that one of the reasons he became a writer was to fulfill a need to be independent of such emotional family issues, a need he recognized when he was seven or eight: "I'd better start taking care of myself somehow ... It made me strong as an independent person." He was able to do that at the movies, in the work of stars like Charlie Chaplin, Buster Keaton, and Laurel and Hardy. "I was constantly being dragged out of movies for laughing too loud." Simon acknowledged these childhood films as his inspiration: "I wanted to make a whole audience fall onto the floor, writhing and laughing so hard that some of them pass out." He made writing comedy his long-term goal, and also saw it as a way to connect with people. "I was never going to be an athlete or a doctor." He began writing for pay while still in high school: at the age of fifteen, Simon and his brother created a series of comedy sketches for employees at an annual department store event. To help develop his writing skill, he often spent three days a week at the library reading books by famous humorists such as Mark Twain, Robert Benchley, George S. Kaufman and S. J. Perelman. Soon after graduating from high school, he signed up with the Army Air Force Reserve at New York University. He attained the rank of corporal and was eventually sent to Colorado. During those years in the Reserve, Simon wrote professionally, starting as a sports editor. He was assigned to Lowry Air Force Base during 1945 and attended the University of Denver from 1945 to 1946.
Writing career Television Simon quit his job as a mailroom clerk in the Warner Brothers offices in Manhattan to write radio and television scripts with his brother Danny Simon, under the tutelage of radio humorist Goodman Ace, who ran a short-lived writing workshop for CBS. Their work for the radio series The Robert Q. Lewis Show led to other writing jobs. Max Liebman hired the duo for the writing team of his popular television comedy series Your Show of Shows. The program received Emmy Award nominations for Best Variety Show in 1951, 1952, 1953, and 1954, and won in 1952 and 1953. Simon later wrote scripts for The Phil Silvers Show, for episodes broadcast during 1958 and 1959. Simon later recalled the importance of these two writing jobs to his career: "Between the two of them, I spent five years and learned more about what I was eventually going to do than in any other previous experience." "I knew when I walked into Your Show of Shows, that this was the most talented group of writers that up until that time had ever been assembled together." Simon described a typical writing session: Simon incorporated some of these experiences into his play Laughter on the 23rd Floor (1993). A 2001 TV adaptation of the play won him two Emmy Award nominations. Stage His first Broadway experience was on Catch a Star! (1955); he collaborated on sketches with his brother, Danny. In 1961, Simon's first Broadway play, Come Blow Your Horn, ran for 678 performances at the Brooks Atkinson Theatre. Simon took three years to create that first play, partly because he was also working on television scripts. He rewrote it at least twenty times from beginning to end: "It was the lack of belief in myself", he recalled. "I said, 'This isn't good enough. It's not right.' ... It was the equivalent of three years of college." Besides being a "monumental effort" for Simon, that play was a turning point in his career: "The theater and I discovered each other." 
Barefoot in the Park (1963) and The Odd Couple (1965), for which he won a Tony Award, brought him national celebrity, and he was considered "the hottest new playwright on Broadway", according to Susan Koprince. Those successes were followed by others. During 1966, Simon had four shows playing simultaneously at Broadway theatres: Sweet Charity, The Star-Spangled Girl, The Odd Couple and Barefoot in the Park. These earned him royalties of $1 million a year. His professional association with producer Emanuel Azenberg began with The Sunshine Boys and continued with The Good Doctor, God's Favorite, Chapter Two, They're Playing Our Song, I Ought to Be in Pictures, Brighton Beach Memoirs, Biloxi Blues, Broadway Bound, Jake's Women, The Goodbye Girl and Laughter on the 23rd Floor, among others. His work ranged from romantic comedies to serious drama. Overall, he received seventeen Tony nominations and won three awards. Simon also adapted material originated by others, such as the musical Little Me (1962), based on the novel by Patrick Dennis; Sweet Charity (1966) from the screenplay for the film Nights of Cabiria (1957), written by Federico Fellini and others; and Promises, Promises (1968) a musical version of Billy Wilder's film, The Apartment. By the time of Last of the Red Hot Lovers in 1969, Simon was reputedly earning $45,000 a week from his shows (excluding sale of rights), making him the most financially successful Broadway writer ever. Simon also served as an uncredited "script doctor", helping to hone the books of Broadway-bound plays or musicals under development, as he did for A Chorus Line (1975). During the 1970s, he wrote a string of successful plays; sometimes more than one was playing at the same time, to standing room only audiences. Although he was, by then, recognized as one of the country's leading playwrights, his inner drive kept him writing: Simon drew "extensively on his own life and experience" for his stories. 
His settings are typically working-class New York City neighborhoods, similar to the ones in which he grew up. In 1983, he began writing the first of three autobiographical plays, Brighton Beach Memoirs (1983), which would be followed by Biloxi Blues (1985) and Broadway Bound (1986).
He received his greatest critical acclaim for this trilogy. He received a Pulitzer Prize for his follow-up play, Lost in Yonkers (1991), which starred Mercedes Ruehl and was a success on Broadway. Following Lost in Yonkers, Simon's next several plays did not meet with commercial success. The Dinner Party (2000), which starred Henry Winkler and John Ritter, was "a modest hit". Simon's final play, Rose's Dilemma, premiered in 2003 and received poor reviews. Simon is credited as playwright and contributing writer to at least 49 Broadway plays. Screen Simon chose not to write the screenplay for the first film adaptation of his work, Come Blow Your Horn (1963), preferring to focus on his playwriting. However, he was disappointed with the picture, and thereafter tried to control the conversion of his works. Simon wrote screenplays for more than twenty films and received four Academy Award nominations—for The Odd Couple (1969), The Sunshine Boys (1975), The Goodbye Girl (1977) and California Suite (1978). Other movies include The Out-of-Towners (1970) and Murder by Death (1976). Although most of his films were successful, movies were always of secondary importance to his plays: Many of his earlier adaptations of his own work were very similar to the original plays. Simon observed in hindsight: "I really didn't have an interest in films then. I was mainly interested in continuing writing for
to propose a similar agreement in an effort to bring in foreign investment following the Latin American debt crisis. As the two leaders began negotiating, the Canadian government under Prime Minister Brian Mulroney feared that the advantages Canada had gained through the Canada–US FTA would be undermined by a US–Mexican bilateral agreement, and asked to become a party to the US–Mexican talks. Signing Following diplomatic negotiations dating back to 1990, the leaders of the three nations signed the agreement in their respective capitals on December 17, 1992. The signed agreement then needed to be ratified by each nation's legislative or parliamentary branch. Ratification Canada The earlier Canada–United States Free Trade Agreement had been controversial and divisive in Canada, and featured as an issue in the 1988 Canadian election. In that election, more Canadians voted for anti-free trade parties (the Liberals and the New Democrats), but the split of the votes between the two parties meant that the pro-free trade Progressive Conservatives (PCs) came out of the election with the most seats and so took power. Mulroney and the PCs had a parliamentary majority and easily passed the 1987 Canada–US FTA and NAFTA bills. However, Mulroney was replaced as Conservative leader and prime minister by Kim Campbell. Campbell led the PC party into the 1993 election, where they were decimated by the Liberal Party under Jean Chrétien, who campaigned on a promise to renegotiate or abrogate NAFTA. Bush, who had subverted the LAC advisory process and worked to "fast track" the signing prior to the end of his term, ran out of time and had to pass the required ratification and signing of the implementation law to the incoming president, Bill Clinton; Chrétien subsequently negotiated two supplemental agreements with the new administration.
United States Before sending it to the United States Senate, Clinton added two side agreements, the North American Agreement on Labor Cooperation (NAALC) and the North American Agreement on Environmental Cooperation (NAAEC), to protect workers and the environment, and to also allay the concerns of many House members. The U.S. required its partners to adhere to environmental practices and regulations similar to its own. After much consideration and emotional discussion, the U.S. House of Representatives passed the North American Free Trade Agreement Implementation Act on November 17, 1993, 234–200. The agreement's supporters included 132 Republicans and 102 Democrats. The bill passed the Senate on November 20, 1993, 61–38. Senate supporters were 34 Republicans and 27 Democrats. Republican Representative David Dreier of California, a strong proponent of NAFTA since the Reagan Administration, played a leading role in mobilizing support for the agreement among Republicans in Congress and across the country. Clinton signed it into law on December 8, 1993; the agreement went into effect on January 1, 1994. At the signing ceremony, Clinton recognized four individuals for their efforts in accomplishing the historic trade deal: Vice President Al Gore, Chairwoman of the Council of Economic Advisers Laura Tyson, Director of the National Economic Council Robert Rubin, and Republican Congressman David Dreier. Clinton also stated that "NAFTA means jobs. American jobs, and good-paying American jobs. If I didn't believe that, I wouldn't support this agreement." NAFTA replaced the previous Canada-US FTA. Mexico NAFTA (TLCAN in Spanish) was approved by the Mexican Senate on November 22, 1993, and was published in the Official Gazette of the Federation on December 8, 1993. The decree implementing NAFTA and the various changes to accommodate NAFTA in Mexican law was promulgated on December 14, 1993, with entry into force on January 1, 1994. 
Provisions The goal of NAFTA was to eliminate barriers to trade and investment between the U.S., Canada and Mexico. The implementation of NAFTA on January 1, 1994, brought the immediate elimination of tariffs on more than one-half of Mexico's exports to the U.S. and more than one-third of U.S. exports to Mexico. Within 10 years of the implementation of the agreement, all U.S.–Mexico tariffs were to be eliminated except for some U.S. agricultural exports to Mexico, to be phased out within 15 years. Most U.S.–Canada trade was already duty-free. NAFTA also sought to eliminate non-tariff trade barriers and to protect the intellectual property rights on traded products. Chapter 20 provided a procedure for the international resolution of disputes over the application and interpretation of NAFTA. It was modeled after Chapter 69 of the Canada–United States Free Trade Agreement. NAFTA is, in part, implemented by Technical Working Groups composed of government officials from each of the three partner nations. Intellectual property The North American Free Trade Agreement Implementation Act made some changes to the copyright law of the United States, foreshadowing the Uruguay Round Agreements Act of 1994 by restoring copyright (within the NAFTA nations) on certain motion pictures which had entered the public domain. Environment The Clinton administration negotiated a side agreement on the environment with Canada and Mexico, the North American Agreement on Environmental Cooperation (NAAEC), which led to the creation of the Commission for Environmental Cooperation (CEC) in 1994. 
To alleviate concerns that NAFTA, the first regional trade agreement between a developing country and two developed countries, would have negative environmental impacts, the commission was mandated to conduct ongoing ex post environmental assessment. It created one of the first ex post frameworks for environmental assessment of trade liberalization, designed to produce a body of evidence with respect to the initial hypotheses about NAFTA and the environment, such as the concern that NAFTA would create a "race to the bottom" in environmental regulation among the three countries, or that NAFTA would pressure governments to increase their environmental protections. The CEC has held four symposia to evaluate the environmental impacts of NAFTA and commissioned 47 papers on the subject from leading independent experts. Labor Proponents of NAFTA in the United States emphasized that the pact was a free-trade, not an economic-community, agreement. The freedom of movement it establishes for goods, services and capital did not extend to labor. In proposing what no other comparable agreement had attempted—to open industrialized countries to "a major Third World country"—NAFTA eschewed the creation of common social and employment policies. The regulation of the labor market and/or the workplace remained the exclusive preserve of the national governments. A "side agreement" on enforcement of existing domestic labor law, concluded in August 1993, the North American Agreement on Labour Cooperation (NAALC), was highly circumscribed. Focused on health and safety standards and on child labor law, it excluded issues of collective bargaining, and its "so-called [enforcement] teeth" were accessible only at the end of "a long and tortuous" disputes process. Commitments to enforce existing labor law also raised issues of democratic practice.
The Canadian anti-NAFTA coalition, Pro-Canada Network, suggested that guarantees of minimum standards would be "meaningless" without "broad democratic reforms in the [Mexican] courts, the unions, and the government". Later assessment, however, did suggest that NAALC's principles and complaint mechanisms did "create new space for advocates to build coalitions and take concrete action to articulate challenges to the status quo and advance workers’ interests". Agriculture From the earliest negotiation, agriculture was a controversial topic within NAFTA, as it has been with almost all free trade agreements signed within the WTO framework. Agriculture was the only section that was not negotiated trilaterally; instead, three separate agreements were signed between each pair of parties. The Canada–U.S. agreement contained significant restrictions and tariff quotas on agricultural products (mainly sugar, dairy, and poultry products), whereas the Mexico–U.S. pact allowed for a wider liberalization within a framework of phase-out periods (it was the first North–South FTA on agriculture to be signed). Transportation infrastructure NAFTA established the CANAMEX Corridor for road transport between Canada and Mexico, also proposed for use by rail, pipeline, and fiber optic telecommunications infrastructure. This became a High Priority Corridor under the U.S. Intermodal Surface Transportation Efficiency Act of 1991. Chapter 11 – investor-state dispute settlement procedures Another contentious issue was the investor-state dispute settlement obligations contained in Chapter 11 of NAFTA. Chapter 11 allowed corporations or individuals to sue Mexico, Canada or the United States for compensation when actions taken by those governments (or by those for whom they are responsible at international law, such as provincial, state, or municipal governments) violated international law. 
This chapter has been criticized by groups in the United States, Mexico, and Canada for a variety of reasons, including not taking into account important social and environmental considerations. In Canada, several groups, including the Council of Canadians, challenged the constitutionality of Chapter 11. They lost at the trial level and the subsequent appeal. Methanex Corporation, a Canadian corporation, filed a US$970 million suit against the United States. Methanex claimed that a California ban on methyl tert-butyl ether (MTBE), a substance that had found its way into many wells in the state, was hurtful to the corporation's sales of methanol. The claim was rejected, and the company was ordered to pay US$3 million to the U.S. government in costs, based on the following reasoning: "But as a matter of general international law, a non-discriminatory regulation for a public purpose, which is enacted in accordance with due process and, which affects, inter alios, a foreign investor or investment is not deemed expropriatory and compensable unless specific commitments had been given by the regulating government to the then putative foreign investor contemplating investment that the government would refrain from such regulation." In another case, Metalclad, an American corporation, was awarded US$15.6 million from Mexico after a Mexican municipality refused a construction permit for the hazardous waste landfill it intended to construct in Guadalcázar, San Luis Potosí. The construction had already been approved by the federal government with various environmental requirements imposed (see paragraph 48 of the tribunal decision). The NAFTA panel found that the municipality did not have the authority to ban construction on the basis of its environmental concerns. In Eli Lilly and Company v. Government of Canada the plaintiff presented a US$500 million claim for the way Canada requires usefulness in its drug patent legislation. Apotex sued the U.S.
for US$520 million because of the opportunity it says it lost in an FDA generic drug decision. In Lone Pine Resources Inc. v. Government of Canada, the company filed a US$250 million claim against Canada, accusing it of "arbitrary, capricious and illegal" behaviour, because Quebec intends to prevent fracking exploration under the St. Lawrence Seaway. Lone Pine Resources is incorporated in Delaware but headquartered in Calgary, and had an initial public offering on the NYSE May 25, 2011, of 15 million shares at $13 each, which raised US$195 million. Barutciski acknowledged "that NAFTA and other investor-protection treaties create an anomaly in that Canadian companies that have also seen their permits rescinded by the very same Quebec legislation, which expressly forbids the paying of compensation, do not have the right (to) pursue a NAFTA claim", and that winning "compensation in Canadian courts for domestic companies in this case would be more difficult since the Constitution puts property rights in provincial hands". A treaty with China would extend similar rights to Chinese investors, including SOEs. Chapter 19 – countervailing duty NAFTA's Chapter 19 was a trade dispute mechanism which subjects antidumping and countervailing duty (AD/CVD) determinations to binational panel review instead of, or in addition to, conventional judicial review. For example, in the United States, review of agency decisions imposing antidumping and countervailing duties are normally heard before the U.S. Court of International Trade, an Article III court. NAFTA parties, however, had the option of appealing the decisions to binational panels composed of five citizens from the two relevant NAFTA countries. The panelists were generally lawyers experienced in international trade law. Since NAFTA did not include substantive provisions concerning AD/CVD, the panel was charged with determining whether final agency determinations involving AD/CVD conformed with the country's domestic law.
Chapter 19 was an anomaly in international dispute settlement since it did not apply international law, but required a panel composed of individuals from many countries to re-examine the application of one country's domestic law. A Chapter 19 panel was expected to examine whether the agency's determination was supported by "substantial evidence". This standard assumed significant deference to the domestic agency. Some of the most controversial trade disputes in recent years, such as the U.S.–Canada softwood lumber dispute, have been litigated before Chapter 19 panels. Decisions by Chapter 19 panels could be challenged before a NAFTA extraordinary challenge committee. However, an extraordinary challenge committee did not function as an ordinary appeal. Under NAFTA, it only vacated or remanded a decision if the decision involved a significant and material error that threatened the integrity of the NAFTA dispute settlement system. Since January 2006, no NAFTA party had successfully challenged a Chapter 19 panel's decision before an extraordinary challenge committee. Adjudication The roster of NAFTA adjudicators included many retired judges, such as Alice Desjardins, John Maxwell Evans, Constance Hunt, John Richard, Arlin Adams, Susan Getzendanner, George C. Pratt, Charles B. Renfrew and Sandra Day O'Connor. Impact Canada Historical context In 2008, Canadian exports to the United States and Mexico were at $381.3 billion, with imports at $245.1 billion. According to a 2004 article by University of Toronto economist Daniel Trefler, NAFTA produced a significant net benefit to Canada in 2003, with long-term productivity increasing by up to 15 percent in industries that experienced the deepest tariff cuts. While the contraction of low-productivity plants reduced employment (up to 12 percent of existing positions), these job losses lasted less than a decade; overall, unemployment in Canada has fallen since the passage of the act.
Commenting on this trade-off, Trefler said that the critical question in trade policy is to understand "how freer trade can be implemented in an industrialized economy in a way that recognizes both the long-run gains and the short-term adjustment costs borne by workers and others". A study in 2007 found that NAFTA had "a substantial impact on international trade volumes, but a modest effect on prices and welfare". According to a 2012 study, with reduced NAFTA trade tariffs, trade with the United States and Mexico increased by only a modest 11% in Canada, compared to an increase of 41% for the U.S. and 118% for Mexico. Moreover, the U.S. and Mexico benefited more from the tariff reductions component, with welfare increases of 0.08% and 1.31%, respectively, while Canada experienced a decrease of 0.06%. Current issues According to a 2017 report by the Council on Foreign Relations (CFR), a New York City-based public policy think tank, bilateral trade in agricultural products tripled in size from 1994 to 2017 and is considered one of the largest economic effects of NAFTA on U.S.-Canada trade, with Canada becoming the U.S. agricultural sector's leading importer. Canadian fears of losing manufacturing jobs to the United States did not materialize, with manufacturing employment holding "steady". However, with Canada's labour productivity levels at 72% of U.S. levels, the hopes of closing the "productivity gap" between the two countries were also not realized. According to a 2018 Sierra Club report, Canada's commitments under NAFTA and the Paris agreement conflicted: the Paris commitments were voluntary, while NAFTA's were compulsory. 
According to a 2018 report by Gordon Laxer published by the Council of Canadians, NAFTA's Article 605 energy proportionality rule ensured that Americans had "virtually unlimited first access to most of Canada's oil and natural gas" and that Canada could not reduce oil, natural gas, and electricity exports to the U.S. (74% of its oil and 52% of its natural gas), even if Canada was experiencing shortages. Laxer argued that these provisions, which seemed logical when NAFTA was signed in 1993, were no longer appropriate. The Council of Canadians promoted environmental protection and was against NAFTA's role in encouraging development of the tar sands and fracking. US President Donald Trump, angered by Canada's dairy tax of "almost 300%", threatened to leave Canada out of NAFTA. Since 1972, Canada has operated a "supply management" system, which the United States is attempting to pressure it out of, specifically focusing on the dairy industry. However, this has not yet taken place, as Quebec, which holds approximately half the country's dairy farms, still supports supply management. Mexico Maquiladoras (Mexican assembly plants that take in imported components and produce goods for export) became the landmark of trade in Mexico. They moved to Mexico from the United States, hence the debate over the loss of American jobs. Income in the maquiladora sector had increased 15.5% since the implementation of NAFTA in 1994. Other sectors also benefited from the free trade agreement, and the share of exports to the U.S. from non-border states increased in the last five years while the share of exports from border states decreased. This allowed for rapid growth in non-border metropolitan areas such as Toluca, León, and Puebla, which were all larger in population than Tijuana, Ciudad Juárez, and Reynosa. The overall effect of the Mexico–U.S. agricultural agreement is disputed. Mexico did not invest in the infrastructure necessary for competition, such as efficient railroads and highways. 
This resulted in more difficult living conditions for the country's poor. Mexico's agricultural exports increased 9.4 percent annually between 1994 and 2001, while imports increased by only 6.9 percent a year during the same period. One of the most affected agricultural sectors was the meat industry. Mexico went from a small player in the pre-1994 U.S. export market to the second largest importer of U.S. agricultural products in 2004, and NAFTA may have been a major catalyst for this change. Free trade removed the hurdles that impeded business between the two countries, so Mexico provided a growing market for U.S. meat, increasing sales and profits for the U.S. meat industry. A coinciding noticeable increase in Mexican per capita GDP greatly changed meat consumption patterns as per capita meat consumption grew. Corn production in Mexico has increased since NAFTA. However, internal demand for corn increased beyond Mexico's supply to the point where imports became necessary, far beyond the quotas Mexico originally negotiated. Zahniser & Coyle pointed out that corn prices in Mexico, adjusted for international prices, have drastically decreased, but that through a program of subsidies expanded by former president Vicente Fox, production has remained stable since 2000. Reducing agricultural subsidies, especially corn subsidies, was suggested as a way to reduce harm to Mexican farmers.
A 2001 Journal of Economic Perspectives review of the existing literature found that NAFTA was a net benefit to Mexico. By the year 2003, 80% of the commerce in Mexico was executed only with the U.S. The commercial sales surplus, combined with the deficit with the rest of the world, made Mexico's exports dependent on the U.S. market. These effects were evident in the 2001 recession, which resulted in either a low or negative growth rate in Mexico's exports. A 2015 study found that Mexico's welfare increased by 1.31% as a result of the NAFTA tariff reductions and that Mexico's intra-bloc trade increased by 118%. Inequality and poverty fell in the regions of Mexico most affected by globalization. Studies in 2013 and 2015 showed that Mexican small farmers benefited more from NAFTA than large-scale farmers. NAFTA has also been credited with the rise of the Mexican middle class. A Tufts University study found that NAFTA lowered the average cost of basic necessities in Mexico by up to 50%. This price reduction increased cash-on-hand for many Mexican families, allowing Mexico to graduate more engineers than Germany each year. Growth in new sales orders indicated an increase in demand for manufactured products, which resulted in expanded production and higher employment to satisfy the increased demand. Growth in the maquiladora and manufacturing industries was 4.7% in August 2016. Three quarters of Mexico's imports and exports are with the U.S. Tufts University political scientist Daniel W. Drezner argued that NAFTA made it easier for Mexico to transform into a real democracy and become a country that views itself as North American. This has boosted cooperation between the United States and Mexico. United States Economists generally agreed that the United States economy benefited overall from NAFTA as it increased trade. In a 2012 survey of the Initiative on Global Markets' Economic Experts Panel, 95% of the participants said that, on average, U.S. 
citizens benefited from NAFTA while none said that NAFTA hurt US citizens, on average. A 2001 Journal of Economic Perspectives review found that NAFTA was a net benefit to the United States. A 2015 study found that US welfare increased by 0.08% as a result of NAFTA tariff reductions, and that US intra-bloc trade increased by 41%. A 2014 study on the effects of NAFTA on US trade, jobs, and investment found that between 1993 and 2013, the US trade deficit with Mexico and Canada increased from $17.0 billion to $177.2 billion, displacing 851,700 US jobs. In 2015, the Congressional Research Service concluded that the "net overall effect of NAFTA on the US economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of US GDP. However, there were worker and firm adjustment costs as the three countries adjusted to more open trade and investment among their economies." The report also estimated that NAFTA added $80 billion to the US economy since its implementation, equivalent to a 0.5% increase in US GDP. The US Chamber of Commerce credited NAFTA with increasing U.S. trade in goods and services with Canada and Mexico from $337 billion in 1993 to $1.2 trillion in 2011, while the AFL–CIO blamed the agreement for sending 700,000 American manufacturing jobs to Mexico over that time. University of California, San Diego economics professor Gordon Hanson said that NAFTA helped the US compete against China and therefore saved US jobs. While some jobs were lost to Mexico as a result of NAFTA, considerably more would have been lost to China if not for NAFTA. Trade balances The US had a trade surplus with NAFTA countries of $28.3 billion for services in 2009 and a trade deficit of $94.6 billion (36.4% annual increase) for goods in 2010. This accounted for 26.8% of the total US goods trade deficit. 
A 2018 study of global trade published by the Center for International Relations identified irregularities in the trade patterns of the NAFTA ecosystem using network-theory analytical techniques. The study showed that the US trade balance was influenced by tax avoidance opportunities provided in Ireland. A study published in the August 2008 issue of the American Journal of Agricultural Economics found that NAFTA increased US agricultural exports to Mexico and Canada, even though most of the increase occurred a decade after its ratification. The study focused on the effects that gradual "phase-in" periods in regional trade agreements, including NAFTA, have on trade flows. Most of the increases in members' agricultural trade, which was only recently brought under the purview of the World Trade Organization, were due to very high trade barriers before NAFTA or other regional trade agreements. Investment The U.S. foreign direct investment (FDI) in NAFTA countries (stock) was $327.5 billion in 2009 (latest data available), up 8.8% from 2008. The US direct investment in NAFTA countries was in non-bank holding companies and the manufacturing, finance/insurance, and mining sectors. The foreign direct investment of Canada and Mexico in the United States (stock) was $237.2 billion in 2009 (the latest data available), up 16.5% from 2008. Economy and jobs In their May 24, 2017 report, the Congressional Research Service (CRS) wrote that the economic impacts of NAFTA on the U.S. economy were modest. In a 2015 report, the Congressional Research Service summarized multiple studies as follows: "In reality, NAFTA did not cause the huge job losses feared by the critics or the large economic gains predicted by supporters. The net overall effect of NAFTA on the U.S. economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of U.S. GDP. 
However, there were worker and firm adjustment costs as the three countries adjusted to more open trade and investment among their economies." Many American small businesses depended on exporting their products to Canada or Mexico under NAFTA. According to the U.S. Trade Representative, this trade supported over 140,000 small- and medium-sized businesses in the US. According to University of California, Berkeley professor of economics Brad DeLong, NAFTA had an insignificant impact on US manufacturing. The adverse impact on manufacturing was exaggerated in US political discourse, according to DeLong and Harvard economist Dani Rodrik. According to a 2013 article by Jeff Faux published by the Economic Policy Institute, California, Texas, Michigan and other states with high concentrations of manufacturing jobs were most affected by job loss due to NAFTA. According to a 2011 article by EPI economist Robert Scott, about 682,900 U.S. jobs were "lost or displaced" as a result of the trade agreement. More recent studies agreed with reports by the Congressional Research Service that NAFTA had only a modest impact on manufacturing employment and that automation explained 87% of the losses in manufacturing jobs. Environment According to a study in the Journal of International Economics, NAFTA reduced pollution emitted by the US manufacturing sector: "On average, nearly two-thirds of the reductions in coarse particulate matter (PM10) and sulfur dioxide (SO2) emissions from the U.S. manufacturing sector between 1994 and 1998 can be attributed to trade liberalization following NAFTA." According to the Sierra Club, NAFTA contributed to large-scale, export-oriented farming, which led to increased use of fossil fuels, pesticides, and GMOs. NAFTA also contributed to environmentally destructive mining practices in Mexico. It prevented Canada from effectively regulating its tar sands industry, and created new legal avenues for transnational corporations to fight environmental legislation. 
In some cases, environmental policy was neglected in the wake of trade liberalization; in other cases, NAFTA's measures for investment protection, such as Chapter 11, and measures against non-tariff trade barriers threatened to discourage more vigorous environmental policy. The most serious overall increases in pollution due to NAFTA were found in the base metals sector, the Mexican petroleum sector, and the transportation equipment sector in the United States and Mexico, but not in Canada. Mobility of persons According to the Department of Homeland Security Yearbook of Immigration Statistics, during fiscal year 2006 (October 2005 – September 2006), 73,880 foreign professionals (64,633 Canadians and 9,247 Mexicans) were admitted into the United States for temporary employment under NAFTA (i.e., in the TN status). Additionally, 17,321 of their family members (13,136 Canadians, 2,904 Mexicans, as well as a number of third-country nationals married to Canadians and Mexicans) entered the U.S. in the treaty national's dependent (TD) status. Because DHS counts the number of new I-94 arrival records filed at the border, and the TN-1 admission is valid for three years, the number of non-immigrants in TN status present in the U.S. at the end of the fiscal year is approximately equal to the number of admissions during the year. (A discrepancy may be caused by some TN entrants leaving the country or changing status before their three-year admission period has expired, while other immigrants admitted earlier may change their status to TN or TD, or extend TN status granted earlier). According to the International Organization for Migration, deaths of migrants have been on the rise worldwide, with 5,604 deaths in 2016. An increased number of undocumented farmworkers in California may be due to the initial passing of NAFTA. Canadian authorities estimated that on December 1, 2006, 24,830 U.S. citizens and 15,219 Mexican citizens were in Canada as "foreign workers". 
These numbers include both entrants under NAFTA and those who entered under other provisions of Canadian immigration law. New entries of foreign workers in 2006 totalled 16,841 U.S. citizens and 13,933 Mexicans. Disputes and controversies 1992 U.S. presidential candidate Ross Perot In the second 1992 presidential debate, Ross Perot argued: Perot ultimately lost the election, and the winner, Bill Clinton, supported NAFTA, which went into effect on January 1, 1994. Legal disputes In 1996, the gasoline additive MMT was brought to Canada by Ethyl Corporation, an American company, when the Canadian federal government banned imports of the additive. The American company brought a claim under NAFTA Chapter 11 seeking US$201 million from the Canadian federal government as well as the Canadian provinces under the Agreement on Internal Trade (AIT). They argued that the additive had not been conclusively linked to any health dangers, and that the prohibition was damaging to their company. Following a finding that the ban was a violation of the AIT, the Canadian federal government repealed the ban and settled with the American company for US$13 million. Studies by Health and Welfare Canada (now Health Canada) on the health effects of MMT in fuel found no significant health effects associated with exposure to these exhaust emissions. Other Canadian researchers and the U.S. Environmental Protection Agency disagreed, citing studies that suggested possible nerve damage. The United States and Canada argued for years over the United States' 27% duty on Canadian softwood lumber imports. Canada filed many motions to have the duty eliminated and the collected duties returned to Canada. After the United States lost an appeal before a NAFTA panel, a spokesperson for U.S. Trade Representative Rob Portman responded by saying: "we are, of course, disappointed with the [NAFTA panel's] decision, but it will have no impact on the anti-dumping and countervailing duty orders." 
On July 21, 2006, the United States Court of International Trade found that imposition of the duties was contrary to U.S. law. Change in income trust taxation not expropriation On October 30, 2007, American citizens Marvin and Elaine Gottlieb filed a Notice of Intent to Submit a Claim to Arbitration under NAFTA, claiming thousands of U.S. investors lost a total of $5 billion in the fall-out from the Conservative Government's decision the previous year to change the tax rate on income trusts in the energy sector. On April 29, 2009, a determination was made that this change in tax law was not expropriation. Impact on Mexican farmers Several studies rejected NAFTA's responsibility for depressing the incomes of poor corn farmers. The trend had existed for more than a decade before NAFTA. Also, maize production increased after 1994, and there was no measurable impact on the price of Mexican corn from subsidized corn imports from the United States. The studies agreed that the abolition of U.S. agricultural subsidies would benefit Mexican farmers. Zapatista Uprising in Chiapas, Mexico Preparations for NAFTA included the cancellation of Article 27 of Mexico's constitution, the cornerstone of Emiliano Zapata's revolution of 1910–1919. Under the historic Article 27, indigenous communal landholdings were protected from sale or privatization. However, this barrier to investment was incompatible with NAFTA. Indigenous farmers feared the loss of their remaining lands, as well as cheap imports (substitutes) from the US. The Zapatistas labelled NAFTA a "death sentence" for indigenous communities all over Mexico and later declared war on the Mexican state on January 1, 1994, the day NAFTA came into force. Criticism from 2016 U.S. presidential candidates In a 60 Minutes interview in September 2015, 2016 presidential candidate Donald Trump called NAFTA "the single worst trade deal ever approved in [the United States]", and said that if elected, he would "either renegotiate it, or we will break it". 
The president of the trade group Consejo Coordinador Empresarial expressed concern about renegotiation and the willingness to focus on the car industry. A range of trade experts said that pulling out of NAFTA would have a range of unintended consequences for the United States, including reduced access to its biggest export markets, a reduction in economic growth, and higher prices for gasoline, cars, fruits, and vegetables. Members of the private initiative in Mexico noted that eliminating NAFTA would require many laws to be adapted by the U.S. Congress. The move would also eventually result in legal complaints by the World Trade Organization. The Washington Post noted that a Congressional Research Service review of academic literature concluded that the "net overall effect of NAFTA on the U.S. economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of U.S. GDP". Democratic candidate Bernie Sanders, opposing the Trans-Pacific Partnership trade agreement, called it "a continuation of other disastrous trade agreements, like NAFTA, CAFTA, and permanent normal trade relations with China". He believes that free trade agreements have caused a loss of American jobs and depressed American wages. Sanders said that America needs to rebuild its manufacturing base using American factories for well-paying jobs for American labor rather than outsourcing to China and elsewhere. Policy of the Trump administration Renegotiation Shortly after his election, U.S. President Donald Trump said he would begin renegotiating the terms of NAFTA, to resolve trade issues he had campaigned on. The leaders of Canada and Mexico indicated their willingness to work with the Trump administration. Although vague on the exact terms he sought in a renegotiated NAFTA, Trump threatened to withdraw from it if negotiations failed. In July 2017, the Trump administration provided a detailed list of changes that it would like to see to NAFTA. 
The top priority was a reduction in the United States' trade deficit. The administration also called for the elimination of provisions that allowed Canada and Mexico to appeal duties imposed by the United States and that limited the ability of the United States to impose import restrictions on Canada and Mexico. The list also raised concerns about subsidized state-owned enterprises and currency manipulation. According to Chad Bown of the Peterson Institute for International Economics, the Trump administration's list "is very consistent with the president's stance on liking trade barriers, liking
released in 1992. This was followed by one issue in 1993, five in 1994, and three in 1995. For the last three years of its existence, the magazine was published only once a year. 1998, last issue The magazine's final print publication was November 1998, after which the contract was renegotiated, and in a sharp reversal, J2 Communications was then prohibited from publishing issues of the magazine. J2, however, still owned the rights to the brand name, which it continued to franchise out to other users. In 2002, the use of the brand name and the rights to republish old material were sold to a new, and otherwise unrelated, company which chose to call itself National Lampoon, Incorporated. 2007, DVD-ROM In 2007, in association with Graphic Imaging Technology, Inc., National Lampoon, Inc. released a collection of all 246 issues of the magazine in PDF format, viewable with Adobe Acrobat Reader. The cover of the DVD box featured a remake of the January 1973 "Death" issue, with the caption altered to read "If You Don't Buy This DVD-ROM, We'll Kill This Dog". The pages are viewable on both Windows (starting with Windows 2000) and Macintosh (starting with OS X) systems. Related media During its most active period, the magazine spun off numerous productions in a wide variety of media. National Lampoon released books, special issues, anthologies, and other print pieces, including: Special editions The Best of National Lampoon No. 1, 1971, an anthology The Breast of National Lampoon (a "Best of" No. 2), 1972, an anthology The Best of National Lampoon No. 3, 1973, an anthology, art directed by Michael Gross National Lampoon The Best of #4, 1973, an anthology, art directed by Gross The National Lampoon Encyclopedia of Humor, 1973, edited by Michael O'Donoghue and art directed by Gross. This publication featured the fake Volkswagen ad seen above, which was written by Anne Beatts. 
The spoof was listed in the contents page as "Doyle Dane Bernbach," the name of the advertising agency that had produced the iconic 1960s ad campaign for Volkswagen. According to Mark Simonson's "Very Large National Lampoon Site": "If you buy a copy of this issue, you may find the ad is missing. As a result of a lawsuit by VW over the ad for unauthorized use of their trademark, NatLamp was forced to remove the page (with razor blades!) from any copies they still had in inventory (which, from what I gather, was about half the first printing of 250,000 copies) and all subsequent reprints." National Lampoon Comics, an anthology, 1974, art directed by Gross and David Kaestle National Lampoon The Best of No. 5, 1974, an anthology, art directed by Gross and Kaestle National Lampoon 1964 High School Yearbook Parody, 1974, Edited by P.J. O'Rourke and Doug Kenney, art directed by Kaestle. National Lampoon Presents The Very Large Book of Comical Funnies, 1975, edited by Sean Kelly National Lampoon The 199th Birthday Book, 1975, edited by Tony Hendra National Lampoon The Gentleman's Bathroom Companion, 1975 edited by Hendra, art directed by Peter Kleinman Official National Lampoon Bicentennial Calendar 1976, 1975, written and compiled by Christopher Cerf & Bill Effros National Lampoon Art Poster Book, 1975, Design direction by Peter Kleinman The Best of National Lampoon No. 6, 1976, an anthology National Lampoon The Iron On Book 1976, Original T-shirt designs, edited by Tony Hendra, art directed by Peter Kleinman. National Lampoon Songbook, 1976, edited by Sean Kelly, musical parodies in sheet music form National Lampoon The Naked and the Nude: Hollywood and Beyond, 1977, written by Brian McConnachie The Best of National Lampoon No. 
7, 1977, an anthology National Lampoon Presents French Comics, 1977, edited by Peter Kaminsky, translators Sophie Balcoff, Sean Kelly, and Valerie Marchant National Lampoon The Up Yourself Book, 1977, Gerry Sussman National Lampoon Gentleman's Bathroom Companion 2, 1977, art directed by Peter Kleinman. National Lampoon The Book of Books, 1977, edited by Jeff Greenfield, art directed by Peter Kleinman The Best of National Lampoon No. 8, 1978, an anthology, cover photo by Chris Callis, art directed by Peter Kleinman National Lampoon's Animal House Book, 1978, Chris Miller, Harold Ramis, Doug Kenney, art direction by Peter Kleinman and Judith Jacklin Belushi National Lampoon Sunday Newspaper Parody, 1978 (claiming to be a Sunday issue of the Dacron, Ohio (a spoof on Akron, Ohio) Republican–Democrat, this publication was originally issued in loose newsprint sections, mimicking a genuine American Sunday newspaper), art direction and design by Skip Johnston National Lampoon Presents Claire Bretécher, 1978, work by Claire Bretécher, French satirical cartoonist, Sean Kelly (editor), translator Valerie Marchant Slightly Higher in Canada, 1978, anthology of Canadian humor from National Lampoon, Sean Kelly and Ted Mann (editors) Cartoons Even We Won't Dare Print, 1979, Sean Kelly and John Weidman (editors), Simon and Schuster National Lampoon The Book of Books, 1979, edited by Jeff Greenfield, designed and art directed by Peter Kleinman National Lampoon Tenth Anniversary Anthology 1970–1980, 1979, edited by P.J. O'Rourke, art directed by Peter Kleinman National Lampoon Best Of #9: The Good Parts 1978–1980, 1981, the last anthology. 
Books Would You Buy A Used War From This Man?, 1972, edited by Henry Beard Letters from the Editors of National Lampoon, 1973, edited by Brian McConnachie National Lampoon This Side of Parodies, 1974, edited by Brian McConnachie and Sean Kelly The Paperback Conspiracy, 1974, anthology, Brian McConnachie (editor), Warner Paperback Library The Job of Sex, 1974, edited by Brian McConnachie A Dirty Book!, 1976, sexual humor from the National Lampoon, P.J. O'Rourke (editor), New American Library Another Dirty Book, sexual humor from the National Lampoon, P.J. O'Rourke and Peter Kaminsky (editors) National Lampoon's Doon, 1984 "True Facts" special editions and books National Lampoon True Facts, 1981, compiled by John Bendel, special edition National Lampoon Peekers & Other True Facts, 1982, by John Bendel, special edition National Lampoon Presents True Facts: The Book, 1991, by John Bendel, "Amazing Ads, Stupefying Signs, Weird Wedding Announcements, and Other Absurd-but-True Samples of Real-Life Funny Stuff", trade paperback by Contemporary Press (now McGraw Hill) National Lampoon Presents More True Facts, 1992, Contemporary Press National Lampoon's Big Book of True Facts: Brand-New Collection of Absurd-but-True Real-Life Funny Stuff, 2004 (There were also four all-True-Facts regular issues of the magazine, in 1985, 1986, 1987, and 1988.) Recordings Vinyl Vinyl record albums National Lampoon Radio Dinner, 1972, produced by Tony Hendra Lemmings, 1973, an album of material taken from the stage show Lemmings, produced by Tony Hendra National Lampoon Missing White House Tapes, 1974, an album taken from the radio show, creative directors Tony Hendra and Sean Kelly Official National Lampoon Stereo Test and Demonstration Record, 1974, conceived and written by Ed Subitzky National Lampoon Gold Turkey, 1975, creative director Brian McConnachie. Cover Photography by Chris Callis. 
Art Direction by Peter Kleinman National Lampoon Goodbye Pop 1952–1976, 1975, creative director Sean Kelly National Lampoon That's Not Funny, That's Sick, 1977. Art directed by Peter Kleinman. Illustrated by Sam Gross National Lampoon's Animal House (album), 1978, soundtrack album from the movie Greatest Hits of the National Lampoon, 1978 National Lampoon White Album, 1979 National Lampoon Sex, Drugs, Rock 'N' Roll & the End of the World, 1982 Vinyl singles A snide parody of Les Crane's 1971 hit "Desiderata", written by Tony Hendra, was recorded and released as "Deteriorata", and stayed on the lower reaches of the Billboard magazine charts for a month in late 1972. "Deteriorata" also became one of National Lampoon's best-selling posters. The galumphing theme to Animal House rose slightly higher and charted slightly longer in December 1978. Cassette tape National Lampoon Radio Dinner, 1972, produced by Tony Hendra Lemmings, 1973, an album of material taken from the stage show Lemmings, and produced by Tony Hendra National Lampoon Missing White House Tapes, 1974, an album taken from the radio show, creative directors Tony Hendra and Sean Kelly National Lampoon Gold Turkey, 1975, creative director Brian McConnachie. Cover Photography by Chris Callis. Art Direction by Peter Kleinman National Lampoon Goodbye Pop 1952–1976, 1975, creative director Sean Kelly National Lampoon That's Not Funny, That's Sick, 1977. Art directed by Peter Kleinman. Illustrated by Sam Gross National Lampoon's Animal House (album), 1978, soundtrack album from the movie Greatest Hits of the National Lampoon, 1978 National Lampoon White Album, 1979 The Official National Lampoon Car Stereo Test and Demonstration Tape, 1980, conceived and written by Ed Subitzky National Lampoon Sex, Drugs, Rock 'N' Roll & the End of the World, 1982 CDs A single CD release, National Lampoon Gold Turkey recordings from The National Lampoon Radio Hour, was released by Rhino Records in 1996.
A three-CD boxed set Buy This Box or We'll Shoot This Dog: The Best of the National Lampoon Radio Hour was released in 1996. Many of the older albums that were originally on vinyl have been re-issued as CDs and a number of tracks from certain albums are available as MP3s. Radio The National Lampoon Radio Hour was a nationally syndicated radio comedy show which was on the air weekly from 1973 to 1974. For a complete listing of shows, see. Former Lampoon editor Tony Hendra later revived this format in 2012 for The Final Edition Radio Hour, which became a podcast for National Lampoon, Inc. in 2015. True Facts, 1977–1978, written by and starring Peter Kaminsky, Ellis Weiner, Danny Abelson, Sylvia Grant Theater Lemmings (1973) was National Lampoon's most successful theatrical venture. The off-Broadway production took the form of a parody of the Woodstock Festival. Co-written by Tony Hendra and Sean Kelly, and directed and produced by Hendra, it introduced John Belushi, Chevy Chase and Christopher Guest in their first major roles. The show formed several companies and ran for a year at New York's Village Gate. A touring
Animal House, Caddyshack, Ghostbusters, and many more. Brian Doyle-Murray has had roles in dozens of films, and Belzer is an Emmy Award-winning TV actor. Gerald L. "Jerry" Taylor was the publisher, followed by William T. Lippe. The business side of the magazine was controlled by Matty Simmons, who was chairman of the board and CEO of Twenty First Century Communications, a publishing company. True Facts "True Facts" was a section near the front of the magazine which contained true but ridiculous items from real life. Together with the masthead, it was one of the few parts of the magazine that was factual. "True Facts" included photographs of unintentionally funny signage, extracts from ludicrous newspaper reports, strange headlines, and so on. For many years John Bendel was in charge of the "True Facts" section of the magazine. Steven Brykman edited the "True Facts" section of the National Lampoon website. Several "True Facts" compilation books were published in the 1980s and early 90s, and several all-True-Facts issues of the magazine were published during the 1980s. Foto Funnies Most issues of the magazine featured one or more "Foto Funnies", or fumetti: comic strips that use photographs instead of drawings as illustrations. The characters who appeared in the Lampoon's Foto Funnies were usually writers, editors, artists, photographers or contributing editors of the magazine, often cast alongside nude or semi-nude models. In 1980, a paperback compilation book, National Lampoon Foto Funnies, which appeared as part of National Lampoon Comics, was published. Funny Pages The "Funny Pages" was a large section at the back of the magazine that was composed entirely of comic strips of various kinds. These included work from a number of artists who also had pieces published in the main part of the magazine, including Gahan Wilson, Ed Subitzky and Vaughn Bode, as well as artists whose work was only published in this section.
The regular strips included "Dirty Duck" by Bobby London, "Trots and Bonnie" by Shary Flenniken, "The Appletons" and "Timberland Tales" by B. K. Taylor, "Politeness Man" by Ron Barrett, and many other strips. A compilation of Gahan Wilson's "Nuts" strip was published in 2011. The Funny Pages logo header art, which was positioned above Gahan Wilson's "Nuts" in each issue, and showed a comfortable, old-fashioned family reading newspaper-sized funny papers, was drawn by Mike Kaluta. Other merchandise From time to time, the magazine advertised Lampoon-related merchandise for sale, including specially designed T-shirts. Chronology The magazine existed from 1970 to 1998. Some consider its finest period to have been 1971 to 1975, although it continued to be produced on a monthly schedule throughout the 1970s and the early 1980s, and did well during that time. However, during the late 1980s, a much more serious decline set in. In 1986, upstart video distributor Vestron Inc. attempted a takeover, but the magazine's board rejected the offer. In 1989, the company that controlled the magazine and its related projects (which was part of "Twenty First Century Communications") was the subject of a hostile takeover by Daniel Grodnik, a Hollywood producer, and Tim Matheson, an actor who starred in the Lampoon's first big hit, Animal House. In 1990 it was sold outright to another company, "J2 Communications". At that point "National Lampoon" was considered valuable only as a brand name that could be licensed out to other companies. The magazine was issued erratically and rarely from 1991 onwards. 1998 saw the last issue. 1970 The first issue was April 1970; by November of that year, Michael C. Gross had become the art director. He achieved a unified, sophisticated, and integrated look for the magazine, which enhanced its humorous appeal.
The sixth issue, from September 1970, entitled "Show Biz", got the company in hot water with The Walt Disney Company, which threatened a lawsuit over the issue's cover: a drawing of Minnie Mouse topless, wearing pasties. 1973–1975 National Lampoon's most successful sales period was 1973–75. Its national circulation peaked at 1,000,096 copies sold of the October 1974 "Pubescence" issue. The 1974 monthly average was 830,000, which was also a peak. Former Lampoon editor Tony Hendra's book Going Too Far includes a series of precise circulation figures. It was also during this time that the National Lampoon: Lemmings stage show ran and The National Lampoon Radio Hour was broadcast, bringing interest and acclaim to the National Lampoon brand; magazine talent such as writer Michael O'Donoghue would go on to write for Saturday Night Live. The magazine was considered by many to be at its creative zenith during this time. Newsstand sales were, however, excellent for many other titles during that period: there were sales peaks for Mad (more than 2 million), Playboy (more than 7 million), and TV Guide (more than 19 million). 1975 Some fans consider the glory days of National Lampoon to have ended in 1975, although the magazine remained popular and profitable long after that point. During 1975, the three founders (Kenney, Beard, and Hoffman) took advantage of a buyout clause in their contracts for $7.5 million (although Kenney remained on the magazine's masthead as a senior editor until about 1976). About the same time, writers Michael O'Donoghue and Anne Beatts left to join the NBC comedy show Saturday Night Live (SNL). At the same time, the National Lampoon Show's John Belushi and Gilda Radner left the troupe to join the original septet of SNL's Not Ready for Primetime Players. The magazine was a springboard to the cinema of the United States for a generation of comedy writers, directors, and performers.
Various alumni went on to create and write for SNL, The David Letterman Show, SCTV, The Simpsons, Married... with Children, Night Court, and various films including National Lampoon's Animal House, Caddyshack, National Lampoon's Vacation, and Ghostbusters. As some of the original creators departed, the magazine remained popular and profitable with the emergence of John Hughes and editor-in-chief P.J. O'Rourke, along with artists and writers such as Gerry Sussman, Ellis Weiner, Tony Hendra, Ted Mann, Peter Kleinman, Chris Cluess, Stu Kreisman, John Weidman, Jeff Greenfield, Bruce McCall, and Rick Meyerowitz. 1985 In 1985, Matty Simmons (who had been working only on the business end of the Lampoon up to that point) took over as editor-in-chief. He fired the entire editorial staff, and appointed his two sons, Michael Simmons and Andy Simmons, as editors, Peter Kleinman as creative director and editor, and Larry "Ratso" Sloman as executive editor. The magazine was on an increasingly shaky financial footing, and beginning in November 1986, it was published six times a year instead of every month. 1989 On 29 December 1988, producer Daniel Grodnik and actor Tim Matheson (who played "Otter" in the 1978 film National Lampoon's Animal House) filed with the SEC, stating that their production company, Grodnik/Matheson Co., had acquired voting control of 21.3 percent of National Lampoon Inc. stock and wanted to gain management control. They were named to the company's board in January 1989, and eventually took control of the company by purchasing the ten-percent share of Simmons, who departed the company. Grodnik and Matheson became the co-chairmen/co-CEOs. During their tenure, the stock went up from under $2 to $6, and the magazine was able to double its monthly ad pages. The company moved its headquarters from New York to Los Angeles to focus on film and television. The publishing operation stayed in New York. Grodnik and Matheson sold the company in 1990.
1990 In 1990, the magazine (and more importantly, the rights to the brand name "National Lampoon") was bought by a company called J2 Communications (previously known for marketing Tim Conway's Dorf videos), headed by James P. Jimirro. J2 Communications' focus was to make money by licensing out the brand name "National Lampoon". The company was contractually obliged to publish at least one new issue of the magazine per year to retain the rights to the Lampoon name. However, it had very little interest in the magazine itself; throughout the 1990s, the number of issues per year declined precipitously and erratically. In 1991, an attempt at monthly publication was made; nine issues were produced that year. Only two issues were released in 1992. This was followed by one issue in 1993, five in 1994, and three in 1995. For the last three years of its existence, the magazine was published only once a year. 1998, last issue The magazine's final print publication was November 1998, after which the contract was renegotiated, and in a sharp reversal, J2 Communications was then prohibited from publishing issues of the magazine. J2, however, still owned the rights to the brand name, which it continued to franchise out to other users. In 2002, the use of the brand name and the rights to republish old material were sold to a new, and otherwise unrelated, company which chose to call itself National Lampoon, Incorporated. 2007, DVD-ROM In 2007, in association with Graphic Imaging Technology, Inc., National Lampoon, Inc. released a collection of all 246 issues of the magazine in PDF format, viewable with the Adobe Acrobat reader. The cover of the DVD box featured a remake of the January 1973 "Death" issue, with the caption altered to read "If You Don't Buy This DVD-ROM, We'll Kill This Dog". The pages are viewable on both Windows (starting with Windows 2000) and Macintosh (starting with OS X) systems.
Related media During its most active period, the magazine spun off numerous productions in a wide variety of media. National Lampoon released books, special issues, anthologies, and other print pieces, including: Special editions The Best of National Lampoon No. 1, 1971, an anthology The Breast of National Lampoon (a "Best of" No. 2), 1972, an anthology The Best of National Lampoon No. 3, 1973, an anthology, art directed by Michael Gross National Lampoon The Best of #4, 1973, an anthology, art directed by Gross The National Lampoon Encyclopedia of Humor, 1973, edited by Michael O'Donoghue and art directed by Gross. This publication featured the fake Volkswagen ad seen above, which was written by Anne Beatts. The spoof was listed in the contents page as "Doyle Dane Bernbach," the name of the advertising agency that had produced the iconic 1960s ad campaign for Volkswagen. According to Mark Simonson's "Very Large National Lampoon Site": "If you buy a copy of this issue, you may find the ad is missing. As a result of a lawsuit by VW over the ad for unauthorized use of their trademark, NatLamp was forced to remove the page (with razor blades!) from any copies they still had in inventory (which, from what I gather, was about half the first printing of 250,000 copies) and all subsequent reprints." National Lampoon Comics, an anthology, 1974, art directed by Gross and David Kaestle National Lampoon The Best of No. 5, 1974, an anthology, art directed by Gross and Kaestle National Lampoon 1964 High School Yearbook Parody, 1974, Edited by P.J. O'Rourke and Doug Kenney, art directed by Kaestle.
National Lampoon Presents The Very Large Book of Comical Funnies, 1975, edited by Sean Kelly National Lampoon The 199th Birthday Book, 1975, edited by Tony Hendra National Lampoon The Gentleman's Bathroom Companion, 1975 edited by Hendra, art directed by Peter Kleinman Official National Lampoon Bicentennial Calendar 1976, 1975, written and compiled by Christopher Cerf & Bill Effros National Lampoon Art Poster Book, 1975, Design direction by Peter Kleinman The Best of National Lampoon No. 6, 1976, an anthology National Lampoon The Iron On Book 1976, Original T-shirt designs, edited by Tony Hendra, art directed by Peter Kleinman. National Lampoon Songbook, 1976, edited by Sean Kelly, musical parodies in sheet music form National Lampoon The Naked and the Nude: Hollywood and Beyond, 1977, written by Brian McConnachie The Best of National Lampoon No. 7, 1977, an anthology
In any case, a subpoena would more likely than not override a contract of any sort; provisions restricting the transfer of data in violation of laws governing export control and national security; the term (in years) of the confidentiality, i.e. the time period of confidentiality; the term (in years) for which the agreement is binding; permission to obtain ex parte injunctive relief; a description of the actions that must be taken with the confidential materials when the agreement ends; the obligations of the recipient regarding the confidential information, typically including some version of obligations: to use the information only for enumerated purposes; to disclose it only to persons with a need to know the information for those purposes; to use appropriate efforts (not less than reasonable efforts) to keep the information secure. Reasonable efforts is often defined as a standard of care relating to confidential information that is no less rigorous than that which the recipient uses to keep its own similar information secure; and to ensure that anyone to whom the information is disclosed further abides by obligations restricting use, restricting disclosure, and ensuring security at least as protective as the agreement; and types of permissible disclosure – such as those required by law or court order (many NDAs require the receiving party to give the disclosing party prompt notice of any efforts to obtain such disclosure, and possibly to cooperate with any attempt by the disclosing party to seek judicial protection for the relevant confidential information). the law and jurisdiction governing the parties. The parties may choose exclusive jurisdiction of a court of a country. Australia Deeds of confidentiality and fidelity (also referred to as deeds of confidentiality or confidentiality deeds) are commonly used in Australia. These documents generally serve the same purpose as and contain provisions similar to non-disclosure agreements (NDAs) used elsewhere.
However, these documents are legally treated as deeds and are thus binding, unlike contracts, even in the absence of consideration. California In California (and some other U.S. states), there are some special circumstances relating to non-disclosure agreements and non-compete clauses. California's courts and legislature have signaled that they generally value an employee's mobility and entrepreneurship more highly than they do protectionist doctrine. India The use of NDAs is on the rise in India and is governed by the Indian Contract Act, 1872. Use of an NDA is crucial in many circumstances, such as to tie in employees who are developing patentable technology if the employer intends to apply for a patent. Non-disclosure agreements have become very important in light of India's burgeoning outsourcing industry. In India, an NDA must be stamped to be a valid enforceable document. United Kingdom In
Britain, in addition to use to protect trade secrets, NDAs are often used as a condition of a financial settlement in an attempt to
send e-mail, and eventually access the Internet. The economic theory of the network effect was advanced significantly between 1985 and 1995 by researchers Michael L. Katz, Carl Shapiro, Joseph Farrell, and Garth Saloner. Author and high-tech entrepreneur Rod Beckstrom presented a mathematical model for describing networks that are in a state of positive network effect at BlackHat and Defcon in 2009, and also presented the "inverse network effect" with an economic model for defining it. Because of the positive feedback often associated with the network effect, system dynamics can be used as a modelling method to describe the phenomenon. Word of mouth and the Bass diffusion model are also potentially applicable. The next major advance occurred between 2000 and 2003, when researchers Geoffrey G. Parker, Marshall Van Alstyne, Jean-Charles Rochet and Jean Tirole independently developed the two-sided market literature, showing how network externalities that cross distinct groups can lead to free pricing for one of those groups. Evidence and Consequences New research addresses an apparent paradox: the web is a source of continual innovation, and yet it appears increasingly dominated by a small number of dominant players - one of the most visible consequences of network effects. This research suggests the impact on economic diversity is due to a variety of network effects: the variety of online players worldwide is shrinking rapidly, despite the fact that the overall size of the worldwide web continues to expand and new categories of online services continue to appear. The research tackles this paradox by using large-scale longitudinal data sets from social media to measure the distribution of attention across the whole online economy over more than a decade, from 2006 until 2017.
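The Bass diffusion model mentioned above can be sketched in a few lines. The following is a minimal discrete-time simulation; the coefficient values are conventional illustrative magnitudes, not estimates from any study discussed here:

```python
# Minimal discrete-time Bass diffusion model.
# p: coefficient of innovation (adoption independent of existing adopters)
# q: coefficient of imitation (adoption driven by word of mouth, i.e. the
#    network-effect channel). Defaults are illustrative, not fitted to data.
def bass_adoption(p=0.03, q=0.38, steps=30):
    """Return the cumulative adoption fraction after each period."""
    F = 0.0          # cumulative fraction of the market that has adopted
    path = []
    for _ in range(steps):
        # New adopters this period: innovators plus imitators, both drawn
        # from the remaining (1 - F) share of the market.
        F += (p + q * F) * (1.0 - F)
        path.append(F)
    return path

curve = bass_adoption()
# The curve is S-shaped: slow early growth (few imitators yet), a steep
# middle phase as word of mouth compounds, then saturation near 1.0.
```

The imitation term q·F·(1−F) is what gives the model its network-effect flavour: each period's growth depends on how many users already exist.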
While the diversity of sources is in decline, there is a countervailing force of continually increasing functionality with new services, products and applications, such as music streaming services (Spotify), file sharing programs (Dropbox) and messaging platforms (Messenger, WhatsApp and Snapchat). Another major finding was the dramatic increase in the "infant mortality" rate of websites, with the dominant players in each functional niche, once established, guarding their turf more staunchly than ever. Economics Network economics refers to business economics that benefit from the network effect. This is when the value of a good or service increases when others buy the same good or service. Examples are websites such as eBay or iVillage, where the community comes together and shares thoughts to help the website become a better business organization. In sustainability, network economics refers to multiple professionals (architects, designers, or related businesses) all working together to develop sustainable products and technologies. The more companies are involved in environmentally friendly production, the easier and cheaper it becomes to produce new sustainable products. For instance, if no one produces sustainable products, it is difficult and expensive to design a sustainable house with custom materials and technology. But due to network economics, the more industries are involved in creating such products, the easier it is to design an environmentally sustainable building. Another benefit of network economics in a certain field is improvement that results from competition and networking within an industry. Adoption and competition Critical mass In the early phases of a network technology, incentives to adopt the new technology are low. After a certain number of people have adopted the technology, network effects become significant enough that adoption becomes a dominant strategy. This point is called critical mass.
At the critical mass point, the value obtained from the good or service is greater than or equal to the price paid for the good or service. When a product reaches critical mass, network effects will drive subsequent growth until a stable balance is reached. Therefore, a key business concern must then be how to attract users prior to reaching critical mass. Critical mass is closely related to consumer expectations, which will be affected by price and quality of products or services, the company's reputation and the growth path of the network. Thus, one way is to rely on extrinsic motivation, such as a payment, a fee waiver, or a request for friends to sign up. A more natural strategy is to build a system that has enough value without network effects, at least to early adopters. Then, as the number of users increases, the system becomes even more valuable and is able to attract a wider user base. Beyond critical mass, the increasing number of subscribers generally cannot continue indefinitely. After a certain point, most networks become either congested or saturated, stopping future uptake. Congestion occurs due to overuse. The applicable analogy is that of a telephone network. While the number of users is below the congestion point, each additional user adds additional value to every other customer. However, at some point, the addition of an extra user exceeds the capacity of the existing system. After this point, each additional user decreases the value obtained by every other user. In practical terms, each additional user increases the total system load, leading to busy signals, the inability to get a dial tone, and poor customer support. Assuming the congestion point is below the potential market size, the next critical point is where the value obtained again equals the price paid. The network will cease to grow at this point if system capacity is not improved. Peer-to-peer (P2P) systems are networks designed to distribute load among their user pool.
This theoretically allows P2P networks to scale indefinitely. The P2P based telephony service Skype benefits from this effect and its growth is limited primarily by market saturation. Market tipping Network effects give rise to the potential outcome of market tipping, defined as "the tendency of one system to pull away from its rivals in popularity once it has gained an initial edge". Tipping results in a market in which only one good or service dominates and competition is stifled. This is because network effects tend to incentivise users to coordinate their adoption of a single product. Therefore, tipping can result in a natural form of market concentration in markets that display network effects. However, the presence of network effects does not necessarily imply that a market will tip; the following additional conditions must be met: The utility derived by users from network effects must exceed the utility they derive from differentiation Users must have high costs of multihoming (i.e. adopting more than one competing network) Users must have high switching costs If any of these three conditions are not satisfied, the market may fail to tip and multiple products with significant market shares may coexist. One such example is the U.S. instant messaging market, which remained an oligopoly despite significant network effects. This can be attributed to the low multi-homing and switching costs faced by users. Market tipping does not imply permanent success in a given market. Competition can be reintroduced into the market due to shocks such as the development of new technologies. Additionally, if the price is raised above customers' willingness to pay, this may reverse market tipping. Multiple equilibria and expectations Network effects often result in multiple potential market equilibrium outcomes. The key determinant of which equilibrium manifests is the expectations of the market participants, which are self-fulfilling.
Because users are incentivised to coordinate their adoption, users will tend to adopt the product that they expect to draw the largest number of users. These expectations may be shaped by path dependence, such as a perceived first-mover advantage, which can result in lock-in. The most commonly cited example of path dependence is the QWERTY keyboard, which owes its ubiquity to its establishment of an early lead in the keyboard layout industry and high switching costs, rather than any inherent advantage over competitors. Other key influences on adoption expectations can be reputational (e.g. a firm that has previously produced high quality products may be favoured over a new firm). Markets with network effects may result in inefficient equilibrium outcomes. With simultaneous adoption, users may fail to coordinate towards a single agreed-upon product, resulting in splintering among different networks, or may coordinate to lock in to a different product than the one that is best for them. Technology lifecycle If some existing technology or company whose benefits are largely based on network effects starts to lose market share against a challenger such as a disruptive technology or open standards based competition, the benefits of network effects will reduce for the incumbent, and increase for the challenger. In this model, a tipping point is eventually reached at which the network effects of the challenger dominate those of the former incumbent, and the incumbent is forced into an accelerating decline, whilst the challenger takes over the incumbent's former position. Sony's Betamax and Victor Company of Japan (JVC)'s video home system (VHS) can both be used for video cassette recorders (VCR), but the two technologies are not compatible, so a cassette suitable for one type of VCR cannot be used in the other. VHS's technology
only one good or service dominates and competition is stifled. This is because network effects tend to incentivise users to coordinate their adoption of a single product. Therefore, tipping can result in a natural form of market concentration in markets that display network effects. However, the presence of network effects does not necessarily imply that a market will tip; the following additional conditions must be met: The utility derived by users from network effects must exceed the utility they derive from differentiation Users must have high costs of multihoming (i.e. adopting more than one competing networks) Users must have high switching costs If any of these three conditions are not satisfied, the market may fail to tip and multiple products with significant market shares may coexist. One such example is the U.S. instant messaging market, which remained an oligopoly despite significant network effects. This can be attributed to the low multi-homing and switching costs faced by users. Market tipping does not imply permanent success in a given market. Competition can be reintroduced into the market due to shocks such as the development of new technologies. Additionally, if the price is raised above customers' willingness to pay, this may reverse market tipping. Multiple equilibria and expectations Networks effects often result in multiple potential market equilibrium outcomes. The key determinant in which equilibrium will manifest are the expectations of the market participants, which are self-fulfilling. Because users are incentivised to coordinate their adoption, user will tend to adopt the product that they expect to draw the largest number of users. These expectations may be shaped by path dependence, such as a perceived first-mover advantage, which can result in lock-in. 
The most commonly cited example of path dependence is the QWERTY keyboard, which owes its ubiquity to its establishment of an early lead in the keyboard layout industry and high switching costs, rather than any inherent advantage over competitors. Other key influences on adoption expectations can be reputational (e.g. a firm that has previously produced high-quality products may be favoured over a new firm). Markets with network effects may result in inefficient equilibrium outcomes. With simultaneous adoption, users may fail to coordinate towards a single agreed-upon product, resulting in splintering among different networks, or may coordinate to lock in to a different product than the one that is best for them. Technology lifecycle If some existing technology or company whose benefits are largely based on network effects starts to lose market share against a challenger such as a disruptive technology or open-standards-based competition, the benefits of network effects will shrink for the incumbent and grow for the challenger. In this model, a tipping point is eventually reached at which the network effects of the challenger dominate those of the former incumbent, and the incumbent is forced into an accelerating decline, whilst the challenger takes over the incumbent's former position. Sony's Betamax and Victor Company of Japan (JVC)'s Video Home System (VHS) were competing formats for video cassette recorders (VCRs), but the two technologies were not compatible: a cassette of one format cannot be played in a machine built for the other. VHS's technology gradually surpassed Betamax in the competition. In the end, Betamax lost its original market share and was replaced by VHS. Negative network externalities Negative network externalities, in the mathematical sense, are those that have a negative effect compared to normal (positive) network effects.
Just as positive network externalities (network effects) cause positive feedback and exponential growth, negative network externalities create negative feedback and exponential decay. In nature, negative network externalities are the forces that pull towards equilibrium, are responsible for stability, and represent physical limitations keeping systems bounded. Negative network externalities typically manifest in four ways: more login retries, longer query times, longer download times, and more download attempts. In other words, congestion occurs when the efficiency of a network decreases as more people use it, and this reduces the value to people already using it. Traffic congestion that overloads a freeway and network congestion on connections with limited bandwidth both display negative network externalities. Braess's paradox suggests that adding paths through a network can have a negative effect on the performance of the network. Interoperability Interoperability has the effect of making the network bigger and thus increases the external value of the network to consumers. Interoperability achieves this primarily by increasing potential connections and secondarily by attracting new participants to the network. Other benefits of interoperability include reduced uncertainty, reduced lock-in, commoditization and competition based on price. Interoperability can be achieved through standardization or other cooperation. Companies involved in fostering interoperability face a tension between cooperating with their competitors to grow the potential market for products and competing for market share. Compatibility and incompatibility Product compatibility, which refers to the ability of two systems to operate together without modification, is closely related to network externalities in competition between companies.
Compatible products are characterized by better matching with customers' needs: customers can enjoy all the benefits of the network without having to purchase products from the same company. However, compatibility has costs as well: it intensifies competition between companies and deprives users who have already purchased products of their advantages, while proprietary networks may raise barriers to entry in the industry. Compared to large companies with better reputations or strength, weaker companies or small networks will be more inclined to choose compatible products. Moreover, the compatibility of products is conducive to increases in a company's market share. For example, the Windows system is famous for its operating compatibility, thereby satisfying consumers' demand for a diverse range of other applications. As the supplier of Windows systems, Microsoft benefits from indirect network effects, which drive the growth of the company's market share. Incompatibility is the opposite of compatibility: incompatibility of products aggravates market segmentation, reduces efficiency, harms consumer interests and intensifies competition. The result of the competition between incompatible networks depends on the sequence of adoption and the early preferences of the adopters. Effective competition determines the market share of companies, so history matters: the installed base directly brings more network profit and raises consumers' expectations, which has a positive impact on the smooth realization of subsequent network effects. Open versus closed standards In communication and information technologies, open standards and interfaces are often developed through the participation of multiple companies and are usually perceived to provide mutual benefit. But, in cases in which the relevant communication protocols or interfaces are closed standards, the network effect can give the company controlling those standards monopoly power.
The Microsoft corporation is widely seen by computer professionals as maintaining its monopoly through these means. One observed method Microsoft uses to put the network effect to its advantage is called Embrace, extend and extinguish. Mirabilis was an Israeli start-up which pioneered instant messaging (IM) and was bought by America Online. By giving away their ICQ product for free and preventing interoperability between their client software and other products, they were able to temporarily dominate the market for instant messaging. IM technology spread from the home to the workplace because of its speed and simplicity of use. Because of the network effect, new IM users gained much more value by choosing to use the Mirabilis system (and join its large network of users) than they would by using a competing system. As was typical for that era, the company never made any attempt to generate profits from its dominant position before selling the company. Examples The Telephone Network effects are the incremental benefit gained by each user for each new user that joins a network. An example of a direct network effect is the telephone. Originally, when only a small number of people owned telephones, the value a telephone provided was minimal. Not only did other people need to own telephones for it to be useful, but each telephone also had to be connected to the network through the user's home. As technology advanced, it became more affordable for people to own a telephone. This created more value and utility due to the increase in users. Eventually, increased usage through exponential growth led to the telephone being used by almost every household, adding more value to the network for all users. Without the network effect and technological advances, the telephone would have nowhere near the amount of value or utility that it has today. Financial exchanges Stock exchanges and derivatives exchanges feature a network effect.
Market liquidity is a major determinant of transaction cost in the sale or purchase of a security, as a bid–ask spread exists between the price at which a purchase can be made and the price at which the sale of the same security can be made. As the number of buyers and sellers in the exchange who share the same information increases, liquidity increases and transaction costs decrease. This then attracts a larger number of buyers and sellers to the exchange. The network advantage of financial exchanges is apparent in the difficulty that startup exchanges have in dislodging a dominant exchange. For example, the Chicago Board of Trade has retained overwhelming dominance of trading in US Treasury bond futures despite the startup of Eurex US trading of identical futures contracts. Similarly, the Chicago Mercantile Exchange has maintained dominance in trading of Eurodollar interest rate futures despite a challenge from Euronext.Liffe. Cryptocurrencies Cryptocurrencies such as Bitcoin also feature network effects. Bitcoin's unique properties make it an attractive asset to users and investors. The more users that join the network, the more valuable and secure it becomes. This creates an incentive for users to join, so that as the network and community grow, a network effect occurs, making it more likely that new people will also join. Bitcoin provides its users with financial value through the network effect, which may attract more investors due to the appeal of financial gain. This is an example of an indirect network effect, as the value only increases because the initial network was created. Software Widely used computer software benefits from powerful network effects. A characteristic of software purchasing is that it is easily influenced by the opinions of others, so the customer base of the software is the key to realizing a positive network effect.
Although customers' motivation for choosing software is related to the product itself, media interaction and word-of-mouth recommendations from existing customers can still increase the likelihood of the software being adopted by other customers who have not yet purchased it, thereby producing network effects. In 2007 Apple released the iPhone, followed by the App Store. Most iPhone apps rely heavily on the existence of strong network effects. This enables the software to grow in popularity very quickly and spread to a large userbase with very limited marketing needed. The freemium business model has evolved to take advantage of these network effects by releasing a free version that does not limit adoption by users, and then charging for premium features as the primary source of revenue. Furthermore, some software companies launch free trial versions to attract buyers and reduce their uncertainty. The duration of the free trial is related to the network effect: the more positive feedback the company receives, the shorter the free trial will be. Web sites Many web sites benefit from a network effect. One example is web marketplaces and exchanges. For example, eBay would not be a particularly useful site if auctions were not competitive. As the number of users grows on eBay, auctions grow more competitive, pushing up the prices of bids on items. This makes it more worthwhile to sell on eBay and brings more sellers onto eBay, which, in turn, drives prices down again due to increased supply. Increased supply brings even more buyers to eBay. Essentially, as the number of users of eBay grows, prices fall and supply increases, and more and more people find the site to be useful. Network effects were used as justification in business models by some of the dot-com companies in the late 1990s.
These firms operated under the belief that when a new market with strong network effects comes into being, firms should care more about growing their market share than about becoming profitable. The justification was that market share would determine which firm could set technical and marketing standards, giving these companies a first-mover advantage. Social networking websites are good examples. The more people who register on a social networking website, the more useful the website is to its registrants. Google uses the network effect in its advertising business with its Google
nucleon of the starting element. The fission of U235 by a slow neutron yields nearly identical energy to the fission of U238 by a fast neutron. This energy release profile holds true for thorium and the various minor actinides as well. By contrast, most chemical oxidation reactions (such as burning coal or TNT) release at most a few eV per event. So, nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel. The energy of nuclear fission is released as kinetic energy of the fission products and fragments, and as electromagnetic radiation in the form of gamma rays; in a nuclear reactor, the energy is converted to heat as the particles and gamma rays collide with the atoms that make up the reactor and its working fluid, usually water or occasionally heavy water or molten salts. When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium nucleus appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.79 MeV), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light, due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV). The fission reaction also releases ~7 MeV in prompt gamma ray photons. The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~ 6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). In an atomic bomb, this heat may serve to raise the temperature of the bomb core to 100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing radiation. 
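The prompt energy accounting above can be cross-checked with a few lines of arithmetic; the following sketch uses only the MeV figures quoted in this section:

```python
# Approximate prompt energy release per U-235 fission, in MeV,
# using the figures quoted above.
fragments_ke = 169.0   # kinetic energy of the two daughter nuclei
neutrons_ke = 4.8      # ~2.5 neutrons at ~2 MeV each
prompt_gammas = 7.0    # prompt gamma-ray photons

prompt_total = fragments_ke + neutrons_ke + prompt_gammas
total_fission_energy = 202.79  # total mean fission energy for U-235

print(f"prompt total: {prompt_total:.1f} MeV")                        # ~181 MeV
print(f"gamma fraction: {prompt_gammas / total_fission_energy:.1%}")  # ~3.5%
print(f"neutron fraction: {neutrons_ke / total_fission_energy:.1%}")  # ~2.4%
```

The sum reproduces the ~181 MeV prompt total and the ~3.5% gamma and sub-2.5% fast-neutron fractions stated in the text.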
However, in nuclear reactors, the fission fragment kinetic energy remains as low-temperature heat, which itself causes little or no ionization. So-called neutron bombs (enhanced radiation weapons) have been constructed which release a larger fraction of their energy as ionizing radiation (specifically, neutrons), but these are all thermonuclear devices which rely on the nuclear fusion stage to produce the extra radiation. The energy dynamics of pure fission bombs always remain at about 6% yield of the total in radiation, as a prompt result of fission. The total prompt fission energy amounts to about 181 MeV, or ~ 89% of the total energy which is eventually released by fission over time. The remaining ~ 11% is released in beta decays which have various half-lives, but begin as a process in the fission products immediately; and in delayed gamma emissions associated with these beta decays. For example, in uranium-235 this delayed energy is divided into about 6.5 MeV in betas, 8.8 MeV in antineutrinos (released at the same time as the betas), and finally, an additional 6.3 MeV in delayed gamma emission from the excited beta-decay products (for a mean total of ~10 gamma ray emissions per fission, in all). Thus, about 6.5% of the total energy of fission is released some time after the event, as non-prompt or delayed ionizing radiation, and the delayed ionizing energy is about evenly divided between gamma and beta ray energy. In a reactor that has been operating for some time, the radioactive fission products will have built up to steady state concentrations such that their rate of decay is equal to their rate of formation, so that their fractional total contribution to reactor heat (via beta decay) is the same as these radioisotopic fractional contributions to the energy of fission. 
Under these conditions, the 6.5% of fission which appears as delayed ionizing radiation (delayed gammas and betas from radioactive fission products) contributes to the steady-state reactor heat production under power. It is this output fraction which remains when the reactor is suddenly shut down (undergoes scram). For this reason, the reactor decay heat output begins at 6.5% of the full reactor steady-state fission power, once the reactor is shut down. However, within hours, due to decay of these isotopes, the decay power output is far less. See decay heat for detail. The remainder of the delayed energy (8.8 MeV/202.5 MeV = 4.3% of total fission energy) is emitted as antineutrinos, which, as a practical matter, are not considered "ionizing radiation". The reason is that energy released as antineutrinos is not captured by the reactor material as heat, and escapes directly through all materials (including the Earth) at nearly the speed of light, and into interplanetary space (the amount absorbed is minuscule). Neutrino radiation is ordinarily not classed as ionizing radiation, because it is almost entirely not absorbed and therefore does not produce effects (although the very rare neutrino event is ionizing). Almost all of the rest of the radiation (6.5% delayed beta and gamma radiation) is eventually converted to heat in a reactor core or its shielding. Some processes involving neutrons are notable for absorbing or finally yielding energy; for example, neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned.
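The post-shutdown decline in decay heat described above is often approximated with the empirical Way–Wigner relation. The sketch below uses that relation (an assumption of this illustration, not a formula given in the article) to show how a fraction of roughly the quoted 6.5% falls to about 1% within an hour:

```python
def decay_heat_fraction(t_s, t_op_s):
    """Way-Wigner approximation: ratio of decay heat to the prior
    steady-state fission power, t_s seconds after shutdown of a
    reactor that operated for t_op_s seconds."""
    return 0.0622 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

# Reactor that ran for about a year before shutdown:
t_op = 3.15e7
print(decay_heat_fraction(1.0, t_op))      # ~6% right after shutdown
print(decay_heat_fraction(3600.0, t_op))   # ~1% after one hour
print(decay_heat_fraction(86400.0, t_op))  # well under 1% after a day
```

This is a coarse engineering approximation; it is consistent with the article's statement that decay heat starts near 6.5% of steady-state power and is far smaller within hours.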
On the other hand, so-called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons are captured without producing fissions, they produce heat as well. Product nuclei and binding energy In fission there is a preference to yield fragments with even proton numbers, which is called the odd-even effect on the fragments' charge distribution. However, no odd-even effect is observed on fragment mass number distribution. This result is attributed to nucleon pair breaking. In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission to equal mass nuclei of about mass 120; the most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 u and the other the remaining 130 to 140 u. 
Unequal fissions are energetically more favorable because this allows one product to be closer to the energetic minimum near mass 60 u (only a quarter of the average fissionable mass), while the other nucleus, with mass 135 u, is still not far out of the range of the most tightly bound nuclei (another statement of this is that the atomic binding energy curve is slightly steeper to the left of mass 120 u than to the right of it). Origin of the active energy and the curve of binding energy Nuclear fission of heavy elements produces exploitable energy because the specific binding energy (binding energy per mass) of intermediate-mass nuclei with atomic numbers and atomic masses close to 62Ni and 56Fe is greater than the nucleon-specific binding energy of very heavy nuclei, so that energy is released when heavy nuclei are broken apart. The total rest mass of the fission products (Mp) from a single reaction is less than the mass of the original fuel nucleus (M). The excess mass Δm = M − Mp is the invariant mass of the energy that is released as photons (gamma rays) and kinetic energy of the fission fragments, according to the mass–energy equivalence formula E = Δmc². The variation in specific binding energy with atomic number is due to the interplay of the two fundamental forces acting on the component nucleons (protons and neutrons) that make up the nucleus. Nuclei are bound by an attractive nuclear force between nucleons, which overcomes the electrostatic repulsion between protons. However, the nuclear force acts only over relatively short ranges (a few nucleon diameters), since it follows an exponentially decaying Yukawa potential which makes it insignificant at longer distances. The electrostatic repulsion is of longer range, since it decays by an inverse-square rule, so that nuclei larger than about 12 nucleons in diameter reach a point at which the total electrostatic repulsion overcomes the nuclear force and causes them to be spontaneously unstable.
For the same reason, larger nuclei (more than about eight nucleons in diameter) are less tightly bound per unit mass than are smaller nuclei; breaking a large nucleus into two or more intermediate-sized nuclei releases energy. Also because of the short range of the strong binding force, large stable nuclei must contain proportionally more neutrons than do the lightest elements, which are most stable with a 1 to 1 ratio of protons and neutrons. Nuclei which have more than 20 protons cannot be stable unless they have more than an equal number of neutrons. Extra neutrons stabilize heavy elements because they add to strong-force binding (which acts between all nucleons) without adding to proton–proton repulsion. Fission products have, on average, about the same ratio of neutrons and protons as their parent nucleus, and are therefore usually unstable to beta decay (which changes neutrons to protons) because they have proportionally too many neutrons compared to stable isotopes of similar mass. This tendency for fission product nuclei to undergo beta decay is the fundamental cause of the problem of radioactive high-level waste from nuclear reactors. Fission products tend to be beta emitters, emitting fast-moving electrons to conserve electric charge, as excess neutrons convert to protons in the fission-product atoms. See Fission products (by element) for a description of fission products sorted by element. Chain reactions Several heavy elements, such as uranium, thorium, and plutonium, undergo both spontaneous fission, a form of radioactive decay, and induced fission, a form of nuclear reaction. Elemental isotopes that undergo induced fission when struck by a free neutron are called fissionable; isotopes that undergo fission when struck by a slow-moving thermal neutron are also called fissile.
A few particularly fissile and readily obtainable isotopes (notably 233U, 235U and 239Pu) are called nuclear fuels because they can sustain a chain reaction and can be obtained in large enough quantities to be useful. All fissionable and fissile isotopes undergo a small amount of spontaneous fission which releases a few free neutrons into any sample of nuclear fuel. Such neutrons would escape rapidly from the fuel and become free neutrons, with a mean lifetime of about 15 minutes before decaying to protons and beta particles. However, neutrons almost invariably impact and are absorbed by other nuclei in the vicinity long before this happens (newly created fission neutrons move at about 7% of the speed of light, and even moderated neutrons move at about 8 times the speed of sound). Some neutrons will impact fuel nuclei and induce further fissions, releasing yet more neutrons. If enough nuclear fuel is assembled in one place, or if the escaping neutrons are sufficiently contained, then these freshly emitted neutrons outnumber the neutrons that escape from the assembly, and a sustained nuclear chain reaction will take place. An assembly that supports a sustained nuclear chain reaction is called a critical assembly or, if the assembly is almost entirely made of a nuclear fuel, a critical mass. The word "critical" refers to a cusp in the behavior of the differential equation that governs the number of free neutrons present in the fuel: if less than a critical mass is present, then the number of neutrons is determined by radioactive decay, but if a critical mass or more is present, then the number of neutrons is controlled instead by the physics of the chain reaction. The actual mass of a critical mass of nuclear fuel depends strongly on the geometry and surrounding materials. Not all fissionable isotopes can sustain a chain reaction.
For example, 238U, the most abundant form of uranium, is fissionable but not fissile: it undergoes induced fission when impacted by an energetic neutron with over 1 MeV of kinetic energy. However, too few of the neutrons produced by 238U fission are energetic enough to induce further fissions in 238U, so no chain reaction is possible with this isotope. Instead, bombarding 238U with slow neutrons causes it to absorb them (becoming 239U) and decay by beta emission to 239Np, which then decays again by the same process to 239Pu; that process is used to manufacture 239Pu in breeder reactors. In-situ plutonium production also contributes to the neutron chain reaction in other types of reactors after sufficient plutonium-239 has been produced, since plutonium-239 is also a fissile element which serves as fuel. It is estimated that up to half of the power produced by a standard "non-breeder" reactor is produced by the fission of plutonium-239 produced in place, over the total life-cycle of a fuel load. Fissionable, non-fissile isotopes can be used as a fission energy source even without a chain reaction. Bombarding 238U with fast neutrons induces fissions, releasing energy as long as the external neutron source is present. This is an important effect in all reactors where fast neutrons from the fissile isotope can cause the fission of nearby 238U nuclei, which means that some small part of the 238U is "burned-up" in all nuclear fuels, especially in fast breeder reactors that operate with higher-energy neutrons. That same fast-fission effect is used to augment the energy released by modern thermonuclear weapons, by jacketing the weapon with 238U to react with neutrons released by nuclear fusion at the center of the device. But the explosive effects of nuclear fission chain reactions can be reduced by using substances like moderators which slow down the speed of secondary neutrons. Fission reactors Critical fission reactors are the most common type of nuclear reactor.
In a critical fission reactor, neutrons produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors. Such devices use radioactive decay or particle accelerators to trigger fissions. Critical fission reactors are built for three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat or the neutrons produced by the fission chain reaction: power reactors are intended to produce heat for nuclear power, either as part of a generating station or a local power system such as a nuclear submarine. research reactors are intended to produce neutrons and/or activate radioactive sources for scientific, medical, engineering, or other research purposes. breeder reactors are intended to produce nuclear fuels in bulk from more abundant isotopes. The better known fast breeder reactor makes 239Pu (a nuclear fuel) from the naturally very abundant 238U (not a nuclear fuel). Thermal breeder reactors previously tested using 232Th to breed the fissile isotope 233U (thorium fuel cycle) continue to be studied and developed. While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several early counter-examples, such as the Hanford N reactor, now decommissioned). Power reactors generally convert the kinetic energy of fission products into heat, which is used to heat a working fluid and drive a heat engine that generates mechanical or electrical power. The working fluid is usually water with a steam turbine, but some designs use other materials such as gaseous helium. Research reactors produce neutrons that are used in various ways, with the heat of fission being treated as an unavoidable waste product. 
Breeder reactors are a specialized form of research reactor, with the caveat that the sample being irradiated is usually the fuel itself, a mixture of 238U and 235U. For a more detailed description of the physics and operating principles of critical fission reactors, see nuclear reactor physics. For a description of their social, political, and environmental aspects, see nuclear power. Fission bombs One class of nuclear weapon, a fission bomb (not to be confused with the fusion bomb), otherwise known as an atomic bomb or atom bomb, is a fission reactor designed to liberate as much energy as possible as rapidly as possible, before the released energy causes the reactor to explode (and the chain reaction to stop). Development of nuclear weapons was the motivation behind early research into nuclear fission: the Manhattan Project during World War II (September 1, 1939 – September 2, 1945) carried out most of the early scientific work on fission chain reactions, culminating in the three events involving fission bombs that occurred during the war. The first fission bomb, codenamed "The Gadget", was detonated during the Trinity Test in the desert of New Mexico on July 16, 1945. Two other fission bombs, codenamed "Little Boy" and "Fat Man", were used in combat against the Japanese cities of Hiroshima and Nagasaki on August 6 and 9 (respectively) of 1945. Even the first fission bombs were thousands of times more explosive than a comparable mass of chemical explosive. For example, Little Boy weighed a total of about four tons (of which 60 kg was nuclear fuel) and was long; it also yielded an explosion equivalent to about 15 kilotons of TNT, destroying a large part of the city of Hiroshima.
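The "thousands of times" comparison can be sanity-checked from the figures just quoted. This is a rough sketch: it compares the whole four-ton bomb, not just the fissile fuel, against an equal mass of TNT:

```python
yield_tnt_tons = 15_000  # Little Boy yield, tons of TNT equivalent
bomb_mass_tons = 4       # approximate total mass of the bomb
fuel_mass_kg = 60        # mass of the nuclear fuel inside it

# Explosive output relative to an equal mass of chemical explosive:
ratio_vs_bomb_mass = yield_tnt_tons / bomb_mass_tons
print(ratio_vs_bomb_mass)  # 3750.0 -> "thousands of times"

# Relative to the fuel alone, the ratio is far larger still:
ratio_vs_fuel = yield_tnt_tons * 1000 / fuel_mass_kg
print(ratio_vs_fuel)       # 250000.0
```

Even the conservative whole-bomb comparison lands in the "thousands of times" range the text claims.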
Modern nuclear weapons (which include a thermonuclear fusion stage as well as one or more fission stages) are hundreds of times more energetic for their weight than the first pure fission atomic bombs (see nuclear weapon yield), so that a modern single missile warhead bomb weighing less than 1/8 as much as Little Boy (see for example W88) has a yield of 475 kilotons of TNT, and could bring destruction to about 10 times the city area. While the fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a controlled nuclear reactor, the two types of device must be engineered quite differently (see nuclear reactor physics). A nuclear bomb is designed to release all its energy at once, while a reactor is designed to generate a steady supply of useful power. While overheating of a reactor can lead to, and has led to, meltdown and steam explosions, the much lower uranium enrichment makes it impossible for a nuclear reactor to explode with the same destructive power as a nuclear weapon. It is also difficult to extract useful power from a nuclear bomb, although at least one rocket propulsion system, Project Orion, was intended to work by exploding fission bombs behind a massively padded and shielded spacecraft. The strategic importance of nuclear weapons is a major
capture, is a result of the attractive nuclear force acting between the neutron and nucleus. It is enough to deform the nucleus into a double-lobed "drop", to the point that nuclear fragments exceed the distances at which the nuclear force can hold two groups of charged nucleons together and, when this happens, the two fragments complete their separation and then are driven further apart by their mutually repulsive charges, in a process which becomes irreversible with greater and greater distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these isotopes require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons). The liquid drop model of the atomic nucleus predicts equal-sized fission products as an outcome of nuclear deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the more energetically favorable outcome, in which one fission product is slightly smaller than the other. A theory of fission based on the shell model has been formulated by Maria Goeppert Mayer. The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z = 1), to as large a fragment as argon (Z = 18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). 
The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.

Energetics

Input

The fission of a heavy nucleus requires a total input energy of about 7 to 8 million electron volts (MeV) to initially overcome the nuclear force which holds the nucleus in a spherical or nearly spherical shape, and from there deform it into a two-lobed ("peanut") shape in which the lobes are able to continue to separate from each other, pushed by their mutual positive charge, in the most common process of binary fission (two positively charged fission products + neutrons). Once the nuclear lobes have been pushed to a critical distance, beyond which the short-range strong force can no longer hold them together, the process of their separation proceeds from the energy of the (longer-range) electromagnetic repulsion between the fragments. The result is two fission fragments moving away from each other at high energy. About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than one MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when U-238 absorbs slow and even some fraction of fast neutrons, to become U-239. The remaining energy to initiate fission can be supplied by two other mechanisms: one of these is additional kinetic energy of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic energy of one MeV or more (so-called fast neutrons). Such high-energy neutrons are able to fission U-238 directly (see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion).
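The capture-energy bookkeeping above can be made concrete with a small numeric sketch. The fission-barrier and neutron-binding values below are assumed, textbook-level round figures (they do not appear in the text), but they illustrate why the ~6 MeV released by simply binding a neutron suffices for U-235 and falls short for U-238:

```python
# Assumed, textbook-level round figures (MeV); not taken from the text.
# "Barrier" is the fission barrier of the compound nucleus formed on
# capture; "binding" is the energy released by capturing the neutron.
fission_barrier = {"U-235 + n": 6.2, "U-238 + n": 6.3}
neutron_binding = {"U-235 + n": 6.5, "U-238 + n": 4.8}

for system, barrier in fission_barrier.items():
    deficit = barrier - neutron_binding[system]
    if deficit <= 0:
        print(f"{system}: fissile (capture alone supplies enough energy)")
    else:
        print(f"{system}: needs ~{deficit:.1f} MeV of neutron kinetic energy")
```

With these rough inputs, U-235 fissions on capture of even a slow neutron, while U-238 is left roughly 1.5 MeV short, consistent with its near-zero fission cross section below one MeV.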
However, fast-neutron fission of U-238 cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission U-238 (fission neutrons have a mean energy of 2 MeV, but a most probable energy of only 0.75 MeV, so a large fraction of them carry less energy than U-238 fission requires). Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such as U-235 with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the same element with an even number of neutrons (such as U-238 with 146 neutrons). This extra binding energy is made available by the mechanism of neutron pairing effects, which in turn results from the Pauli exclusion principle: the extra neutron can occupy the same nuclear orbital as the last neutron in the nucleus, so that the two form a pair. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in moderated nuclear reactors, and the latter are used in fast-neutron reactors and in weapons). As noted above, the subgroup of fissionable elements that may be fissioned efficiently with their own fission neutrons (thus potentially causing a nuclear chain reaction in relatively small amounts of the pure material) are termed "fissile". Examples of fissile isotopes are uranium-235 and plutonium-239.

Output

Typical fission events release about two hundred million electron volts (200 MeV) of energy, equivalent to a temperature of roughly 2 trillion kelvin, for each fission event. The exact isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of energy released.
This can be easily seen by examining the curve of binding energy (image below), and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope in the actinide range of mass, roughly 0.9 MeV is released per nucleon of the starting element. The fission of U-235 by a slow neutron yields nearly identical energy to the fission of U-238 by a fast neutron. This energy release profile holds true for thorium and the various minor actinides as well. By contrast, most chemical oxidation reactions (such as burning coal or TNT) release at most a few eV per event, so nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel. The energy of nuclear fission is released as kinetic energy of the fission products and fragments, and as electromagnetic radiation in the form of gamma rays; in a nuclear reactor, the energy is converted to heat as the particles and gamma rays collide with the atoms that make up the reactor and its working fluid, usually water or occasionally heavy water or molten salts. When a uranium nucleus fissions into two daughter nuclei, about 0.1 percent of the mass of the uranium nucleus appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.79 MeV), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV). The fission reaction also releases ~7 MeV in prompt gamma ray photons.
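As a quick check of this arithmetic (the 7.6 and 8.5 MeV per nucleon are read off the binding-energy curve as stated above; the per-molecule TNT energy is an assumed round figure for illustration only):

```python
# Back-of-envelope check of the fission energy figures quoted above.
BE_ACTINIDE = 7.6   # MeV per nucleon, near uranium on the binding-energy curve
BE_PRODUCTS = 8.5   # MeV per nucleon, near the fission-product peak

released_per_nucleon = BE_PRODUCTS - BE_ACTINIDE    # ~0.9 MeV per nucleon
nucleons = 236                                      # U-235 plus the absorbed neutron
total = released_per_nucleon * nucleons             # ~212 MeV, i.e. "about 200 MeV"

# Compare with a chemical explosive: assume ~5 eV released per TNT
# molecule of ~227 u (an illustrative round figure, not from the text).
fission_ev_per_u = total * 1e6 / nucleons           # ~9e5 eV per atomic mass unit
chemical_ev_per_u = 5.0 / 227                       # ~0.02 eV per atomic mass unit
ratio = fission_ev_per_u / chemical_ev_per_u        # tens of millions
```

The ratio comes out around forty million, comfortably supporting the "at least ten million times more usable energy per unit mass" claim.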
These ~7 MeV of prompt gamma rays mean that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). In an atomic bomb, this heat may serve to raise the temperature of the bomb core to 100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing radiation. However, in nuclear reactors, the fission fragment kinetic energy remains as low-temperature heat, which itself causes little or no ionization. So-called neutron bombs (enhanced radiation weapons) have been constructed which release a larger fraction of their energy as ionizing radiation (specifically, neutrons), but these are all thermonuclear devices which rely on the nuclear fusion stage to produce the extra radiation. Pure fission bombs always release about 6% of their total yield as prompt radiation. The total prompt fission energy amounts to about 181 MeV, or ~89% of the total energy which is eventually released by fission over time. The remaining ~11% is released in beta decays which have various half-lives, but begin as a process in the fission products immediately, and in delayed gamma emissions associated with these beta decays. For example, in uranium-235 this delayed energy is divided into about 6.5 MeV in betas, 8.8 MeV in antineutrinos (released at the same time as the betas), and finally, an additional 6.3 MeV in delayed gamma emission from the excited beta-decay products (for a mean total of ~10 gamma ray emissions per fission, in all).
Thus, about 6.5% of the total energy of fission is released some time after the event, as non-prompt or delayed ionizing radiation, and the delayed ionizing energy is about evenly divided between gamma and beta ray energy. In a reactor that has been operating for some time, the radioactive fission products will have built up to steady state concentrations such that their rate of decay is equal to their rate of formation, so that their fractional total contribution to reactor heat (via beta decay) is the same as these radioisotopic fractional contributions to the energy of fission. Under these conditions, the 6.5% of fission which appears as delayed ionizing radiation (delayed gammas and betas from radioactive fission products) contributes to the steady-state reactor heat production under power. It is this output fraction which remains when the reactor is suddenly shut down (undergoes scram). For this reason, the reactor decay heat output begins at 6.5% of the full reactor steady state fission power, once the reactor is shut down. However, within hours, due to decay of these isotopes, the decay power output is far less. See decay heat for detail. The remainder of the delayed energy (8.8 MeV/202.5 MeV = 4.3% of total fission energy) is emitted as antineutrinos, which as a practical matter, are not considered "ionizing radiation". The reason is that energy released as antineutrinos is not captured by the reactor material as heat, and escapes directly through all materials (including the Earth) at nearly the speed of light, and into interplanetary space (the amount absorbed is minuscule). Neutrino radiation is ordinarily not classed as ionizing radiation, because it is almost entirely not absorbed and therefore does not produce effects (although the very rare neutrino event is ionizing). Almost all of the rest of the radiation (6.5% delayed beta and gamma radiation) is eventually converted to heat in a reactor core or its shielding. 
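The percentages quoted in the last few paragraphs all follow from the U-235 energy budget already given; a short tally (using only values from the text) confirms they are mutually consistent:

```python
# U-235 fission energy budget, in MeV, using the figures from the text.
prompt = {
    "fragment kinetic energy": 169.0,
    "prompt neutrons": 4.8,
    "prompt gamma rays": 7.0,
}
delayed = {
    "betas": 6.5,
    "antineutrinos": 8.8,
    "delayed gammas": 6.3,
}

total = sum(prompt.values()) + sum(delayed.values())  # ~202.4, vs. 202.79 quoted
prompt_total = sum(prompt.values())                   # ~181 MeV, ~89% of total

gamma_frac = prompt["prompt gamma rays"] / total                # ~3.5%
neutron_frac = prompt["prompt neutrons"] / total                # ~2.4%
antineutrino_frac = delayed["antineutrinos"] / total            # ~4.3%
decay_heat_frac = (delayed["betas"]
                   + delayed["delayed gammas"]) / total         # ~6.3%
```

The beta-plus-delayed-gamma fraction of ~6.3% matches, within the rounding of the inputs, the ~6.5% of full steady-state fission power that appears as decay heat at the moment of shutdown.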
Some processes involving neutrons are notable for absorbing or finally yielding energy — for example neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons are captured without producing fissions, they produce heat as well.

Product nuclei and binding energy

In fission there is a preference to yield fragments with even proton numbers, which is called the odd-even effect on the fragments' charge distribution. However, no odd-even effect is observed on fragment mass number distribution. This result is attributed to nucleon pair breaking.
In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission to equal-mass nuclei of about mass 120; rather, the most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 u and the other the remaining 130 to 140 u. Unequal fissions are energetically more favorable because this allows one product to be closer to the energetic minimum near mass 60
Polish-Jewish and other Eastern European Jewish origins. His great-grandfather emigrated from Antwerp, Belgium, to the UK before 1914 and his grandfather eventually settled in the south of England in the Hampshire city of Portsmouth and established a chain of grocery stores. Gaiman's grandfather changed his original family name of Chaiman to Gaiman. His father, David Bernard Gaiman, worked in the same chain of stores; his mother, Sheila Gaiman (née Goldman), was a pharmacist. He has two younger sisters, Claire and Lizzy. After living for a period in the nearby town of Portchester, Hampshire, where Neil was born in 1960, the Gaimans moved in 1965 to the West Sussex town of East Grinstead, where his parents studied Dianetics at the Scientology centre in the town; one of Gaiman's sisters works for the Church of Scientology in Los Angeles. His other sister, Lizzy Calcioli, has said, "Most of our social activities were involved with Scientology or our Jewish family. It would get very confusing when people would ask my religion as a kid. I'd say, 'I'm a Jewish Scientologist.'" Gaiman says that he is not a Scientologist, and that like Judaism, Scientology is his family's religion. About his personal views, Gaiman has stated, "I think we can say that God exists in the DC Universe. I would not stand up and beat the drum for the existence of God in this universe. I don't know, I think there's probably a 50/50 chance. It doesn't really matter to me." Gaiman was able to read at the age of four. He said, "I was a reader. I loved reading. Reading things gave me pleasure. I was very good at most subjects in school, not because I had any particular aptitude in them, but because normally on the first day of school they'd hand out schoolbooks, and I'd read them—which would mean that I'd know what was coming up, because I'd read it."
When he was about ten years old, he read his way through the works of Dennis Wheatley, of which The Ka of Gifford Hillary and The Haunting of Toby Jugg made a particular impact on him. Another work that made a deep impression on him was J. R. R. Tolkien's The Lord of the Rings, which he found in his school library. Although the library only had the first two of the novel's three volumes, Gaiman consistently checked them out and read them. He later won the school English prize and the school reading prize, enabling him to finally acquire the third volume. For his seventh birthday, Gaiman received C. S. Lewis's The Chronicles of Narnia series. He later recalled that "I admired his use of parenthetical statements to the reader, where he would just talk to you ... I'd think, 'Oh, my gosh, that is so cool! I want to do that! When I become an author, I want to be able to do things in parentheses.' I liked the power of putting things in brackets." Narnia also introduced him to literary awards, specifically the 1956 Carnegie Medal won by the concluding volume. When Gaiman won the 2010 Medal himself, the press reported him recalling, "it had to be the most important literary award there ever was" and observing, "if you can make yourself aged seven happy, you're really doing well – it's like writing a letter to yourself aged seven." Lewis Carroll's Alice's Adventures in Wonderland was another childhood favourite, and "a favourite forever. Alice was default reading to the point where I knew it by heart." He also enjoyed Batman comics as a child. Gaiman was educated at several Church of England schools, including Fonthill School in East Grinstead, Ardingly College (1970–1974), and Whitgift School in Croydon (1974–1977). His father's position as a public relations official of the Church of Scientology was the cause of the seven-year-old Gaiman being forced to withdraw from Fonthill School and remain at the school he had previously been attending.
He lived in East Grinstead for many years, from 1965 to 1980 and again from 1984 to 1987. He met his first wife, Mary McGrath, while she was studying Scientology and living in a house in East Grinstead that was owned by his father. The couple were married in 1985 after having their first child, Michael.

Career

Journalism, early writings, and literary influences

Writers that Gaiman has mentioned as significant influences include C. S. Lewis, J. R. R. Tolkien, Lewis Carroll, Mary Shelley, Rudyard Kipling, Edgar Allan Poe, Michael Moorcock, Dave Sim, Alan Moore, Steve Ditko, Will Eisner, Ursula K. Le Guin, Harlan Ellison, Lord Dunsany, and G. K. Chesterton. A lifetime fan of the Monty Python comedy troupe, as a teenager he owned a copy of Monty Python's Big Red Book. During a trip to France when he was 13, Gaiman became fascinated with the visually fantastic world in the stories of Metal Hurlant, even though he could not understand the words. When he was 19–20 years old, he contacted his favourite science fiction writer, R. A. Lafferty, whom he had discovered when he was nine, and asked for advice on becoming an author, along with a Lafferty pastiche he had written. The writer sent Gaiman an encouraging and informative letter back, along with literary advice. Gaiman has said Roger Zelazny was the author who influenced him the most, an influence particularly seen in Gaiman's literary style and the topics he writes about. Other authors Gaiman says "furnished the inside of my mind and set me to writing" include Moorcock, Ellison, Samuel R. Delany, Angela Carter, Lafferty, and Le Guin. Gaiman has also taken inspiration from the folk-tale tradition, citing Otta F. Swire's book on the legends of the Isle of Skye as his inspiration for The Truth Is a Cave in the Black Mountains.
In the early 1980s, Gaiman pursued journalism, conducting interviews and writing book reviews, as a means to learn about the world and to make connections that he hoped would later assist him in getting published. He wrote and reviewed extensively for the British Fantasy Society. His first professional short story publication was "Featherquest", a fantasy story, in Imagine Magazine in May 1984. While waiting for a train at London's Victoria Station in 1984, Gaiman noticed a copy of Swamp Thing written by Alan Moore, and carefully read it. Moore's fresh and vigorous approach to comics had such an impact on Gaiman that he later wrote "that was the final straw, what was left of my resistance crumbled. I proceeded to make regular and frequent visits to London's Forbidden Planet shop to buy comics". In 1984, he wrote his first book, a biography of the band Duran Duran, as well as Ghastly Beyond Belief, a book of quotations, with Kim Newman. Even though Gaiman thought he had done a terrible job, the book's first edition sold out very quickly. When he went to relinquish his rights to the book, he discovered the publisher had gone bankrupt. After this, he was offered a job by Penthouse, which he refused. He also wrote interviews and articles for many British magazines, including Knave. During this period he sometimes wrote under pseudonyms, including Gerry Musgrave, Richard Grey, and "a couple of house names". Gaiman has said he ended his journalism career in 1987 because British newspapers regularly published untruths as fact. In the late 1980s, he wrote Don't Panic: The Official Hitchhiker's Guide to the Galaxy Companion in what he calls a "classic English humour" style. Following this, he wrote the opening of what became his collaboration with fellow English author Terry Pratchett on the comic novel Good Omens, about the impending apocalypse.
Comics

After forming a friendship with comic-book writer Alan Moore, Gaiman started writing comic books, picking up Miracleman after Moore finished his run on the series. Gaiman and artist Mark Buckingham collaborated on several issues of the series before its publisher, Eclipse Comics, collapsed, leaving the series unfinished. His first published comic strips were four short Future Shocks for 2000 AD in 1986–87. He wrote three graphic novels with his favourite collaborator and long-time friend Dave McKean: Violent Cases, Signal to Noise, and The Tragical Comedy or Comical Tragedy of Mr. Punch. Impressed with his work, DC Comics hired him in February 1987, and he wrote the limited series Black Orchid. Karen Berger, who later became head of DC Comics's Vertigo imprint, read Black Orchid and offered Gaiman a job: to re-write an old character, The Sandman, but to put his own spin on him. The Sandman tells the tale of the ageless, anthropomorphic personification of Dream that is known by many names, including Morpheus. The series began in January 1989 and concluded in March 1996. In the eighth issue of The Sandman, Gaiman and artist Mike Dringenberg introduced Death, the older sister of Dream, who became as popular as the series' title character. The limited series Death: The High Cost of Living launched DC's Vertigo line in 1993. The 75 issues of the regular series, along with an illustrated prose text and a special containing seven short stories, have been collected into 12 volumes that remain in print, 14 if the Death: The High Cost of Living and Death: The Time of Your Life spin-offs are included. Artists include Sam Kieth, Mike Dringenberg, Jill Thompson, Shawn McManus, Marc Hempel and Michael Zulli, with lettering by Todd Klein, colours by Daniel Vozzo, and covers by Dave McKean. The series became one of DC's top selling titles, eclipsing even Batman and Superman.
Comics historian Les Daniels called Gaiman's work "astonishing" and noted that The Sandman was "a mixture of fantasy, horror, and ironic humor such as comic books had never seen before". DC Comics writer and executive Paul Levitz observed that "The Sandman became the first extraordinary success as a series of graphic novel collections, reaching out and converting new readers to the medium, particularly young women on college campuses, and making Gaiman himself into an iconic cultural figure." Gaiman and Jamie Delano were to become co-writers of the Swamp Thing series following Rick Veitch, but an editorial decision by DC to censor Veitch's final storyline caused both Gaiman and Delano to withdraw from the title. Gaiman produced two stories for DC's Secret Origins series in 1989: a Poison Ivy tale drawn by Mark Buckingham and a Riddler story illustrated by Bernie Mireault and Matt Wagner. A story that Gaiman originally wrote for Action Comics Weekly in 1989 was shelved due to editorial concerns but was finally published in 2000 as Green Lantern/Superman: Legend of the Green Flame. In 1990, Gaiman wrote The Books of Magic, a four-part mini-series that provided a tour of the mythological and magical parts of the DC Universe through a frame story about an English teenager who discovers that he is destined to be the world's greatest wizard. The miniseries was popular, and led to an ongoing series written by John Ney Rieber. Gaiman's adaptation of Sweeney Todd, illustrated by Michael Zulli for Stephen R. Bissette's publication Taboo, was stopped when the anthology itself was discontinued. In the mid-1990s, he also created a number of new characters and a setting that was to be featured in a title published by Tekno Comix. The concepts were then altered and split between three titles set in the same continuity (Lady Justice, Mr. Hero the Newmatic Man, and Teknophage) plus tie-ins.
Although Gaiman's name appeared prominently as creator of the characters, he was not involved in writing any of the above-mentioned books. Gaiman wrote a semi-autobiographical story about a boy's fascination with Michael Moorcock's anti-hero Elric of Melniboné for Ed Kramer's anthology Tales of the White Wolf. In 1996, Gaiman and Ed Kramer co-edited The Sandman: Book of Dreams. Nominated for the British Fantasy Award, the original fiction anthology featured stories and contributions by Tori Amos, Clive Barker, Gene Wolfe, Tad Williams, and others. Asked why he likes comics more than other forms of storytelling, Gaiman said: Gaiman wrote two series for Marvel Comics. Marvel 1602 was an eight-issue limited series published from November 2003 to June 2004 with art by Andy Kubert and Richard Isanove. The Eternals was a seven-issue limited series drawn by John Romita Jr., which was published from August 2006 to March 2007. In 2009, Gaiman wrote a two-part Batman story for DC Comics to follow Batman R.I.P. titled "Whatever Happened to the Caped Crusader?" a play-off of the classic Superman story "Whatever Happened to the Man of Tomorrow?" by Alan Moore. He contributed a twelve-part Metamorpho serial drawn by Mike Allred for Wednesday Comics, a weekly newspaper-style series. Gaiman and Paul Cornell co-wrote Action Comics #894 (December 2010), which featured an appearance by Death. In October 2013, DC Comics released The Sandman: Overture with art by J. H. Williams III. Gaiman's Angela character was introduced into the Marvel Universe in the last issue of the Age of Ultron miniseries in 2013. Gaiman oversaw The Sandman Universe, a line of comic books published by Vertigo. The four series — House of Whispers, Lucifer, The Books of Magic, and The Dreaming — were written by new creative teams. The line launched on 8 August 2018. 
Novels

Gaiman's first novel, Good Omens, a collaboration with Terry Pratchett (best known for his series of Discworld novels), was published in 1990. In 2011 Pratchett said that while the entire novel was a collaborative effort and most of the ideas could be credited to both of them, he had done a larger portion of the writing and editing, if for no other reason than Gaiman's scheduled involvement with Sandman. The 1996 novelisation of Gaiman's teleplay for the BBC mini-series Neverwhere was his first solo novel. The novel was released in tandem with the television series, though it presents some notable differences from it. Gaiman has since revised the novel twice: the first time for an American audience unfamiliar with the London Underground, the second time because he felt unsatisfied with the originals. His fantasy novel Stardust was first published in 1999, both as a standard novel and in an illustrated text edition. The novel was highly influenced by Victorian fairy tales and culture. American Gods became one of Gaiman's best-selling and multi-award-winning novels upon its release in 2001. A special 10th Anniversary edition was released, with the "author's preferred text" 12,000 words longer than the original mass-market editions. Gaiman has not written a direct sequel to American Gods but he has revisited the characters: a glimpse of Shadow's travels in Europe comes in a short story set in Scotland, which applies the concepts developed in American Gods to the story of Beowulf. The 2005 novel Anansi Boys deals with Anansi ('Mr. Nancy'), tracing the relationship of his two sons, one semi-divine and the other an unassuming Englishman, as they explore their common heritage. It debuted at number one on The New York Times Best Seller list. In late 2008, Gaiman released a new children's book, The Graveyard Book.
It follows the adventures of a boy named Bod after his family is murdered and he is left to be brought up by a graveyard. It is heavily influenced by Rudyard Kipling's The Jungle Book. The book spent fifteen weeks on The New York Times children's bestseller list. In 2013, The Ocean at the End of the Lane was voted Book of the Year in the British National Book Awards. The novel follows an unnamed man who returns to his hometown for a funeral and remembers events that began forty
giving over one million signatures.

Filmography

Personal life

Gaiman has lived near Menomonie, Wisconsin, since 1992. Gaiman moved there to be close to the family of his then-wife, Mary McGrath, with whom he has three children. Gaiman also resides in Cambridge, Massachusetts. In 2014, he took up a five-year appointment as professor in the arts at Bard College, in Annandale-on-Hudson, New York. Gaiman is married to songwriter and performer Amanda Palmer, with whom he has an open marriage. The couple announced that they were dating in June 2009, and announced their engagement on Twitter on 1 January 2010. On 16 November 2010, Palmer hosted a non-legally binding flash mob wedding for Gaiman's birthday in New Orleans. They were legally married on 2 January 2011. The wedding took place in the parlour of writers Ayelet Waldman and Michael Chabon. On marrying Palmer, he took her middle name, MacKinnon, as one of his names. In September 2015, they had a son. In May 2020, he travelled from New Zealand to his holiday home on the Isle of Skye, breaking lockdown rules imposed during the COVID-19 pandemic. Ross, Skye and Lochaber MP Ian Blackford described his behaviour as unacceptable and dangerous. Gaiman published an apology on his website, saying he had endangered the local community.

Advocacy

In 2016, Gaiman, along with Cate Blanchett, Chiwetel Ejiofor, Peter Capaldi, Douglas Booth, Jesse Eisenberg, Keira Knightley, Juliet Stevenson, Kit Harington, and Stanley Tucci, appeared in the video "What They Took With Them", from the United Nations' refugee agency UNHCR, to help raise awareness of the issue of global refugees. Gaiman is a supporter of the Comic Book Legal Defense Fund and has served on its board of directors. In 2013, Gaiman was named co-chair of the organization's newly formed advisory board.
Friendship with Tori Amos

One of Gaiman's most commented-upon friendships is with the musician Tori Amos, a Sandman fan who became friends with Gaiman after making a reference to "Neil and the Dream King" on her 1991 demo tape. He included her in turn as a character (a talking tree) in his novel Stardust. Amos also mentions Gaiman in her songs, "Tear in Your Hand" ("If you need me, me and Neil'll be hangin' out with the dream king. Neil says hi by the way"), "Space Dog" ("Where's Neil when you need him?"), "Horses" ("But will you find me if Neil makes me a tree?"), "Carbon" ("Get me Neil on the line, no I can't hold. Have him read, 'Snow, Glass, Apples' where nothing is what it seems"), "Sweet Dreams" ("You're forgetting to fly, darling, when you sleep"), and "Not Dying Today" ("Neil is thrilled he can claim he's mammalian, 'but the bad news,' he said, 'girl you're a dandelion'"). He also wrote stories for the tour books of Boys for Pele and Scarlet's Walk, a letter for the tour book of American Doll Posse, and the stories behind each girl in her album Strange Little Girls. Amos penned the introduction for his collection Death: The High Cost of Living, and posed for the cover. She also wrote a song called "Sister Named Desire" based on his Sandman character, which was included on the tribute album Where's Neil When You Need Him?. Gaiman is godfather to Tori Amos's daughter Tash, and wrote a poem called "Blueberry Girl" for Tori and Tash. The poem was turned into a book by the illustrator Charles Vess. Gaiman read the poem aloud to an audience at the Sundance Kabuki Theater in San Francisco on 5 October 2008 during his book reading tour for The Graveyard Book. It was published in March 2009 with the title Blueberry Girl.

Litigation

In 1993, Gaiman was contracted by Todd McFarlane to write a single issue of Spawn, a popular title at the newly created Image Comics company.
McFarlane was promoting his new title by having guest authors Gaiman, Alan Moore, Frank Miller, and Dave Sim each write a single issue. In issue No. 9 of the series, Gaiman introduced the characters Angela, Cogliostro, and Medieval Spawn. Prior to this issue, Spawn was an assassin who worked for the government and came back as a reluctant agent of Hell but had no real direction in his actions. In Angela, a cruel and malicious angel, Gaiman introduced a character who threatened Spawn's existence, as well as providing a moral opposite. Cogliostro was introduced as a mentor character for exposition and instruction, providing guidance. Medieval Spawn introduced a history and precedent that not all Spawns were self-serving or evil, giving additional character development to Malebolgia, the demon that creates Hellspawn. As intended, all three characters were used repeatedly throughout the next decade by Todd McFarlane within the wider Spawn universe. In papers filed by Gaiman in early 2002, however, he claimed that the characters were jointly owned by their scripter (himself) and artist (McFarlane), not merely by McFarlane in his role as the creator of the series. Disagreement over who owned the rights to a character was the primary motivation for McFarlane and other artists to form Image Comics (although that argument related more towards disagreements between writers and artists as character creators). As McFarlane used the characters without Gaiman's permission or royalty payments, Gaiman believed his copyrighted work was being infringed upon, which violated their original oral agreement. McFarlane initially agreed that Gaiman had not signed away any rights to the characters, and negotiated with Gaiman to effectively 'swap' McFarlane's interest in the character Marvelman. McFarlane had purchased interest in the character when Eclipse Comics was liquidated while Gaiman was interested in being able to continue his aborted run of the Marvelman title. 
McFarlane later changed his initial position, claiming that Gaiman's work had only been work-for-hire and that McFarlane owned all of Gaiman's creations entirely. The presiding judge, however, ruled against their agreement being work for hire, based in large part on the legal requirement that "copyright assignments must be in writing." The Seventh Circuit Court of Appeals upheld the district court ruling in February 2004, granting joint ownership of the characters to Gaiman and McFarlane. On the specific issue of Cogliostro, the Seventh Circuit opinion stated, "The expressive work that is the comic-book character Count Nicholas Cogliostro was the joint work of Gaiman and McFarlane—their contributions strike us as quite equal—and both are entitled to ownership of the copyright". Similar analysis led to similar results for the other two characters, Angela and Medieval Spawn. This legal battle was brought by Gaiman and the specially formed Marvels and Miracles, LLC, which Gaiman had previously created to help sort out the legal rights surrounding Marvelman. Gaiman had written Marvel 1602 in 2003 to help fund this project, and all of Gaiman's profits from the original issues of the series were donated to Marvels and Miracles. Marvelman was eventually purchased by Marvel Comics in 2009. Gaiman returned to court over the Spawn characters Dark Ages Spawn, Domina and Tiffany, claiming that they were "derivative of the three he co-created with McFarlane." The judge ruled that Gaiman was right in these claims as well and gave McFarlane until the beginning of September 2010 to settle the matter.
Audiobooks
The Sandman (narrated by Neil Gaiman), Audible Originals, 2021
Stardust (read by Neil Gaiman), HarperAudio, 2013
Norse Mythology (read by Neil Gaiman), HarperAudio, 2018
Literary allusions
Gaiman's work is known for a high degree of allusiveness. Dr.
Meredith Collins, for instance, has commented upon the degree to which his novel Stardust depends on allusions to Victorian fairy tales and culture. Particularly in The Sandman, literary figures and characters appear often; the character of Fiddler's Green is modelled visually on G. K. Chesterton, both William Shakespeare and Geoffrey Chaucer appear as characters, as do several characters from within A Midsummer Night's Dream and The Tempest. The comic also draws from numerous mythologies and historical periods. Analyzing Gaiman's The Graveyard Book, bibliographer and librarian Richard Bleiler detects patterns of and allusions to the Gothic novel, from Horace Walpole's The Castle of Otranto to Shirley Jackson's The Haunting of Hill House. He concludes that Gaiman is "utilizing works, characters, themes, and settings that generations of scholars have identified and classified as Gothic, ... [yet] subverts them and develops the novel by focusing on the positive aspects of maturation, concentrating on the values of learning, friendship, and sacrifice." Regarding another work's assumed connection and allusions to this form, Gaiman himself quipped: "I've never been able to figure out whether Sandman is a gothic." Clay Smith has argued that this sort of allusiveness serves to situate Gaiman as a strong authorial presence in his own works, often to the exclusion of his collaborators. However, Smith's viewpoint is in the minority: to many, if there is a problem with Gaiman scholarship and intertextuality it is that "... his literary merit and vast popularity have propelled him into the nascent comics canon so quickly that there is not yet a basis of critical scholarship about his work." David Rudd takes a more generous view in his study of the novel Coraline, where he argues that the work plays and riffs productively on Sigmund Freud's notion of the Uncanny, or the Unheimlich. 
Though Gaiman's work is frequently seen as exemplifying the monomyth structure laid out in Joseph Campbell's The Hero with a Thousand Faces, Gaiman says that he started reading The Hero with a Thousand Faces but refused to finish it: "I think I got about half way through The Hero with a Thousand Faces and found myself thinking if this is true – I don't want to know. I really would rather not know this stuff. I'd rather do it because it's true and because I accidentally wind up creating something that falls into this pattern than be told what the pattern is."
Selected awards and honours
From 1991 to 1993, Gaiman won Harvey Awards in the following categories:
1991 Best Writer for The Sandman
1992 Best Writer for The Sandman
1993 Best Continuing or Limited Series for The Sandman
From 1991 to 2014, Gaiman won Locus Awards in the following categories:
1991 Best Fantasy Novel (runner-up) for Good Omens by Gaiman and Terry Pratchett
1999 Best Fantasy Novel (runner-up) for Stardust
2002 Best Fantasy Novel for American Gods
2003 Best Young Adult Book for Coraline
2004 Best Novelette for "A Study in Emerald"
2005 Best Short Story for "Forbidden Brides of the Faceless Slaves in the Nameless House of the Night of Dread Desire"
2006 Best Fantasy Novel for Anansi Boys. The book was also nominated for a Hugo Award, but Gaiman asked for it to be withdrawn from the list, stating that he wanted to give other writers a chance and that it was really more fantasy than science fiction.
2006 Best Short Story for "Sunbird"
2007 Best Short Story for "How to Talk to Girls at Parties"
2007 Best Collection for Fragile Things
2009 Best Young Adult Novel for The Graveyard Book
2010 Best Short Story for "An Invocation of Incuriosity", published in Songs of the Dying Earth
2011 Best Short Story for "The Thing About Cassandra", published in Songs of Love and Death
2011 Best Novelette for "The Truth Is A Cave In The Black Mountains", published in Stories
2014 Best Fantasy Novel for The Ocean at the End of the Lane
From 1991 to 2009, Gaiman won Eisner Awards in the following categories:
1991 Best Continuing Series: Sandman, by Neil Gaiman and various artists (DC)
1991 Best Graphic Album–Reprint: Sandman: The Doll's House by Neil Gaiman and various artists (DC)
1991 Best Writer: Neil Gaiman, Sandman (DC)
1992 Best Single Issue or Story: Sandman #22-#28: "Season of Mists," by Neil Gaiman and various artists (DC)
1992 Best Continuing Series: Sandman, by Neil Gaiman and various artists (DC)
1992 Best Writer: Neil Gaiman, Sandman, Books of Magic (DC), Miracleman (Eclipse)
1993 Best Continuing Series: Sandman by Neil Gaiman and various artists (DC)
1993 Best Graphic Album–New: Signal to Noise by Neil Gaiman and Dave McKean (VG Graphics/Dark Horse)
1993 Best Writer: Neil Gaiman, Miracleman (Eclipse); Sandman (DC)
1994 Best Writer: Neil Gaiman, Sandman (DC/Vertigo); Death: The High Cost of Living (DC/Vertigo)
2000 Best Comics-Related Book: The Sandman: The Dream Hunters, by Neil Gaiman and Yoshitaka Amano (DC/Vertigo)
2004 Best Short Story: "Death," by Neil Gaiman and P. Craig Russell, in The Sandman: Endless Nights (Vertigo/DC)
2004 Best Anthology: The Sandman: Endless Nights, by Neil Gaiman and others, edited by Karen Berger and Shelly Bond (Vertigo/DC)
2007 Best Archival Collection/Project–Comic Books: Absolute Sandman, vol. 1, by Neil Gaiman and various (Vertigo/DC)
2009 Best Publication for Teens/Tweens: Coraline, by Neil Gaiman, adapted by P.
Craig Russell (HarperCollins Children's Books)
In 1991, Gaiman received an Inkpot Award at the San Diego Comic-Con International.
From 2000 to 2004, Gaiman won Bram Stoker Awards in the following categories:
2000 Best Illustrated Narrative for The Sandman: The Dream Hunters
2001 Best Novel for American Gods
2003 Best Work for Young Readers for Coraline
2004 Best Illustrated Narrative for The Sandman: Endless Nights
From 2002 to 2020, Gaiman won Hugo Awards in the following categories:
2002 Best Novel for American Gods
2003 Best Novella for Coraline
2004 Best Short Story for "A Study in Emerald" (in a ceremony the author presided over himself, having volunteered for the job before his story was nominated)
2009 Best Novel for The Graveyard Book, presented at the 2009 Worldcon in Montreal, where he was also the Professional Guest of Honor
2012 Best Dramatic Presentation (Short Form) for "The Doctor's Wife"
2016 Best Graphic Story for The Sandman: Overture
2020 Best Dramatic Presentation, Long Form, for Good Omens
From 2002 to 2003, Gaiman won Nebula Awards in the following categories:
2002 Best Novel for American Gods
2003 Best Novella for Coraline
From 2006 to 2010, Gaiman won British Fantasy Awards in the following categories:
2006 Best Novel for Anansi Boys
2007 British Fantasy Award, collection, for Fragile Things
2009 British Fantasy Award for Best Novel shortlist for The Graveyard Book
2010 British Fantasy Award, comic/graphic novel, Whatever Happened to the Caped Crusader?, by Gaiman and Andy Kubert
In 2010, Gaiman won Shirley Jackson Awards in the following categories:
2010 Best Novelette for "The Truth Is A Cave In The Black Mountains"
2010 Best Edited Anthology for Stories: All New Tales, edited by Neil Gaiman and Al Sarrantonio (William Morrow)
1991 World Fantasy Award for short fiction for the Sandman issue, "A Midsummer Night's Dream", by Gaiman and Charles Vess
1991–1993 Comics Buyer's Guide Award for Favorite Writer
1997–2000 Comics Buyer's Guide Award for
Favorite Writer nominations
1997 Comic Book Legal Defense Fund Defender of Liberty award
1999 Mythopoeic Fantasy Award for Adult Literature for the illustrated version of Stardust
2003 British Science Fiction Association Award, short fiction, for Coraline
2004 Angoulême International Comics Festival Prize for Scenario for The Sandman: Season of Mists
2005 The William Shatner Golden Groundhog Award for Best Underground Movie, nomination for MirrorMask. The other nominated films were Green Street Hooligans, Nine Lives, Up for Grabs, and Opie Gets Laid.
2005 Quill Book Award for Graphic Novels for Marvel 1602
2006 Mythopoeic Fantasy Award for Adult Literature for Anansi Boys
2007 Bob Clampett Humanitarian Award
2007 Comic-Con Icon award, presented at the Scream Awards
2009 Newbery Medal for The Graveyard Book
2009 Audie Award: Children's 8–12 and Audiobook of the Year for the audio version of The Graveyard Book
2009 The Booktrust Teenage Prize for The Graveyard Book
2010 Gaiman was selected as the Honorary Chair of National Library Week by the American Library Association.
2010 Carnegie Medal for The Graveyard Book, becoming the first author to have won both the Carnegie and Newbery Medals for the same work
2011 Ray Bradbury Award for Outstanding Dramatic Presentation (with Richard Clark) for The Doctor's Wife
2012 Honorary Doctorate of Arts from the University of the Arts
2013 National Book Awards (British), Book of the Year winner for The Ocean at the End of the Lane
2016 University of St Andrews Honorary degree of Doctor of Letters
2018 Nomination for the New Academy Prize in Literature
2019 Barnes & Noble Writers for Writers Award, "celebrat[ing] authors who have given generously to other writers or to the broader literary community." Gaiman was given the award "for advocating for freedom of expression worldwide and inspiring countless writers."
2020 Best Adaptation from Another Medium: Neil Gaiman's Snow, Glass,
They might appear in a whirlwind. Such encounters could be dangerous, bringing dumbness, besotted infatuation, madness or stroke to the unfortunate man. When parents believed their child to be nereid-struck, they would pray to Saint Artemidos. Nymphs and fairies Nymphs are often depicted in classic works across art, literature, mythology, and fiction. They are often associated with the medieval romances or Renaissance literature of the elusive fairies or elves. Sleeping nymph A motif that entered European art during the Renaissance was the idea of a statue of a nymph sleeping in a grotto or spring. This motif supposedly came from an Italian report of a Roman sculpture of a nymph at a fountain above the River Danube. The report, and an accompanying poem supposedly on the fountain describing the sleeping nymph, are now generally concluded to be a fifteenth-century forgery, but the motif proved influential among artists and landscape gardeners for several centuries after, with copies seen at neoclassical gardens such as the grotto at Stourhead. List All the names for various classes of nymphs have plural feminine adjectives, most agreeing with the substantive numbers and groups of nymphai. There is no single adopted classification that could be seen as canonical and exhaustive. Some classes of nymphs tend to overlap, which complicates the task of precise classification, e.g. dryads and hamadryads as nymphs of trees generally, meliai as nymphs of ash trees. By dwelling or affinity The following is not the authentic Greek classification, but is intended as a guide: By location The following is a list of individual nymphs or groups thereof associated with this or that particular location. Nymphs in such groups could belong to any of the classes mentioned above (Naiades, Oreades, and so on). Others The following is a selection of names of the nymphs whose class was not specified in the source texts.
For lists of Naiads, Oceanids, Dryades etc., see the respective articles. In non-Greek tales influenced by Greek mythology Sabrina (the river Severn) Tágides (Tagus River) Modern use In modern usage, "nymph" is used in two senses different from the original Greek meaning. "Nymph" can be used to describe an attractive, sexually mature young woman. For example, the title of the Perry Mason novel "The Case of the Negligent Nymph" refers to such a young woman who, in the book's plot, suddenly swims to Mason's canoe. The term can have pejorative connotations regarding the sexual behavior of such women, and derived from it is the term "nymphomania", referring to female hypersexuality. In biology, "nymph" describes an immature form of an insect that does not change greatly as it grows, e.g. a dragonfly, mayfly, or locust. Gallery See also Animism Apsaras Kami Houri List of Greek mythological figures Nymphaeum Pitsa panels Rå Yakshini Xian Notes References Grimal,
were also spirits invariably bound to places, not unlike the Latin genius loci, and sometimes this produced complicated myths like the cult of Arethusa to Sicily. In some of the works of the Greek-educated Latin poets, the nymphs gradually absorbed into their ranks the indigenous Italian divinities of springs and streams (Juturna, Egeria, Carmentis, Fontus) while the Lymphae (originally Lumpae), Italian water goddesses, owing to the accidental similarity of their names, could be identified with the Greek Nymphae. The classical mythologies of the Roman poets were unlikely to have affected the rites and cults of individual nymphs venerated by country people in the springs and clefts of Latium. Among the Roman literate class, their sphere of influence was restricted and they appear almost exclusively as divinities of the watery element. Greek folk religion The ancient Greek belief in nymphs survived in many parts of the country into the early years of the twentieth century when they were usually known as "nereids". Nymphs often tended to frequent areas distant from humans but could be encountered by lone travelers outside the village, where their music might be heard, and the traveler could spy on their dancing or bathing in a stream or pool, either during the noon heat or in the middle of the night.
Shetland and Orkney, off the north coast of mainland Scotland, and in Caithness
Old East Norse, the eastern dialect of Old Norse, spoken in Denmark, Sweden and areas under their influence
Location
Norse, Texas, a ghost town founded by Nordic pioneers
Nordic countries
Scandinavia
Companies
Norse Atlantic Airways, a Norwegian airline
Norse Projects, a Danish clothing brand
Sport
Luther College Norse, the intercollegiate athletic program of Luther College
Mesabi Range Norse, the intercollegiate athletic
a medieval North Germanic ethnolinguistic group ancestral to modern Scandinavians, defined as speakers of Old Norse from about the 9th to the 13th centuries. Norse may also refer to:
Culture and religion
Norse mythology
Norse paganism
Norse art
Norse activity in the British Isles
Vikings
Language
Proto-Norse language, the Germanic language predecessor of Old Norse
Old Norse, a North Germanic language spoken in Scandinavia and areas under Scandinavian influence from c. 800 AD to c. 1300
commanding what is virtuous [honesta] and forbidding the contrary.'" Fortescue cited the great Italian Leonardo Bruni for his statement that "virtue alone produces happiness." Christopher St. Germain's The Doctor and Student was a classic of English jurisprudence, and it was thoroughly annotated by Thomas Jefferson. St. Germain informs his readers that English lawyers generally don't use the phrase "law of nature," but rather use "reason" as the preferred synonym. Norman Doe notes that St. Germain's view "is essentially Thomist," quoting Thomas Aquinas's definition of law as "an ordinance of reason made for the common good by him who has charge of the community, and promulgated." Sir Edward Coke was the preeminent jurist of his time. Coke's preeminence extended across the ocean: "For the American revolutionary leaders, 'law' meant Sir Edward Coke's custom and right reason." Coke defined law as "perfect reason, which commands those things that are proper and necessary and which prohibits contrary things." For Coke, human nature determined the purpose of law; and law was superior to any one person's reason or will. Coke's discussion of natural law appears in his report of Calvin's Case (1608): "The law of nature is that which God at the time of creation of the nature of man infused into his heart, for his preservation and direction." In this case the judges found that "the ligeance or faith of the subject is due unto the King by the law of nature: secondly, that the law of nature is part of the law of England: thirdly, that the law of nature was before any judicial or municipal law: fourthly, that the law of nature is immutable." To support these findings, the assembled judges (as reported by Coke, who was one of them) cited as authorities Aristotle, Cicero, and the Apostle Paul; as well as Bracton, Fortescue, and St. Germain. After Coke, the most famous common law jurist of the seventeenth century is Sir Matthew Hale. 
Hale wrote a treatise on natural law that circulated among English lawyers in the eighteenth century and survives in three manuscript copies. This natural-law treatise has been published as Of the Law of Nature (2015). Hale's definition of the natural law reads: "It is the Law of Almighty God given by him to Man with his Nature discovering the morall good and moral evill of Moral Actions, commanding the former, and forbidding the latter by the secret voice or dictate of his implanted nature, his reason, and his concience." He viewed natural law as antecedent, preparatory, and subsequent to civil government, and stated that human law "cannot forbid what the Law of Nature injoins, nor Command what the Law of Nature prohibits." He cited as authorities Plato, Aristotle, Cicero, Seneca, Epictetus, and the Apostle Paul. He was critical of Hobbes's reduction of natural law to self-preservation and Hobbes's account of the state of nature, but drew positively on Hugo Grotius's De jure belli ac pacis, Francisco Suárez's Tractatus de legibus ac deo legislatore, and John Selden's De jure naturali et gentium juxta disciplinam Ebraeorum. As early as the thirteenth century, it was held that "the law of nature...is the ground of all laws" and by the Chancellor and Judges that "it is required by the law of nature that every person, before he can be punish'd, ought to be present; and if absent by contumacy, he ought to be summoned and make default." Further, in 1824, we find it held that "proceedings in our Courts are founded upon the law of England, and that law is again founded upon the law of nature and the revealed law of God. If the right sought to be enforced is inconsistent with either of these, the English municipal courts cannot recognize it." Hobbes By the 17th century, the medieval teleological view came under intense criticism from some quarters. 
Thomas Hobbes instead founded a contractarian theory of legal positivism on what all men could agree upon: what they sought (happiness) was subject to contention, but a broad consensus could form around what they feared (violent death at the hands of another). The natural law was how a rational human being, seeking to survive and prosper, would act. Natural law, therefore, was discovered by considering humankind's natural rights, whereas previously it could be said that natural rights were discovered by considering the natural law. In Hobbes' opinion, the only way natural law could prevail was for men to submit to the commands of the sovereign. Because the ultimate source of law now comes from the sovereign, and the sovereign's decisions need not be grounded in morality, legal positivism is born. Jeremy Bentham's modifications on legal positivism further developed the theory. As used by Thomas Hobbes in his treatises Leviathan and De Cive, natural law is "a precept, or general rule, found out by reason, by which a man is forbidden to do that which is destructive of his life, or takes away the means of preserving the same; and to omit that by which he thinks it may best be preserved." According to Hobbes, there are nineteen Laws. The first two are expounded in chapter XIV of Leviathan ("of the first and second natural laws; and of contracts"); the others in chapter XV ("of other laws of nature"). The first law of nature is that every man ought to endeavour peace, as far as he has hope of obtaining it; and when he cannot obtain it, that he may seek and use all helps and advantages of war. The second law of nature is that a man be willing, when others are so too, as far forth, as for peace, and defence of himself he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himself. The third law is that men perform their covenants made. 
In this law of nature consisteth the fountain and original of justice... when a covenant is made, then to break it is unjust and the definition of injustice is no other than the not performance of covenant. And whatsoever is not unjust is just. The fourth law is that a man which receiveth benefit from another of mere grace, endeavour that he which giveth it, have no reasonable cause to repent him of his good will. Breach of this law is called ingratitude. The fifth law is complaisance: that every man strive to accommodate himself to the rest. The observers of this law may be called sociable; the contrary, stubborn, insociable, forward, intractable. The sixth law is that upon caution of the future time, a man ought to pardon the offences past of them that repenting, desire it. The seventh law is that in revenges, men look not at the greatness of the evil past, but the greatness of the good to follow. The eighth law is that no man by deed, word, countenance, or gesture, declare hatred or contempt of another. The breach of which law is commonly called contumely. The ninth law is that every man acknowledge another for his equal by nature. The breach of this precept is pride. The tenth law is that at the entrance into the conditions of peace, no man require to reserve to himself any right, which he is not content should be reserved to every one of the rest. The breach of this precept is arrogance, and observers of the precept are called modest. The eleventh law is that if a man be trusted to judge between man and man, that he deal equally between them. The twelfth law is that such things as cannot be divided, be enjoyed in common, if it can be; and if the quantity of the thing permit, without stint; otherwise proportionably to the number of them that have right. The thirteenth law is the entire right, or else...the first possession (in the case of alternating use), of a thing that can neither be divided nor enjoyed in common should be determined by lottery. 
The fourteenth law is that those things which cannot be enjoyed in common, nor divided, ought to be adjudged to the first possessor; and in some cases to the first born, as acquired by lot. The fifteenth law is that all men that mediate peace be allowed safe conduct. The sixteenth law is that they that are at controversie, submit their Right to the judgement of an Arbitrator. The seventeenth law is that no man is a fit Arbitrator in his own cause. The eighteenth law is that no man should serve as a judge in a case if greater profit, or honour, or pleasure apparently ariseth [for him] out of the victory of one party, than of the other. The nineteenth law is that in a disagreement of fact, the judge should not give more weight to the testimony of one party than another, and absent other evidence, should give credit to the testimony of other witnesses. Hobbes's philosophy includes a frontal assault on the founding principles of the earlier natural legal tradition, disregarding the traditional association of virtue with happiness, and likewise re-defining "law" to remove any notion of the promotion of the common good. Hobbes has no use for Aristotle's association of nature with human perfection, inverting Aristotle's use of the word "nature." Hobbes posits a primitive, unconnected state of nature in which men, having a "natural proclivity...to hurt each other" also have "a Right to every thing, even to one anothers body"; and "nothing can be Unjust" in this "warre of every man against every man" in which human life is "solitary, poore, nasty, brutish, and short." Rejecting Cicero's view that people join in society primarily through "a certain social spirit which nature has implanted in man," Hobbes declares that men join in society simply for the purpose of "getting themselves out from that miserable condition of Warre, which is necessarily consequent...to the naturall Passions of men, when there is no visible Power to keep them in awe." 
As part of his campaign against the classical idea of natural human sociability, Hobbes inverts that fundamental natural legal maxim, the Golden Rule. Hobbes's version is "Do not that to another, which thou wouldst not have done to thy selfe." Cumberland's rebuttal of Hobbes The English cleric Richard Cumberland wrote a lengthy and influential attack on Hobbes's depiction of individual self-interest as the essential feature of human motivation. Historian Knud Haakonssen has noted that in the eighteenth century, Cumberland was commonly placed alongside Alberico Gentili, Hugo Grotius and Samuel Pufendorf "in the triumvirate of seventeenth-century founders of the 'modern' school of natural law." The eighteenth-century philosophers Shaftesbury and Hutcheson "were obviously inspired in part by Cumberland." Historian Jon Parkin likewise describes Cumberland's work as "one of the most important works of ethical and political theory of the seventeenth century." Parkin observes that much of Cumberland's material "is derived from Roman Stoicism, particularly from the work of Cicero", as "Cumberland deliberately cast his engagement with Hobbes in the mould of Cicero's debate between the Stoics, who believed that nature could provide an objective morality, and Epicureans, who argued that morality was human, conventional and self-interested." In doing so, Cumberland de-emphasized the overlay of Christian dogma (in particular, the doctrine of "original sin" and the corresponding presumption that humans are incapable of "perfecting" themselves without divine intervention) that had accreted to natural law in the Middle Ages. By way of contrast to Hobbes's multiplicity of laws, Cumberland states in the very first sentence of his Treatise of the Laws of Nature that "all the Laws of Nature are reduc'd to that one, of Benevolence toward all Rationals." He later clarifies: "By the name Rationals I beg leave to understand, as well God as Man; and I do it upon the Authority of Cicero."
Cumberland argues that the mature development ("perfection") of human nature involves the individual human willing and acting for the common good. For Cumberland, human interdependence precludes Hobbes's natural right of each individual to wage war against all the rest for personal survival. However, Haakonssen warns against reading Cumberland as a proponent of "enlightened self-interest." Rather, the "proper moral love of humanity" is "a disinterested love of God through love of humanity in ourselves as well as others." Cumberland concludes that actions "principally conducive to our Happiness" are those that promote "the Honour and Glory of God" and also "Charity and Justice towards men." Cumberland emphasizes that desiring the well-being of our fellow humans is essential to the "pursuit of our own Happiness." He cites "reason" as the authority for his conclusion that happiness consists in "the most extensive Benevolence," but he also mentions as "Essential Ingredients of Happiness" the "Benevolent Affections," meaning "Love and Benevolence towards others," as well as "that Joy, which arises from their Happiness." American jurisprudence The U.S. Declaration of Independence states that it has become necessary for the people of the United States to assume "the separate and equal station to which the Laws of Nature and of Nature's God entitle them." Some early American lawyers and judges perceived natural law as too tenuous, amorphous, and evanescent a legal basis for grounding concrete rights and governmental limitations. Natural law did, however, serve as authority for legal claims and rights in some judicial decisions, legislative acts, and legal pronouncements. Robert Lowry Clinton argues that the U.S. Constitution rests on a common law foundation and the common law, in turn, rests on a classical natural law foundation. 
European liberal natural law Liberal natural law grew out of the medieval Christian natural law theories and out of Hobbes' revision of natural law, sometimes in an uneasy balance of the two. Sir Alberico Gentili and Hugo Grotius based their philosophies of international law on natural law. In particular, Grotius's writings on freedom of the seas and just war theory directly appealed to natural law. About natural law itself, he wrote that "even the will of an omnipotent being cannot change or abrogate" natural law, which "would maintain its objective validity even if we should assume the impossible, that there is no God or that he does not care for human affairs." (De iure belli ac pacis, Prolegomeni XI). This is the famous argument etiamsi daremus (non esse Deum), that made natural law no longer dependent on theology. However, German church-historians Ernst Wolf and M. Elze disagreed and claimed that Grotius' concept of natural law did have a theological basis. In Grotius' view, the Old Testament contained moral precepts (e.g. the Decalogue) which Christ confirmed and therefore were still valid. Moreover, they were useful in explaining the content of natural law. Both biblical revelation and natural law originated in God and could therefore not contradict each other. In a similar way, Samuel Pufendorf gave natural law a theological foundation and applied it to his concepts of government and international law. John Locke incorporated natural law into many of his theories and philosophy, especially in Two Treatises of Government. There is considerable debate about whether his conception of natural law was more akin to that of Aquinas (filtered through Richard Hooker) or Hobbes' radical reinterpretation, though the effect of Locke's understanding is usually phrased in terms of a revision of Hobbes upon Hobbesian contractarian grounds. 
Locke turned Hobbes' prescription around, saying that if the ruler went against natural law and failed to protect "life, liberty, and property," people could justifiably overthrow the existing state and create a new one. While Locke spoke in the language of natural law, the content of this law was by and large protective of natural rights, and it was this language that later liberal thinkers preferred. Political philosopher Jeremy Waldron has pointed out that Locke's political thought was based on "a particular set of Protestant Christian assumptions." To Locke, the content of natural law was identical with biblical ethics as laid down especially in the Decalogue, Christ's teaching and exemplary life, and St. Paul's admonitions. Locke derived the concept of basic human equality, including the equality of the sexes ("Adam and Eve"), from Genesis 1:26–28, the starting-point of the theological doctrine of Imago Dei. One of the consequences is that as all humans are created equally free, governments need the consent of the governed. Thomas Jefferson, arguably echoing Locke, appealed to unalienable rights in the Declaration of Independence: "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." The Lockean idea that governments need the consent of the governed was also fundamental to the Declaration of Independence, as the American Revolutionaries used it as justification for their separation from the British crown. The Belgian philosopher of law Frank van Dun is among those elaborating a secular conception of natural law in the liberal tradition. 
Anarcho-capitalist theorist Murray Rothbard argues that "the very existence of a natural law discoverable by reason is a potentially powerful threat to the status quo and a standing reproach to the reign of blindly traditional custom or the arbitrary will of the State apparatus." Austrian school economist Ludwig von Mises states that he relaid the general sociological and economic foundations of the liberal doctrine upon utilitarianism rather than natural law, but R. A. Gonce argues that "the reality of the argument constituting his system overwhelms his denial." Rothbard, however, contends that Gonce's analysis of Mises's works contains numerous errors and distortions, among them a confusion over the term Mises uses for scientific laws, "laws of nature," which leads Gonce to mischaracterize Mises as a natural law philosopher. David Gordon notes, "When most people speak of natural law, what they have in mind is the contention that morality can be derived from human nature. If human beings are rational animals of such-and-such a sort, then the moral virtues are...(filling in the blanks is the difficult part)." The Nobel Prize-winning Austrian economist and social theorist F. A. Hayek said that, originally, "the term 'natural' was used to describe an orderliness or regularity that was not the product of deliberate human will. Together with 'organism' it was one of the two terms generally understood to refer to the spontaneously grown in contrast to the invented or designed. Its use in this sense had been inherited from the stoic philosophy, had been revived in the twelfth century, and it was finally under its flag that the late Spanish Schoolmen developed the foundations of the genesis and functioning of spontaneously formed social institutions." The idea that the 'natural' was "the product of designing reason" is itself a product of a seventeenth-century rationalist reinterpretation of the law of nature. 
Luis Molina, for example, when referring to the "natural" price, explained that it is so called because "it results from the thing itself without regard to laws and decrees, but is dependent on many circumstances which alter it, such as the sentiments of men, their estimation of different uses, often even in consequence of whims and pleasures." And even John Locke, when discussing the foundations of natural law and explaining what he meant by "reason," said: "By reason, however, I do not think is meant here that faculty of the understanding which forms trains of thought and deduces proofs, but certain definite principles of action from which spring all virtues and whatever is necessary for the proper moulding of morals." This anti-rationalist approach to human affairs was, for Hayek, the same one that guided Scottish Enlightenment thinkers such as Adam Smith, David Hume, and Adam Ferguson in making their case for liberty. For them, no one can have the knowledge necessary to plan society, and this "natural" or "spontaneous" order of society shows how society can efficiently "plan" itself from the bottom up. Moreover, the idea that law is merely a product of deliberate design, an idea denied by natural law theory and linked to legal positivism, can easily generate totalitarianism: "If law is wholly the product of deliberate design, whatever the designer decrees to be law is just by definition and unjust law becomes a contradiction in terms. The will of the duly authorized legislator is then wholly unfettered and guided solely by his concrete interests." For Hayek, law cannot be purely a product of "reason": "no system of articulated law can be applied except within a framework of generally recognized but often unarticulated rules of justice." 
However, a secular critique of the natural law doctrine was stated by Pierre Charron in his De la sagesse (1601): "The sign of a natural law must be the universal respect in which it is held, for if there was anything that nature had truly commanded us to do, we would undoubtedly obey it universally: not only would every nation respect it, but every individual. Instead there is nothing in the world that is not subject to contradiction and dispute, nothing that is not rejected, not just by one nation, but by many; equally, there is nothing that is strange and (in the opinion of many) unnatural that is not approved in many countries, and authorized by their customs." Contemporary jurisprudence One modern articulation of the concept of natural laws was given by Belina and Dzudzek: "By constant repetition, those practices develop into structures in the form of discourses which can become so natural that we abstract from their societal origins, that the latter are forgotten and seem to be natural laws." In jurisprudence, natural law can refer to several doctrines: that just laws are immanent in nature, in the sense that they can be "discovered" or "found" but not "created" by such things as a bill of rights; that they can emerge through the natural process of resolving conflicts, as embodied in the evolutionary process of the common law; or that the meaning of law is such that its content cannot be determined except by reference to moral principles. These meanings can either oppose or complement each other, although they share the common trait that they rely on inherence as opposed to design in finding just laws. Whereas legal positivism would say that a law can be unjust without being any less a law, a natural law jurisprudence would say that there is something legally deficient about an unjust norm. 
Alongside utilitarianism and Kantianism, natural law jurisprudence shares with virtue ethics the status of a live option for a first-principles theory of ethics in analytic philosophy. The concept of natural law was very important in the development of the English common law. In the struggles between Parliament and the monarch, Parliament often made reference to the Fundamental Laws of England, which were at times said to embody natural law principles since time immemorial and set limits on the power of the monarchy. According to William Blackstone, however, natural law might be useful in determining the content of the common law and in deciding cases of equity, but was not itself identical with the laws of England. Nonetheless, the implication of natural law in the common law tradition has meant that the great opponents of natural law and advocates of legal positivism, like Jeremy Bentham, have also been staunch critics of the common law. Natural law jurisprudence is currently undergoing a period of reformulation (as is legal positivism). The most prominent contemporary natural law jurist, the Australian John Finnis, is based in Oxford; others include the Americans Germain Grisez and Robert P. George, the Canadian Joseph Boyle, and the Brazilian Emídio Brasileiro. All have tried to construct a new version of natural law. The 19th-century anarchist and legal theorist Lysander Spooner was also a figure in the expression of modern natural law. "New Natural Law," as it is sometimes called, originated with Grisez. It focuses on "basic human goods," such as human life, knowledge, and aesthetic experience, which are self-evidently and intrinsically worthwhile, and states that these goods reveal themselves as being incommensurable with one another. The tensions between natural law and positive law have played, and continue to play, a key role in the development of international law. U.S. Supreme Court justices Clarence Thomas and Neil Gorsuch are proponents of natural law. 
See also Notes References Adams, John. 1797. A Defence of the Constitutions of Government of the United States of America. 3rd edition. Philadelphia; repr. Darmstadt, Germany: Scientia Verlag Aalen, 1979. Aristotle. Nicomachean Ethics. Aristotle. Rhetoric. Aristotle. Politics. Aquinas. Summa Theologica. Barham, Francis. Introduction to The Political Works of Marcus Tullius Cicero. Blackstone, William. 1765–9. Commentaries on the Laws of England. Botein, Stephen. 1978. "Cicero as Role Model for Early American Lawyers: A Case Study in Classical 'Influence'". The Classical Journal 73, no. 4 (April–May). Boyer, Allen D. 2004. "Sir Edward Coke, Ciceronianus: Classical Rhetoric and the Common Law Tradition." In Law, Liberty, and Parliament: Selected Essays on the Writings of Sir Edward Coke, ed. Allen D. Boyer. Indianapolis: Liberty Fund. Burlamaqui, Jean Jacques. 1763. The Principles of Natural and Politic Law. Trans. Thomas Nugent. Repr., Indianapolis: The Liberty Fund, 2006. Burns, Tony. 2000. "Aquinas's Two Doctrines of Natural Law." Political Studies 48. pp. 929–46. Carlyle, A. J. 1903. A History of Medieval Political Theory in the West. vol. 1. Edinburgh. Cicero. De Legibus. Cochrane, Charles Norris. 1957. Christianity and Classical Culture: A Study of Thought and Action from Augustus to Augustine. Oxford: Oxford University Press. Corbett, R. J. 2009. "The Question of Natural Law in Aristotle." History of Political Thought 30, no. 2 (Summer): 229–50. Corwin, Edward S. 1955. The "Higher Law" Background of American Constitutional Law. Ithaca, N.Y.: Cornell University Press. Edlin, Douglas E. 2006. "Judicial Review Without a Constitution." Polity 38, no. 3 (July): 345–68. Farrell, James M. 1989. "John Adams's Autobiography: The Ciceronian Paradigm and the Quest for Fame." The New England Quarterly 62, no. 4 (Dec.). Gert, Bernard. [1998] 2005. Morality: Its Nature and Justification. Revised edition. Oxford University Press. Haakonssen, Knud. 1996. 
Natural Law and Moral Philosophy: From Grotius to the Scottish Enlightenment. Cambridge, UK: Cambridge University Press. Haakonssen, Knud. 2000. "The Character and Obligation of Natural Law according to Richard Cumberland." In English Philosophy in the Age of Locke, ed. M.A. Stewart. Oxford. Heinze, Eric. 2013. The Concept of Injustice. Routledge. Jaffa, Harry V. 1952. Thomism and Aristotelianism. Chicago: University of Chicago Press. Jefferson's Literary Commonplace Book. Trans. and ed. Douglas L. Wilson. Princeton, N.J.: Princeton University Press, 1989. McIlwain, Charles Howard. 1932. The Growth of Political Thought in the West: From the Greeks to the End of the Middle Ages. New York: The Macmillan Company. "Natural Law." International Encyclopedia of the Social Sciences. New York, 1968. Reinhold, Meyer. 1984. Classica Americana: The Greek and Roman Heritage in the United States. Detroit: Wayne State University Press. Rommen, Heinrich A. 1947. The Natural Law: A Study in Legal and Social History and Philosophy. Trans. and rev. Thomas R. Hanley. B. Herder Book Co.; repr. Indianapolis: Liberty Fund, 1998. Scott, William Robert. 1900. Francis Hutcheson: His Life, Teaching, and Position in the History of Philosophy. Cambridge; repr. New York: Augustus M. Kelley, 1966. Shellens, Max Salomon. 1959. "Aristotle on Natural Law." Natural Law Forum 4, no. 1. pp. 72–100. Skinner, Quentin. 1978. The Foundations of Modern Political Thought. Cambridge. Waldron, Jeremy. 2002. God, Locke, and Equality: Christian Foundations in Locke's Political Thought. Cambridge, UK: Cambridge University Press. Wijngaards, John. 2011. AMRUTHA: What the Pope's man found out about the Law of Nature. AuthorHouse. Wilson, James. 1967. The Works of James Wilson. Ed. Robert Green McCloskey. Cambridge, Mass.: Harvard University Press. Woo, B. Hoon. 2012. "Pannenberg's Understanding of the Natural Law." Studies in Christian Ethics 25, no. 3: 288–90. Zippelius, Reinhold. 
Rechtsphilosophie, 6th edition, § 12. C.H. Beck, Munich, 2011. External links Stanford Encyclopedia of Philosophy: The Natural Law Tradition in Ethics, by Mark Murphy, 2002. Aquinas' Moral, Political, and Legal Philosophy, by John Finnis, 2005. Natural Law Theories, by John Finnis, 2007. Internet Encyclopedia of Philosophy entry "Natural Law" by Kenneth Einar Himma. Aquinas on natural law. Natural Law explained, evaluated and applied: a clear introduction to Natural Law. Jonathan Dolhenty, Ph.D., "An Overview of Natural Law". Catholic Encyclopedia, "Natural Law". McElroy, Wendy. "The Non-Absurdity of Natural Law," The Freeman, February 1998, Vol. 48, No.
freedom must be submitted. The natural law consists, for the Catholic Church, of one supreme and universal principle from which are derived all our natural moral obligations or duties. Thomas Aquinas summarizes the various ideas of Catholic moral thinkers about what this principle is: since good is what primarily falls under the apprehension of the practical reason, the supreme principle of moral action must have the good as its central idea, and therefore the supreme principle is that good is to be done and evil avoided. Islamic natural law Abū Rayhān al-Bīrūnī, a medieval scholar, scientist, and polymath, understood "natural law" as the survival of the fittest. He argued that the antagonism between human beings can be overcome only through a divine law, which he believed to have been sent through prophets. This is also said to be the general position of the Ashari school, the largest school of Sunni theology, as well as of Ibn Hazm. Conceptualized thus, all "laws" are viewed as originating from subjective attitudes actuated by cultural conceptions and individual preferences, and so the notion of "divine revelation" is justified as some kind of "divine intervention" that replaces human positive laws, which are criticized as being relative, with a single divine positive law. This, however, also entails that anything may be included in "the divine law" just as it would in "human laws," but unlike the latter, "God's law" is seen as binding regardless of the nature of the commands by virtue of "God's might": since God is not subject to human laws and conventions, He may command what He wills just as He may do what He wills. The Maturidi school, the second-largest school of Sunni theology, as well as the Mu'tazilites, posits the existence of a form of natural, or "objective," law that humans can comprehend. Abu Mansur al-Maturidi stated that the human mind could know of the existence of God and the major forms of "good" and "evil" without the help of revelation. 
Al-Maturidi gives the example of stealing, which, he believes, is known to be evil by reason alone because people work hard for their property. Similarly, killing, fornication, and drunkenness are all "discernible evils" that the human mind could know of, according to al-Maturidi. Likewise, Averroes (Ibn Rushd), in his treatise on Justice and Jihad and his commentary on Plato's Republic, writes that the human mind can know of the unlawfulness of killing and stealing and thus of the five maqasid, or higher intents, of the Islamic sharia: the protection of religion, life, property, offspring, and reason. His Aristotelian commentaries also influenced the subsequent Averroist movement and the writings of Thomas Aquinas. Ibn Qayyim Al-Jawziyya also posited that human reason could discern between "great sins" and "good deeds." Nonetheless, he, like Ibn Taymiyah, emphasized the authority of "divine revelation" and asserted that it must be followed even if it "seems" to contradict human reason, though he stressed that most, if not all, of "God's commands" are both sensible (that is, rationalizable) and advantageous to humans in both "this life" and "the hereafter." The concept of Istislah in Islamic law bears some similarities to the natural law tradition in the West, as exemplified by Thomas Aquinas. However, whereas natural law deems good what is self-evidently good, according as it tends towards the fulfillment of the person, istislah typically calls good whatever is related to one of five "basic goods." Many jurists, theologians, and philosophers attempted to abstract these "basic and fundamental goods" from legal precepts. Al-Ghazali, for instance, defined them as religion, life, reason, lineage, and property, while others also add "honor." Brehon law Early Irish law, An Senchus Mor (The Great Tradition), mentions in a number of places recht aicned, or natural law. 
This is a concept predating European legal theory, and reflects a type of law that is universal and may be determined by reason and observation of natural action. Neil McLeod identifies concepts that law must accord with: fír (truth) and dliged (right or entitlement). These two terms occur frequently, though Irish law never strictly defines them. Similarly, the term córus (law in accordance with proper order) occurs in some places, and even in the titles of certain texts. These were two very real concepts to the jurists, and the value of a given judgment with respect to them was apparently ascertainable. McLeod has also suggested that most of the specific laws mentioned have passed the test of time and thus their truth has been confirmed, while other provisions are justified in other ways because they are younger and have not been tested over time. The laws were written in the oldest dialect of the Irish language, called Bérla Féini [Bairla-faina], which even at the time was so difficult that persons about to become brehons had to be specially instructed in it; the training required to become a learned brehon usually took 20 years, although under the law any third person could fulfill the duty if both parties agreed and both were sane. Brehon law has also been adopted by an ethno-Celtic breakaway subculture, as it has religious undertones and freedom of religious expression allows it to once again be used as a valid system in Western Europe. English jurisprudence Heinrich A. Rommen remarked upon "the tenacity with which the spirit of the English common law retained the conceptions of natural law and equity which it had assimilated during the Catholic Middle Ages, thanks especially to the influence of Henry de Bracton (d. 1268) and Sir John Fortescue (d. cir. 1476)." 
Bracton's translator notes that Bracton "was a trained jurist with the principles and distinctions of Roman jurisprudence firmly in mind"; but Bracton adapted such principles to English purposes rather than copying slavishly. In particular, Bracton turned the imperial Roman maxim that "the will of the prince is law" on its head, insisting that the king is under the law. The legal historian Charles F. Mullett has noted Bracton's "ethical definition of law, his recognition of justice, and finally his devotion to natural rights." Bracton considered justice to be the "fountain-head" from which "all rights arise." For his definition of justice, Bracton quoted the twelfth-century Italian jurist Azo: "'Justice is the constant and unfailing will to give to each his right.'" Bracton's work was the second legal treatise studied by the young apprentice lawyer Thomas Jefferson. Fortescue stressed "the supreme importance of the law of God and of nature" in works that "profoundly influenced the course of legal development in the following centuries." The legal scholar Ellis Sandoz has noted that "the historically ancient and the ontologically higher law—eternal, divine, natural—are woven together to compose a single harmonious texture in Fortescue's account of English law." As the legal historian Norman Doe explains: "Fortescue follows the general pattern set by Aquinas. The objective of every legislator is to dispose people to virtue. It is by means of law that this is accomplished. Fortescue's definition of law (also found in Accursius and Bracton), after all, was 'a sacred sanction commanding what is virtuous [honesta] and forbidding the contrary.'" Fortescue cited the great Italian Leonardo Bruni for his statement that "virtue alone produces happiness." Christopher St. Germain's The Doctor and Student was a classic of English jurisprudence, and it was thoroughly annotated by Thomas Jefferson. St. 
Germain informs his readers that English lawyers generally do not use the phrase "law of nature," but rather use "reason" as the preferred synonym. Norman Doe notes that St. Germain's view "is essentially Thomist," quoting Thomas Aquinas's definition of law as "an ordinance of reason made for the common good by him who has charge of the community, and promulgated." Sir Edward Coke was the preeminent jurist of his time. Coke's preeminence extended across the ocean: "For the American revolutionary leaders, 'law' meant Sir Edward Coke's custom and right reason." Coke defined law as "perfect reason, which commands those things that are proper and necessary and which prohibits contrary things." For Coke, human nature determined the purpose of law; and law was superior to any one person's reason or will. Coke's discussion of natural law appears in his report of Calvin's Case (1608): "The law of nature is that which God at the time of creation of the nature of man infused into his heart, for his preservation and direction." In this case the judges found that "the ligeance or faith of the subject is due unto the King by the law of nature: secondly, that the law of nature is part of the law of England: thirdly, that the law of nature was before any judicial or municipal law: fourthly, that the law of nature is immutable." To support these findings, the assembled judges (as reported by Coke, who was one of them) cited as authorities Aristotle, Cicero, and the Apostle Paul; as well as Bracton, Fortescue, and St. Germain. After Coke, the most famous common law jurist of the seventeenth century is Sir Matthew Hale. Hale wrote a treatise on natural law that circulated among English lawyers in the eighteenth century and survives in three manuscript copies. This natural-law treatise has been published as Of the Law of Nature (2015). 
Hale's definition of the natural law reads: "It is the Law of Almighty God given by him to Man with his Nature discovering the morall good and moral evill of Moral Actions, commanding the former, and forbidding the latter by the secret voice or dictate of his implanted nature, his reason, and his concience." He viewed natural law as antecedent, preparatory, and subsequent to civil government, and stated that human law "cannot forbid what the Law of Nature injoins, nor Command what the Law of Nature prohibits." He cited as authorities Plato, Aristotle, Cicero, Seneca, Epictetus, and the Apostle Paul. He was critical of Hobbes's reduction of natural law to self-preservation and Hobbes's account of the state of nature, but drew positively on Hugo Grotius's De jure belli ac pacis, Francisco Suárez's Tractatus de legibus ac deo legislatore, and John Selden's De jure naturali et gentium juxta disciplinam Ebraeorum. As early as the thirteenth century, it was held that "the law of nature...is the ground of all laws" and by the Chancellor and Judges that "it is required by the law of nature that every person, before he can be punish'd, ought to be present; and if absent by contumacy, he ought to be summoned and make default." Further, in 1824, we find it held that "proceedings in our Courts are founded upon the law of England, and that law is again founded upon the law of nature and the revealed law of God. If the right sought to be enforced is inconsistent with either of these, the English municipal courts cannot recognize it." Hobbes By the 17th century, the medieval teleological view came under intense criticism from some quarters. Thomas Hobbes instead founded a contractarian theory of legal positivism on what all men could agree upon: what they sought (happiness) was subject to contention, but a broad consensus could form around what they feared (violent death at the hands of another). 
The natural law was how a rational human being, seeking to survive and prosper, would act. Natural law, therefore, was discovered by considering humankind's natural rights, whereas previously it could be said that natural rights were discovered by considering the natural law. In Hobbes' opinion, the only way natural law could prevail was for men to submit to the commands of the sovereign. Because the ultimate source of law now comes from the sovereign, and the sovereign's decisions need not be grounded in morality, legal positivism is born. Jeremy Bentham's modifications on legal positivism further developed the theory. As used by Thomas Hobbes in his treatises Leviathan and De Cive, natural law is "a precept, or general rule, found out by reason, by which a man is forbidden to do that which is destructive of his life, or takes away the means of preserving the same; and to omit that by which he thinks it may best be preserved." According to Hobbes, there are nineteen Laws. The first two are expounded in chapter XIV of Leviathan ("of the first and second natural laws; and of contracts"); the others in chapter XV ("of other laws of nature"). The first law of nature is that every man ought to endeavour peace, as far as he has hope of obtaining it; and when he cannot obtain it, that he may seek and use all helps and advantages of war. The second law of nature is that a man be willing, when others are so too, as far forth, as for peace, and defence of himself he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himself. The third law is that men perform their covenants made. In this law of nature consisteth the fountain and original of justice... when a covenant is made, then to break it is unjust and the definition of injustice is no other than the not performance of covenant. And whatsoever is not unjust is just. 
The fourth law is that a man which receiveth benefit from another of mere grace, endeavour that he which giveth it, have no reasonable cause to repent him of his good will. Breach of this law is called ingratitude. The fifth law is complaisance: that every man strive to accommodate himself to the rest. The observers of this law may be called sociable; the contrary, stubborn, insociable, froward, intractable. The sixth law is that upon caution of the future time, a man ought to pardon the offences past of them that repenting, desire it. The seventh law is that in revenges, men look not at the greatness of the evil past, but the greatness of the good to follow. The eighth law is that no man by deed, word, countenance, or gesture, declare hatred or contempt of another. The breach of which law is commonly called contumely. The ninth law is that every man acknowledge another for his equal by nature. The breach of this precept is pride. The tenth law is that at the entrance into the conditions of peace, no man require to reserve to himself any right, which he is not content should be reserved to every one of the rest. The breach of this precept is arrogance, and observers of the precept are called modest. The eleventh law is that if a man be trusted to judge between man and man, that he deal equally between them. The twelfth law is that such things as cannot be divided, be enjoyed in common, if it can be; and if the quantity of the thing permit, without stint; otherwise proportionably to the number of them that have right. The thirteenth law is that the entire right, or else...the first possession (in the case of alternating use), of a thing that can neither be divided nor enjoyed in common should be determined by lottery. The fourteenth law is that those things which cannot be enjoyed in common, nor divided, ought to be adjudged to the first possessor; and in some cases to the first born, as acquired by lot. 
The fifteenth law is that all men that mediate peace be allowed safe conduct. The sixteenth law is that they that are at controversie, submit their Right to the judgement of an Arbitrator. The seventeenth law is that no man is a fit Arbitrator in his own cause. The eighteenth law is that no man should serve as a judge in a case if greater profit, or honour, or pleasure apparently ariseth [for him] out of the victory of one party, than of the other. The nineteenth law is that in a disagreement of fact, the judge should not give more weight to the testimony of one party than another, and absent other evidence, should give credit to the testimony of other witnesses. Hobbes's philosophy includes a frontal assault on the founding principles of the earlier natural legal tradition, disregarding the traditional association of virtue with happiness, and likewise re-defining "law" to remove any notion of the promotion of the common good. Hobbes has no use for Aristotle's association of nature with human perfection, inverting Aristotle's use of the word "nature." Hobbes posits a primitive, unconnected state of nature in which men, having a "natural proclivity...to hurt each other" also have "a Right to every thing, even to one anothers body"; and "nothing can be Unjust" in this "warre of every man against every man" in which human life is "solitary, poore, nasty, brutish, and short." Rejecting Cicero's view that people join in society primarily through "a certain social spirit which nature has implanted in man," Hobbes declares that men join in society simply for the purpose of "getting themselves out from that miserable condition of Warre, which is necessarily consequent...to the naturall Passions of men, when there is no visible Power to keep them in awe." As part of his campaign against the classical idea of natural human sociability, Hobbes inverts that fundamental natural legal maxim, the Golden Rule. 
Hobbes's version is "Do not that to another, which thou wouldst not have done to thy selfe." Cumberland's rebuttal of Hobbes The English cleric Richard Cumberland wrote a lengthy and influential attack on Hobbes's depiction of individual self-interest as the essential feature of human motivation. Historian Knud Haakonssen has noted that in the eighteenth century, Cumberland was commonly placed alongside Hugo Grotius and Samuel Pufendorf "in the triumvirate of seventeenth-century founders of the 'modern' school of natural law." The eighteenth-century philosophers Shaftesbury and Hutcheson "were obviously inspired in part by Cumberland." Historian Jon Parkin likewise describes Cumberland's work as "one of the most important works of ethical and political theory of the seventeenth century." Parkin observes that much of Cumberland's material "is derived from Roman Stoicism, particularly from the work of Cicero," as "Cumberland deliberately cast his engagement with Hobbes in the mould of Cicero's debate between the Stoics, who believed that nature could provide an objective morality, and Epicureans, who argued that morality was human, conventional and self-interested." In doing so, Cumberland de-emphasized the overlay of Christian dogma (in particular, the doctrine of "original sin" and the corresponding presumption that humans are incapable of "perfecting" themselves without divine intervention) that had accreted to natural law in the Middle Ages. By way of contrast to Hobbes's multiplicity of laws, Cumberland states in the very first sentence of his Treatise of the Laws of Nature that "all the Laws of Nature are reduc'd to that one, of Benevolence toward all Rationals." He later clarifies: "By the name Rationals I beg leave to understand, as well God as Man; and I do it upon the Authority of Cicero." 
For Cumberland, human interdependence precludes Hobbes's natural right of each individual to wage war against all the rest for personal survival. However, Haakonssen warns against reading Cumberland as a proponent of "enlightened self-interest." Rather, the "proper moral love of humanity" is "a disinterested love of God through love of humanity in ourselves as well as others." Cumberland concludes that actions "principally conducive to our Happiness" are those that promote "the Honour and Glory of God" and also "Charity and Justice towards men." Cumberland emphasizes that desiring the well-being of our fellow humans is essential to the "pursuit of our own Happiness." He cites "reason" as the authority for his conclusion that happiness consists in "the most extensive Benevolence," but he also mentions as "Essential Ingredients of Happiness" the "Benevolent Affections," meaning "Love and Benevolence towards others," as well as "that Joy, which arises from their Happiness." American jurisprudence The U.S. Declaration of Independence states that it has become necessary for the people of the United States to assume "the separate and equal station to which the Laws of Nature and of Nature's God entitle them." Some early American lawyers and judges perceived natural law as too tenuous, amorphous, and evanescent a legal basis for grounding concrete rights and governmental limitations. Natural law did, however, serve as authority for legal claims and rights in some judicial decisions, legislative acts, and legal pronouncements. Robert Lowry Clinton argues that the U.S. Constitution rests on a common law foundation and the common law, in turn, rests on a classical natural law foundation. European liberal natural law Liberal natural law grew out of the medieval Christian natural law theories and out of Hobbes' revision of natural law, sometimes in an uneasy balance of the two. Sir Alberico Gentili and Hugo Grotius based their philosophies of international law on natural law. 
In particular, Grotius's writings on freedom of the seas and just war theory directly appealed to natural law. About natural law itself, he wrote that "even the will of an omnipotent being cannot change or abrogate" natural law, which "would maintain its objective validity even if we should assume the impossible, that there is no God or that he does not care for human affairs." (De iure belli ac pacis, Prolegomeni XI). This is the famous argument etiamsi daremus (non esse Deum), that made natural law no longer dependent on theology. However, German church-historians Ernst Wolf and M. Elze disagreed and claimed that Grotius' concept of natural law did have a theological basis. In Grotius' view, the Old Testament contained moral precepts (e.g. the Decalogue) which Christ confirmed and therefore were still valid. Moreover, they were useful in explaining the content of natural law. Both biblical revelation and natural law originated in God and could therefore not contradict each other. In a similar way, Samuel Pufendorf gave natural law a theological foundation and applied it to his concepts of government and international law. John Locke incorporated natural law into many of his theories and philosophy, especially in Two Treatises of Government. There is considerable debate about whether his conception of natural law was more akin to that of Aquinas (filtered through Richard Hooker) or Hobbes' radical reinterpretation, though the effect of Locke's understanding is usually phrased in terms of a revision of Hobbes upon Hobbesian contractarian grounds. Locke turned Hobbes' prescription around, saying that if the ruler went against natural law and failed to protect "life, liberty, and property," people could justifiably overthrow the existing state and create a new one. While Locke spoke in the language of natural law, the content of this law was by and large protective of natural rights, and it was this language that later liberal thinkers preferred. 
Political philosopher Jeremy Waldron has pointed out that Locke's political thought was based on "a particular set of Protestant Christian assumptions." To Locke, the content of natural law was identical with biblical ethics as laid down especially in the Decalogue, Christ's teaching and exemplary life, and St. Paul's admonitions. Locke derived the concept of basic human equality, including the equality of the sexes ("Adam and Eve"), from Genesis 1, 26–28, the starting-point of the theological doctrine of Imago Dei. One of the consequences is that as all humans are created equally free, governments need the consent of the governed. Thomas Jefferson, arguably echoing Locke, appealed to unalienable rights in the Declaration of Independence, "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." The Lockean idea that governments need the consent of the governed was also fundamental to the Declaration of Independence, as the American Revolutionaries used it as justification for their separation from the British crown. The Belgian philosopher of law Frank van Dun is one among those who are elaborating a secular conception of natural law in the liberal tradition. Anarcho-capitalist theorist Murray Rothbard argues that "the very existence of a natural law discoverable by reason is a potentially powerful threat to the status quo and a standing reproach to the reign of blindly traditional custom or the arbitrary will of the State apparatus." Austrian school economist Ludwig von Mises states that he relaid the general sociological and economic foundations of the liberal doctrine upon utilitarianism, rather than natural law, but R. A. Gonce argues that "the reality of the argument constituting his system overwhelms his denial." 
Murray Rothbard, however, says that Gonce's analysis of Mises's works contains many errors and distortions, including confusion about the term Mises uses to refer to scientific laws, "laws of nature," which leads Gonce to characterize Mises as a natural law philosopher. David Gordon notes, "When most people speak of natural law, what they have in mind is the contention that morality can be derived from human nature. If human beings are rational animals of such-and-such a sort, then the moral virtues are...(filling in the blanks is the difficult part)." Nobel Prize-winning Austrian economist and social theorist F. A. Hayek said that, originally, "the term 'natural' was used to describe an orderliness or regularity that was not the product of deliberate human will. Together with 'organism' it was one of the two terms generally understood to refer to the spontaneously grown in contrast to the invented or designed. Its use in this sense had been inherited from the stoic philosophy, had been revived in the twelfth century, and it was finally under its flag that the late Spanish Schoolmen developed the foundations of the genesis and functioning of spontaneously formed social institutions." The idea that 'natural' was "the product of designing reason" is a product of a seventeenth-century rationalist reinterpretation of the law of nature. Luis Molina, for example, when referring to the 'natural' price, explained that it is "so called because 'it results from the thing itself without regard to laws and decrees, but is dependent on many circumstances which alter
Doctrine Nestorianism is a radical form of dyophysitism, differing from orthodox dyophysitism on several points, mainly by opposition to the concept of hypostatic union. It can be seen as the antithesis to Eutychian Monophysitism, which emerged in reaction to Nestorianism. Where Nestorianism holds that Christ had two loosely united natures, divine and human, Monophysitism holds that he had but a single nature, his human nature being absorbed into his divinity. A brief definition of Nestorian Christology can be given as: "Jesus Christ, who is not identical with the Son but personally united with the Son, who lives in him, is one hypostasis and one nature: human." This contrasts with Nestorius' own teaching that the Word, which is eternal, and the Flesh, which is not, came together in a hypostatic union, 'Jesus Christ', Jesus thus being both fully man and God, of two ousia (essences) but of one prosopon (person). Both Nestorianism and Monophysitism were condemned as heretical at the Council of Chalcedon. Nestorius developed his Christological views as an attempt to understand and explain rationally the incarnation of the divine Logos, the Second Person of the Holy Trinity as the man Jesus. He had studied at the School of Antioch where his mentor had been Theodore of Mopsuestia; Theodore and other Antioch theologians had long taught a literalist interpretation of the Bible and stressed the distinctiveness of the human and divine natures of Jesus. Nestorius took his Antiochene leanings with him when he was appointed Patriarch of Constantinople by Byzantine emperor Theodosius II in 428. Nestorius's teachings became the root of controversy when he publicly challenged the long-used title Theotokos ("God-Bearer") for Mary. 
He suggested that the title denied Christ's full humanity, arguing instead that Jesus had two persons (dyoprosopism), the divine Logos and the human Jesus. As a result of this prosopic duality, he proposed Christotokos (Christ-Bearer) as a more suitable title for Mary. Nestorius' opponents found his teaching too close to the heresy of adoptionism – the idea that Christ had been born a man who had later been "adopted" as God's son. Nestorius was especially criticized by Cyril, Patriarch of Alexandria, who argued that Nestorius's teachings undermined the unity of Christ's divine and human natures at the Incarnation. Some of Nestorius's opponents argued that he put too much emphasis on the human nature of Christ, and others contended that the difference that Nestorius implied between the human nature and the divine nature created a fracture in the singularity of Christ, thus creating two Christ figures. Nestorius himself always insisted that his views were orthodox, though they were deemed heretical at the Council of Ephesus in 431, leading to the Nestorian Schism, when the churches supportive of Nestorius separated from the rest of the Christian Church. However, this formulation was never adopted by all churches termed "Nestorian". Indeed, the modern Assyrian Church of the East, which reveres Nestorius, does not fully subscribe to Nestorian doctrine, though it does not employ the title Theotokos. Nestorian Schism Nestorianism became a distinct sect following the Nestorian Schism, beginning in the 430s. Nestorius had come under fire from Western theologians, most notably Cyril of Alexandria. Cyril had both theological and political reasons for attacking Nestorius; on top of feeling that Nestorianism was an error against true belief, he also wanted to denigrate the head of a competing patriarchate. Cyril and Nestorius asked Pope Celestine I to weigh in on the matter. Celestine found that the title Theotokos was orthodox, and authorized Cyril to ask Nestorius to recant. 
Cyril, however, used the opportunity to further attack Nestorius, who pleaded with Emperor Theodosius II to call a council so that all grievances could be aired. In 431 Theodosius called the Council of Ephesus. However, the council ultimately sided with Cyril, who held that the Christ contained two natures in one divine person (hypostasis, unity of subsistence), and that the Virgin Mary, conceiving and bearing this divine person, is truly called the Mother of God (Theotokos, meaning God-bearer). The council accused Nestorius of heresy and deposed him as patriarch. He returned to his monastery and, in 436, was banished to Upper Egypt. Nestorianism was officially anathematized, a ruling reiterated at the Council of Chalcedon in 451. However, a number of churches, particularly those associated with the School of Edessa, supported Nestorius – though not necessarily his doctrine – and broke with the churches of the West. Many of Nestorius' supporters relocated to the Sasanian Empire of Iran, home to a vibrant but persecuted Christian minority. In Upper Egypt, Nestorius wrote his Book of Heraclides, responding to the two councils at Ephesus (431, 449). Church of the East The western provinces of the Persian Empire had been home to Christian communities, headed by metropolitans, and later patriarchs of Seleucia-Ctesiphon. The Christian minority in Persia was frequently persecuted by the Zoroastrian majority, which accused local Christians of political leanings towards the Roman Empire. In 424, the Church in Persia declared itself independent, in order to ward off allegations of any foreign allegiance. By the end of the 5th century, the Persian Church increasingly aligned itself with the teachings of Theodore of Mopsuestia and his followers, many of whom became dissidents after the councils of Ephesus (431) and Chalcedon (451). 
The Persian Church became increasingly opposed to doctrines promoted by those councils, thus furthering the divide between Chalcedonian Christianity and Christianity in Persia. In 486, the Metropolitan Barsauma of Nisibis publicly accepted Nestorius' mentor Theodore of Mopsuestia as a spiritual authority. In 489, when the School of Edessa in Mesopotamia was closed by Byzantine Emperor Zeno for its pro-Nestorian teachings, the school relocated to its original home of Nisibis, becoming again the School of Nisibis, leading to a wave of Christian dissident immigration into Persia. The Persian patriarch Babai (497–502) reiterated and expanded upon the church's esteem for Theodore of Mopsuestia. Now firmly established in Persia, with centers in Nisibis, Ctesiphon, and Gundeshapur, and several metropoleis, the Persian Church began to branch out beyond the Sasanian Empire. However, through the sixth century, the church was frequently beset with internal strife and persecution by Zoroastrians. The infighting led to a schism, which lasted from 521 until around 539 when
gradual development led to the creation of specific doctrinal views within the Church of the East. The evolution of those views was finalized by the prominent East Syriac theologian Babai the Great (d. 628), who used the specific Syriac term qnoma (ܩܢܘܡܐ) as a designation for the dual (divine and human) substances within one prosopon (person or hypostasis) of Christ. Such views were officially adopted by the Church of the East at a council held in 612. Opponents of such views labeled them as "Nestorian", thus creating the practice of misnaming the Church of the East as Nestorian. For a long time, such labeling seemed appropriate, since Nestorius was officially venerated as a saint in the Church of the East. In modern religious studies, this label has been criticized as improper and misleading. As a consequence, the use of the Nestorian label in scholarly literature, and also in the field of inter-denominational relations, is gradually being reduced to its primary meaning, focused on the original teachings of Nestorius. History Nestorianism was condemned as heresy at the Council of Ephesus (431). The Armenian Church rejected the Council of Chalcedon (451) because they believed the Chalcedonian Definition was too similar to Nestorianism. The Persian Nestorian Church, on the other hand, supported the spread of Nestorianism in Persarmenia. The Armenian Church and other eastern churches saw the rise of Nestorianism as a threat to the independence of their Church. Peter the Iberian, a Georgian prince, also strongly opposed the Chalcedonian Creed. Thus, in 491, Catholicos Babken I of Armenia, along with the Albanian and Iberian bishops, met in Vagharshapat and issued a condemnation of the Chalcedonian Definition. Nestorians held that the Council of Chalcedon proved the orthodoxy of their faith, and they began persecuting non-Chalcedonian or Monophysite Syrian Christians during the reign of Peroz I. 
In response to pleas for assistance from the Syrian Church, Armenian prelates issued a letter addressed to Persian Christians reaffirming their condemnation of Nestorianism as heresy. Following the exodus to Persia, scholars expanded on the teachings of Nestorius and his mentors, particularly after the relocation of the School of Edessa to the (then) Persian city of Nisibis (modern-day Nusaybin in Turkey) in 489, where it became known as the School of Nisibis. Nestorian monasteries propagating the teachings of the Nisibis school flourished in 6th-century Persarmenia. Despite this initial Eastern expansion, the Nestorians' missionary success was eventually deterred. David J. Bosch observes, "By the end of the fourteenth century, however, the Nestorian and other churches—which at one time had dotted the landscape of all of Central and even parts of East Asia—were all but wiped out. Isolated pockets of Christianity survived only in India. The religious victors on the vast Central Asian mission field of the Nestorians were Islam and Buddhism". 
a markup construct nCr or nCr, mathematical notation for combinations San Carlos Airport (Nicaragua) IATA code N for "National" National Capital Region, a conurbation surrounding a capital National Capital Region (Canada) National Capital Region (India) National Capital Region (Japan) National Capital Region (Philippines) National Capital Region (United States) National Catholic Reporter,
Her mother, Bonnie Sherr Klein, is best known for her anti-pornography film Not a Love Story. Her father, Michael Klein, is a physician and a member of Physicians for Social Responsibility. Her brother, Seth Klein, is an author and the former director of the British Columbia office of the Canadian Centre for Policy Alternatives. Before World War II, her paternal grandparents were communists, but they began to turn against the Soviet Union after the Molotov–Ribbentrop Pact in 1939. In 1942, her grandfather, an animator at Disney, was fired after the 1941 strike, and had to switch to working in a shipyard instead. By 1956 they had abandoned communism. Klein's father grew up surrounded by ideas of social justice and racial equality, but found it "difficult and frightening to be the child of Communists", a so-called red diaper baby. Klein's husband, Avi Lewis, was born into a political and journalistic family. His grandfather, David Lewis, was an architect and leader of the federal New Democratic Party, while his father, Stephen Lewis, was a leader of the Ontario New Democratic Party. Avi Lewis works as a TV journalist and documentary filmmaker. The couple's only child, son Toma, was born on June 13, 2012. Early life Klein spent much of her teenage years in shopping malls, obsessed with designer labels. As a child and teenager, she found it "very oppressive to have a very public feminist mother" and she rejected politics, instead embracing "full-on consumerism". She has attributed her change in worldview to two catalysts. One came when she was 17 and preparing for the University of Toronto: her mother had a stroke and became severely disabled. Naomi, her father, and her brother took care of Bonnie through the period in hospital and at home, making educational sacrifices to do so. That year off prevented her "from being such a brat". 
The next year, after she began her studies at the University of Toronto, the second catalyst occurred: the 1989 École Polytechnique massacre of female engineering students, which proved to be a wake-up call to feminism. Klein's writing career began with contributions to The Varsity, a student newspaper, where she served as editor-in-chief. After her third year at the University of Toronto, she dropped out of university to take a job at The Globe and Mail, followed by an editorship at This Magazine. In 1995, she returned to the University of Toronto with the intention of finishing her degree but left university to pursue an internship in journalism before acquiring the final credits required to complete her degree. Works No Logo In 1999 Klein published the book No Logo, which for many became a manifesto of the anti-globalization movement. In it, she attacks brand-oriented consumer culture and the operations of large corporations. She also accuses several such corporations of unethically exploiting workers in the world's poorest countries in pursuit of greater profits. In this book, Klein criticized Nike so severely that Nike published a point-by-point response. No Logo became an international bestseller, selling over one million copies in over 28 languages. Fences and Windows Klein's Fences and Windows (2002) is a collection of her articles and speeches written on behalf of the anti-globalization movement (all proceeds from the book go to benefit activist organizations through The Fences and Windows Fund). The Take The Take (2004), a documentary film collaboration by Klein and Lewis, concerns factory workers in Argentina who took over a closed plant and resumed production, operating as a collective. The first African screening was in the Kennedy Road shack settlement in the South African city of Durban, where the Abahlali baseMjondolo movement began. 
An article in Z Communications criticized The Take for its portrayal of the Argentine general and politician Juan Domingo Perón, arguing that he was falsely portrayed as a social democrat. The Shock Doctrine Klein's third book, The Shock Doctrine: The Rise of Disaster Capitalism, was published on September 4, 2007. The book argues that the free market policies of Nobel Laureate Milton Friedman and the Chicago School of Economics rose to prominence in countries such as Chile under Pinochet, Poland, and Russia under Yeltsin. The book also argues that policy initiatives (for instance, the privatization of Iraq's economy under the Coalition Provisional Authority) were rushed through while the citizens of these countries were in shock from disasters, upheavals, or invasion. The book became an international and New York Times bestseller and was translated into 28 languages. Central to the book's thesis is the contention that those who wish to implement unpopular free market policies now routinely do so by taking advantage of certain features of the aftermath of major disasters, be they economic, political, military or natural. The suggestion is that when a society experiences a major 'shock' there is a widespread desire for a rapid and decisive response to correct the situation; this desire for bold and immediate action provides an opportunity for unscrupulous actors to implement policies which go far beyond a legitimate response to disaster. The book suggests that when the rush to act means the specifics of a response will go unscrutinized, that is the moment when unpopular and unrelated policies will intentionally be rushed into effect. The book appears to claim that these shocks are in some cases intentionally encouraged or even manufactured. Klein identifies the "shock doctrine", elaborating on Joseph Schumpeter, as the latest in capitalism's phases of "creative destruction". The Shock Doctrine was adapted into a short film of the same name, released onto YouTube. 
The original is no longer available on the site; however, a duplicate was published in 2008. The film was directed by Jonás Cuarón and produced and co-written by his father, Alfonso Cuarón. The original video was viewed over one million times. The publication of The Shock Doctrine increased Klein's prominence, with The New Yorker judging her "the most visible and influential figure on the American left—what Howard Zinn and Noam Chomsky were thirty years ago." On February 24, 2009, the book was awarded the inaugural Warwick Prize for Writing from the University of Warwick in England. The prize carried a cash award of £50,000.

This Changes Everything: Capitalism vs. the Climate

Klein's fourth book, This Changes Everything: Capitalism vs. the Climate, was published in September 2014. The book puts forth the argument that the hegemony of neoliberal market fundamentalism is blocking any serious reforms to halt climate change and protect the environment. Questioned about Klein's claim that capitalism and controlling climate change were incompatible, Benoit Blarel, manager of the Environment and Natural Resources global practice at the World Bank, said that the write-off of fossil fuels necessary to control climate change "will have a huge impact all over" and that the World Bank was "starting work on this". The book won the 2014 Hilary Weston Writers' Trust Prize for Nonfiction and was a shortlisted nominee for the 2015 Shaughnessy Cohen Prize for Political Writing.

No Is Not Enough: Resisting Trump's Shock Politics and Winning the World We Need

Klein's fifth book, No Is Not Enough: Resisting Trump's Shock Politics and Winning the World We Need, was published in June 2017. It has also been published internationally with the alternative subtitle Defeating the New Shock Politics.
The Battle for Paradise: Puerto Rico Takes on the Disaster Capitalists

Released in June 2018 as a paperback and e-book, The Battle for Paradise: Puerto Rico Takes on the Disaster Capitalists covers what San Juan Mayor Carmen Yulín Cruz describes as the colonialism unmasked after Hurricane Maria, which deepened inequality and created "a fierce humanitarian crisis."

On Fire: The (Burning) Case for a Green New Deal

In April 2019, Simon & Schuster announced they would be publishing Klein's seventh book, On Fire: The (Burning) Case for a Green New Deal, which was published on September 17, 2019. On Fire is a collection of essays focusing on climate change and the urgent actions needed to preserve the world. In the opening essay, Klein relates her meeting with Greta Thunberg and discusses young people's entry into the movement speaking out for climate awareness and change. She supports the Green New Deal throughout the book, and in the final essay she discusses the 2020 U.S. election, stating: "The stakes of the election are almost unbearably high. It's why I wrote the book and decided to put it out now and why I'll be doing whatever I can to help push people toward supporting a candidate with the most ambitious Green New Deal platform—so that they win the primaries and then the general."

Views

Iraq War criticism

Klein has written about the Iraq War. In "Baghdad Year Zero" (Harper's Magazine, September 2004), Klein argues that, contrary to popular belief, the George W. Bush administration did have a clear plan for post-invasion Iraq: to build a completely unconstrained free market economy. She describes plans to allow foreigners to extract wealth from Iraq and the methods used to achieve those goals. Her "Baghdad Year Zero" was one of the inspirations for the 2008 film War, Inc.
Klein's "Bring Najaf to New York" (The Nation, August 2004) argued that Muqtada Al Sadr's Mahdi Army "represents the overwhelmingly mainstream sentiment in Iraq" and that, if he were elected, "Sadr would try to turn Iraq into a theocracy like Iran," although his immediate demands were for "direct elections and an end to foreign occupation". Marc Cooper, a former Nation columnist, attacked the assertion that Al Sadr represented mainstream Iraqi sentiment and that American forces had brought the war to the holy city of Najaf. "Klein should know better," he
pain)
Metastatic bone pain
Postoperative pain
Muscle stiffness and pain due to Parkinson's disease
Pyrexia (fever)
Ileus
Renal colic
Macular edema
Traumatic injury

Chronic pain and cancer-related pain

The effectiveness of NSAIDs for treating non-cancer chronic pain and cancer-related pain in children and adolescents is not clear; sufficient numbers of high-quality randomised controlled trials have not been conducted.

Inflammation

Differences in anti-inflammatory activity between the various individual NSAIDs are small, but there is considerable variation in individual patient response and tolerance to these drugs. About 60% of patients will respond to any NSAID; of the others, those who do not respond to one may well respond to another. Pain relief starts soon after taking the first dose, and a full analgesic effect should normally be obtained within a week, whereas an anti-inflammatory effect may not be achieved (or may not be clinically assessable) for up to three weeks. If appropriate responses are not obtained within these times, another NSAID should be tried.

Surgical pain

Pain following surgery can be significant, and many people require strong pain medications such as opioids. There is some low-certainty evidence that starting NSAID painkiller medications in adults early, before surgery, may help reduce post-operative pain, and also reduce the dose or quantity of opioid medications required after surgery. Any increased risk of surgical bleeding, bleeding in the gastrointestinal system, myocardial infarction, or injury to the kidneys has not been well studied. When used in combination with paracetamol, the analgesic effect on post-operative pain may be improved.

Aspirin

Aspirin, the only NSAID able to irreversibly inhibit COX-1, is also indicated for antithrombosis through inhibition of platelet aggregation. This is useful for the management of arterial thrombosis and the prevention of adverse cardiovascular events such as heart attacks.
Aspirin inhibits platelet aggregation by inhibiting the action of thromboxane A2.

Dentistry

NSAIDs are useful in the management of post-operative dental pain following invasive dental procedures such as dental extraction. When not contra-indicated, they are favoured over the use of paracetamol alone due to the anti-inflammatory effect they provide. There is weak evidence suggesting that taking pre-operative analgesia can reduce the length of post-operative pain associated with placing orthodontic spacers under local anaesthetic.

Contraindications

NSAIDs may be used with caution by people with the following conditions:
Irritable bowel syndrome (IBS)
Persons who are over age 50 and who have a family history of gastrointestinal (GI) problems
Persons who have had previous gastrointestinal problems from NSAID use

NSAIDs should usually be avoided by people with the following conditions:
Peptic ulcer or stomach bleeding
Uncontrolled hypertension
Kidney disease
Inflammatory bowel disease (Crohn's disease or ulcerative colitis)
Past transient ischemic attack (excluding aspirin)
Past stroke (excluding aspirin)
Past myocardial infarction (excluding aspirin)
Coronary artery disease (excluding aspirin)
Undergoing coronary artery bypass surgery
Congestive heart failure (excluding low-dose aspirin)
Third trimester of pregnancy
Persons who have undergone gastric bypass surgery
Persons who have a history of allergic or allergic-type NSAID hypersensitivity reactions, e.g. aspirin-induced asthma

Adverse effects

The widespread use of NSAIDs has meant that the adverse effects of these drugs have become increasingly common. Use of NSAIDs increases the risk of a range of gastrointestinal (GI) problems, kidney disease and adverse cardiovascular events. As commonly used for post-operative pain, there is evidence of an increased risk of kidney complications.
Their use following gastrointestinal surgery remains controversial, given mixed evidence of an increased risk of leakage from any bowel anastomosis created. An estimated 10–20% of people taking NSAIDs experience indigestion. In the 1990s, high doses of prescription NSAIDs were associated with serious upper gastrointestinal adverse events, including bleeding. NSAIDs, like all medications, may interact with other medications. For example, concurrent use of NSAIDs and quinolone antibiotics may increase the risk of quinolones' adverse central nervous system effects, including seizure. There is debate over the benefits and risks of NSAIDs for treating chronic musculoskeletal pain. Each drug has a benefit-risk profile, and the risk of no treatment should be balanced against the competing potential risks of the various therapies. For people over 65 years of age, the balance between the benefits of pain-relief medications such as NSAIDs and the potential for adverse effects has not been well determined. In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid, and recommended avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy.

Combinational risk

If a COX-2 inhibitor is taken, a traditional NSAID (prescription or over-the-counter) should not be taken at the same time. In addition, people on daily aspirin therapy (e.g., for reducing cardiovascular risk) must be careful if they also use other NSAIDs, as these may inhibit the cardioprotective effects of aspirin. Rofecoxib (Vioxx) was shown to produce significantly fewer gastrointestinal adverse drug reactions (ADRs) compared with naproxen. The study, the VIGOR trial, raised the issue of the cardiovascular safety of the coxibs (COX-2 inhibitors).
A statistically significant increase in the incidence of myocardial infarctions was observed in patients on rofecoxib. Further data, from the APPROVe trial, showed a statistically significant relative risk of cardiovascular events of 1.97 versus placebo, which led to the worldwide withdrawal of rofecoxib in October 2004. Use of methotrexate together with NSAIDs in rheumatoid arthritis is safe, if adequate monitoring is done.

Cardiovascular

NSAIDs, aside from aspirin, increase the risk of myocardial infarction and stroke. This risk may begin within the first week of use. They are not recommended in those who have had a previous heart attack, as they increase the risk of death or recurrent MI. Evidence indicates that naproxen may be the least harmful of these. NSAIDs aside from (low-dose) aspirin are associated with a doubled risk of heart failure in people without a history of cardiac disease. In people with such a history, use of NSAIDs (aside from low-dose aspirin) was associated with a more than 10-fold increase in heart failure. If this link is proven causal, researchers estimate that NSAIDs would be responsible for up to 20 percent of hospital admissions for congestive heart failure. In people with heart failure, NSAIDs increase mortality risk (hazard ratio) by approximately 1.2–1.3 for naproxen and ibuprofen, 1.7 for rofecoxib and celecoxib, and 2.1 for diclofenac. On 9 July 2015, the Food and Drug Administration (FDA) toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAIDs) other than aspirin.

Possible erectile dysfunction risk

A 2005 Finnish survey study found an association between long-term (over 3 months) use of NSAIDs and erectile dysfunction. A 2011 publication in The Journal of Urology received widespread publicity. According to the study, men who used NSAIDs regularly were at significantly increased risk of erectile dysfunction.
A link between NSAID use and erectile dysfunction still existed after controlling for several conditions. However, the study was observational and not controlled, with a low original participation rate, potential participation bias, and other uncontrolled factors. The authors warned against drawing any conclusion regarding cause.

Gastrointestinal

The main adverse drug reactions (ADRs) associated with NSAID use relate to direct and indirect irritation of the gastrointestinal (GI) tract. NSAIDs cause a dual assault on the GI tract: the acidic molecules directly irritate the gastric mucosa, and inhibition of COX-1 and COX-2 reduces the levels of protective prostaglandins. Inhibition of prostaglandin synthesis in the GI tract causes increased gastric acid secretion, diminished bicarbonate secretion, diminished mucus secretion and diminished trophic effects on the epithelial mucosa. Common gastrointestinal side effects include:
Nausea or vomiting
Indigestion
Gastric ulceration or bleeding
Diarrhea

Clinical NSAID ulcers are related to the systemic effects of NSAID administration. Such damage occurs irrespective of the route of administration of the NSAID (e.g., oral, rectal, or parenteral) and can occur even in people who have achlorhydria. Ulceration risk increases with therapy duration and with higher doses. To minimize GI side effects, it is prudent to use the lowest effective dose for the shortest period of time—a practice that studies show is often not followed. Over 50% of patients who take NSAIDs sustain some mucosal damage to their small intestine. The risk and rate of gastric adverse effects differ depending on the type of NSAID a person is taking. Indomethacin, ketoprofen, and piroxicam use appear to lead to the highest rates of gastric adverse effects, while ibuprofen (at lower doses) and diclofenac appear to have lower rates.
Certain NSAIDs, such as aspirin, have been marketed in enteric-coated formulations that manufacturers claim reduce the incidence of gastrointestinal ADRs. Similarly, some believe that rectal formulations may reduce gastrointestinal ADRs. However, consistent with the systemic mechanism of such ADRs, and in clinical practice, these formulations have not demonstrated a reduced risk of GI ulceration. Numerous "gastro-protective" drugs have been developed with the goal of preventing gastrointestinal toxicity in people who need to take NSAIDs on a regular basis. Gastric adverse effects may be reduced by taking medications that suppress acid production, such as proton pump inhibitors (e.g. omeprazole and esomeprazole), or by treatment with a drug that mimics prostaglandin in order to restore the lining of the GI tract (e.g. the prostaglandin analog misoprostol). Diarrhea is a common side effect of misoprostol; however, higher doses of misoprostol have been shown to reduce the risk of a person having a complication related to a gastric ulcer while taking NSAIDs. While these techniques may be effective, they are expensive for maintenance therapy. Hydrogen sulfide-NSAID hybrids prevent the gastric ulceration/bleeding associated with taking NSAIDs alone; hydrogen sulfide is known to have a protective effect on the cardiovascular and gastrointestinal systems.

Inflammatory bowel disease

NSAIDs should be used with caution in individuals with inflammatory bowel disease (e.g., Crohn's disease or ulcerative colitis) due to their tendency to cause gastric bleeding and form ulceration in the gastric lining.

Renal

NSAIDs are also associated with a fairly high incidence of adverse drug reactions (ADRs) affecting the kidney, and over time can lead to chronic kidney disease. The mechanism of these kidney ADRs is due to changes in kidney blood flow. Prostaglandins normally dilate the afferent arterioles of the glomeruli.
This helps maintain normal glomerular perfusion and glomerular filtration rate (GFR), an indicator of kidney function. This is particularly important in kidney failure, where the kidney is trying to maintain renal perfusion pressure by elevated angiotensin II levels. At these elevated levels, angiotensin II also constricts the afferent arteriole into the glomerulus in addition to the efferent arteriole it normally constricts. Since NSAIDs block this prostaglandin-mediated dilation of the afferent arteriole, particularly in kidney failure, NSAIDs cause unopposed constriction of the afferent arteriole and decreased RPF (renal plasma flow) and GFR. Common ADRs associated with altered kidney function include:
Sodium and fluid retention
Hypertension (high blood pressure)

These agents may also cause kidney impairment, especially in combination with other nephrotoxic agents. Kidney failure is especially a risk if the patient is also concomitantly taking an ACE inhibitor (which removes angiotensin II's vasoconstriction of the efferent arteriole) and a diuretic (which drops plasma volume, and thereby RPF)—the so-called "triple whammy" effect. In rarer instances NSAIDs may also cause more severe kidney conditions:
Interstitial nephritis
Nephrotic syndrome
Acute kidney injury
Acute tubular necrosis
Renal papillary necrosis

NSAIDs in combination with excessive use of phenacetin or paracetamol (acetaminophen) may lead to analgesic nephropathy.

Photosensitivity

Photosensitivity is a commonly overlooked adverse effect of many of the NSAIDs. The 2-arylpropionic acids are the most likely to produce photosensitivity reactions, but other NSAIDs have also been implicated, including piroxicam, diclofenac, and benzydamine. Benoxaprofen, since withdrawn due to its liver toxicity, was the most photoactive NSAID observed. The mechanism of photosensitivity, responsible for the high photoactivity of the 2-arylpropionic acids, is the ready decarboxylation of the carboxylic acid moiety.
The specific absorbance characteristics of the different chromophoric 2-aryl substituents affect the decarboxylation mechanism.

During pregnancy

While NSAIDs as
a class are not direct teratogens, use of NSAIDs in late pregnancy can cause premature closure of the fetal ductus arteriosus and kidney ADRs in the fetus. Thus, NSAIDs are not recommended during the third trimester of pregnancy because of the increased risk of premature constriction of the ductus arteriosus. Additionally, they are linked with premature birth and miscarriage. Aspirin, however, is used together with heparin in pregnant women with antiphospholipid syndrome. Additionally, indomethacin can be used in pregnancy to treat polyhydramnios by reducing fetal urine production via inhibiting fetal renal blood flow. In contrast, paracetamol (acetaminophen) is regarded as being safe and well tolerated during pregnancy, but Leffers et al. released a study in 2010 indicating that there may be associated male infertility in the unborn. Doses should be taken as prescribed, due to risk of liver toxicity with overdoses.
In France, the country's health agency contraindicates the use of NSAIDs, including aspirin, after the sixth month of pregnancy. In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid, and recommended avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy.

Allergy and allergy-like hypersensitivity reactions

A variety of allergic or allergic-like NSAID hypersensitivity reactions follow the ingestion of NSAIDs. These hypersensitivity reactions differ from the other adverse reactions listed here, which are toxicity reactions, i.e. unwanted reactions that result from the pharmacological action of a drug, are dose-related, and can occur in any treated individual; hypersensitivity reactions are idiosyncratic reactions to a drug. Some NSAID hypersensitivity reactions are truly allergic in origin: 1) repetitive IgE-mediated urticarial skin eruptions, angioedema, and anaphylaxis occurring immediately to hours after ingesting one structural type of NSAID but not after ingesting structurally unrelated NSAIDs; 2) comparatively mild to moderately severe T cell-mediated delayed-onset (usually more than 24 hours) skin reactions such as maculopapular rash, fixed drug eruptions, photosensitivity reactions, delayed urticaria, and contact dermatitis; or 3) far more severe and potentially life-threatening T cell-mediated delayed systemic reactions such as the DRESS syndrome, acute generalized exanthematous pustulosis, the Stevens–Johnson syndrome, and toxic epidermal necrolysis. Other NSAID hypersensitivity reactions produce allergy-like symptoms but do not involve true allergic mechanisms; rather, they appear to be due to the ability of NSAIDs to alter the metabolism of arachidonic acid in favor of forming metabolites that promote allergic symptoms.
Afflicted individuals may be abnormally sensitive to these provocative metabolites or overproduce them, and typically are susceptible to a wide range of structurally dissimilar NSAIDs, particularly those that inhibit COX-1. Symptoms, which develop immediately to hours after ingesting any of various NSAIDs that inhibit COX-1, are: 1) exacerbations of asthma and rhinitis symptoms (see aspirin-induced asthma) in individuals with a history of asthma or rhinitis, and 2) exacerbation or first-time development of wheals or angioedema in individuals with or without a history of chronic urticarial lesions or angioedema.

Possible effects on bone and soft tissue healing

It has been hypothesized that NSAIDs may delay healing from bone and soft-tissue injuries by inhibiting inflammation. On the other hand, it has also been hypothesized that NSAIDs might speed recovery from soft tissue injuries by preventing inflammatory processes from damaging adjacent, non-injured muscles. There is moderate evidence that they delay bone healing. Their overall effect on soft-tissue healing is unclear.

Ototoxicity

Long-term use of NSAID analgesics and paracetamol is associated with an increased risk of hearing loss.

Other

The use of NSAIDs for analgesia following gastrointestinal surgery remains controversial, given mixed evidence of an increased risk of leakage from any bowel anastomosis created. This risk may vary according to the class of NSAID prescribed. Common adverse drug reactions (ADRs), other than those listed above, include raised liver enzymes, headache, and dizziness. Uncommon ADRs include an abnormally high level of potassium in the blood, confusion, spasm of the airways, and rash. Ibuprofen may also rarely cause irritable bowel syndrome symptoms. NSAIDs are also implicated in some cases of Stevens–Johnson syndrome. Most NSAIDs penetrate poorly into the central nervous system (CNS). However, the COX enzymes are expressed constitutively in some areas of the CNS, meaning that
many problems, including integer addition, multiplication and division; matrix multiplication, determinant, inverse, and rank; polynomial GCD, by a reduction to linear algebra using the Sylvester matrix; and finding a maximal matching. Often algorithms for those problems had to be separately invented and could not be naïvely adapted from well-known algorithms – Gaussian elimination and the Euclidean algorithm rely on operations performed in sequence. One might contrast the ripple carry adder with a carry-lookahead adder. Example An example of a problem in NC^1 is the parity check on a bit string. The problem consists in counting the number of 1s in a string made of 1s and 0s. A simple solution consists in summing all the string's bits. Since addition is associative, x_1 + x_2 + ... + x_n = (x_1 + x_2) + ... + (x_{n-1} + x_n). Recursively applying this property, it is possible to build a binary tree of depth log(n) in which every sum between two bits x_i and x_j is expressible by means of basic logical operators, e.g. through the boolean expression (x_i AND NOT x_j) OR (NOT x_i AND x_j). The NC hierarchy NC^i is the class of decision problems decidable by uniform boolean circuits with a polynomial number of gates of at most two inputs and depth O(log^i n), or the class of decision problems solvable in time O(log^i n) on a parallel computer with a polynomial number of processors. Clearly, we have NC^1 ⊆ NC^2 ⊆ ... ⊆ NC^i ⊆ ... ⊆ NC, which forms the NC hierarchy. We can relate the NC classes to the space classes L and NL and to the AC classes: NC^1 ⊆ L ⊆ NL ⊆ AC^1 ⊆ NC^2. The AC classes are defined similarly to the NC classes, but with gates having unbounded fan-in. For each i, we have NC^i ⊆ AC^i ⊆ NC^{i+1}. As an immediate consequence of this, we have that NC = AC. It is known that both inclusions are strict for i = 0. Similarly, we have that NC is equivalent to the problems solvable on an alternating Turing machine restricted to at most two options at each step with O(log n) space and (log n)^O(1) alternations. Open problem: Is NC proper? One major open question in complexity theory is whether or not every containment in the NC hierarchy is proper.
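The binary-tree summation from the parity example above can be sketched in Python. This is an illustrative sketch only: the function name and the list-of-bits representation are assumptions, and the pairwise XOR uses exactly the basic-logical-operator form described in the text.

```python
def parity_by_tree(bits):
    """Compute the parity of a bit string by pairwise reduction.

    Each pass combines adjacent bits, halving the list, so a string of
    length n needs about log2(n) passes - mirroring a log-depth circuit.
    """
    level = list(bits)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            # Sum of two bits mod 2 via basic logical operators:
            # (a AND NOT b) OR (NOT a AND b)
            nxt.append((a and not b) or (not a and b))
        if len(level) % 2:  # an odd leftover bit carries up a level
            nxt.append(level[-1])
        level = nxt
    return int(level[0])

print(parity_by_tree([1, 0, 1, 1]))  # 1 (three ones -> odd parity)
```

Each level of the loop corresponds to one layer of bounded fan-in gates, which is why the circuit depth is logarithmic rather than linear.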
It was observed by Papadimitriou that, if NC^i = NC^{i+1} for some i, then NC^i = NC^j for all j ≥ i, and as a result, NC^i = NC. This observation is known as NC-hierarchy collapse because even a single equality in the chain of containments implies that the entire NC hierarchy "collapses" down to some level i.
Thus, there are two possibilities: (1) every containment in the NC hierarchy is strict, or (2) NC^i = NC^{i+1} for some i and the hierarchy collapses to that level. It is widely believed that (1) is the case, although no proof as to the truth of either statement has yet been discovered. NC0 The special class NC^0 operates only on a constant length of input bits. It is therefore described as the class of functions definable by uniform boolean circuits with constant depth and bounded fan-in. Barrington's theorem A branching program with n variables of width k and length m consists of a sequence of m instructions. Each of the instructions is a tuple (i, p, q) where i is the index of the variable to check (1 ≤ i ≤ n), and p and q are functions from {1, 2, ..., k} to {1, 2, ..., k}. Numbers 1, 2, ..., k are called the states of the branching program. The program initially starts in state 1, and each instruction (i, p, q) changes the state from x to p(x) or q(x), depending on whether the ith variable is 0 or 1. The function mapping an input to the final state of the program is called the yield of the program (more precisely, the yield on an input is the function mapping any initial state to the corresponding final state). The program accepts a set A of variable values when there is some set of functions F such that a variable sequence is in A precisely when its yield is in F. A family of branching programs consists of a branching program with n variables for each n. It accepts a language when the n-variable program accepts the language restricted to length-n inputs. It is easy to show that every language L on {0,1} can be recognized by a family of branching programs of width 5 and exponential length, or by a family of exponential width and linear length. Every regular language on {0,1} can be recognized by a
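A minimal simulation of the branching-program model defined above, assuming the stated convention that instruction (i, p, q) applies p when the ith variable is 0 and q when it is 1. The helper names and the width-2 example program are hypothetical, not taken from the source:

```python
def run_branching_program(program, x):
    """Run a branching program on input bits x.

    program: list of (i, p, q) with 1-based variable index i and
    p, q dicts mapping states {1..k} to states {1..k}.
    Returns the final state reached from the initial state 1.
    """
    state = 1
    for i, p, q in program:
        # Apply p if variable i is 0, q if it is 1.
        state = p[state] if x[i - 1] == 0 else q[state]
    return state

# A width-2, length-1 program that ends in state 2 iff x1 = 1:
ident = {1: 1, 2: 2}   # identity permutation on states
swap = {1: 2, 2: 1}    # transposition of the two states
prog = [(1, ident, swap)]
print(run_branching_program(prog, [1]))  # 2
print(run_branching_program(prog, [0]))  # 1
```

Taking F = {functions sending state 1 to state 2} makes this tiny program accept exactly the inputs with x1 = 1, matching the acceptance definition above.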
to seaweeds, including hijiki. One of the oldest descriptions of nori is dated to around the 8th century. In the Taihō Code enacted in 701, nori was already included as a form of taxation. Local people have been described as drying nori in the Hitachi Province fudoki (721), and harvesting of nori was mentioned in the Izumo Province fudoki (713–733), showing that nori was used as food from ancient times. In Utsubo Monogatari, written around 987, nori was recognized as a common food. Nori had been consumed in paste form until the sheet form was invented in Asakusa, Edo (contemporary Tokyo), around 1750 in the Edo period, through the method of Japanese paper-making. The word "nori" first appeared in an English-language publication in C. P. Thunberg's Trav., published in 1796. It was used in conjunction as "Awa nori", probably referring to what is now called aonori. The Japanese nori industry was in decline after WWII, when Japan was in need of all the food that could be produced. The decline was due to a lack of understanding of nori's three-stage life cycle, such that local people did not understand why traditional cultivation methods were not effective. The industry was rescued by knowledge deriving from the work of British phycologist Kathleen Mary Drew-Baker, who had been researching the organism Porphyra umbilicalis, which grew in the seas around Wales and was harvested for food (bara lafwr or bara lawr), as in Japan. Her work was discovered by Japanese scientists who applied it to artificial methods of seeding and growing the nori, rescuing the industry. Kathleen Baker was hailed as the "Mother of the Sea" in Japan and a statue was erected in her memory; she is still revered as the savior of the Japanese nori industry. In the 21st century, the Japanese nori industry faces a new decline due to increased competition from seaweed producers in China and Korea and domestic sales tax hikes.
The word nori started to be used widely in the United States, and the product (imported in dry form from Japan) became widely available at natural food stores and Asian-American grocery stores
in the 1960s due to the macrobiotic movement and in the 1970s with the increase of sushi bars and Japanese restaurants. Production Production and processing of nori is an advanced form of agriculture. The biology of Pyropia, although complicated, is well understood, and this knowledge is used to control the production process. Farming takes place in the sea
branch of philosophical ethics that investigates the questions that arise regarding how one ought to act, in a moral sense. Normative ethics is distinct from meta-ethics in that the former examines standards for the rightness and wrongness of actions, whereas the latter studies the meaning of moral language and the metaphysics of moral facts. Likewise, normative ethics is distinct from applied ethics in that the former is more concerned with 'who ought one be' rather than the ethics of a specific issue (e.g. if, or when, abortion is acceptable). Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people's moral beliefs. In this context normative ethics is sometimes called prescriptive, as opposed to descriptive ethics. However, on certain versions of the meta-ethical view of moral realism, moral facts are both descriptive and prescriptive at the same time. An adequate justification for a group of principles needs an explanation of those principles. It must be an explanation of why precisely these goals, prohibitions, and so on, should be given weight, and not others. Unless a coherent explanation of the principles can be given (or it can be demonstrated that they require no additional justification), they cannot be considered justified, and there may be reason to reject them. Therefore, there is a requirement for explanation in moral theory. Most traditional moral theories rest on principles that determine whether an action is right or wrong. Classical theories in this vein include utilitarianism, Kantianism, and some forms of contractarianism. These theories mainly offered the use of overarching moral principles to resolve difficult moral decisions. Normative ethical theories There are disagreements about what precisely gives an action, rule, or disposition its ethical force.
There are three competing views on how moral questions should be answered, along with hybrid positions that combine some elements of each: virtue ethics, deontological ethics, and consequentialism. The first focuses on the character of those who are acting. In contrast, both deontological ethics and consequentialism focus on the status of the action, rule, or disposition itself, and come in various forms. Virtue ethics Virtue ethics, advocated by Aristotle with some aspects being supported by Saint Thomas Aquinas, focuses on the inherent
philosophers as G. E. M. Anscombe, Philippa Foot, Alasdair MacIntyre, Mortimer J. Adler, Jacques Maritain, Yves Simon, and Rosalind Hursthouse. Deontological ethics Deontology argues that decisions should be made considering the factors of one's duties and one's rights. Some deontological theories include: Immanuel Kant's categorical imperative, which roots morality in humanity's rational capacity and asserts certain inviolable moral laws. The contractualism of John Rawls, which holds that the moral acts are those that we would all agree to if we were unbiased, behind a "veil of ignorance." Natural rights theories, such as those of John Locke or Robert Nozick, which hold that human beings have absolute, natural rights. Consequentialism Consequentialism argues that the morality of an action is contingent on the action's outcome or result. Consequentialist theories, varying in what they consider to be valuable (i.e., axiology), include: Utilitarianism holds that an action is right if it leads to the most happiness for the greatest number of people. Prior to the coining of the term "consequentialism" by G. E. M. Anscombe in 1958 and the adoption of that term in the literature that followed, utilitarianism was the generic term for consequentialism, referring to all theories that promoted maximizing any form of utility, not just those that promoted maximizing happiness. State consequentialism, or Mohist consequentialism, holds that an action is right if it leads to state welfare, through order, material wealth, and population growth. Egoism, the belief that the moral person is the self-interested person, holds that an action is right if it maximizes good for the self. Situational ethics emphasizes the particular context of an act when evaluating it ethically. Specifically, Christian forms of situational ethics hold that the correct action is the one that creates the most loving result, and that love should always be people's goal.
Intellectualism dictates that the best action is the one that best fosters and promotes knowledge. Welfarism, which argues that the best action is the one that most increases economic well-being or welfare. Preference utilitarianism, which holds that the best action is the one that leads to the most overall preference satisfaction. Other theories Ethics of care, or relational ethics, founded by feminist theorists, notably Carol Gilligan, argues that morality arises out of the experiences of empathy and compassion. It emphasizes the importance of interdependence and relationships in achieving ethical goals. Pragmatic ethics is difficult to classify fully within any of the four preceding conceptions. This view argues that moral correctness evolves similarly to other kinds of knowledge—socially over the course of many lifetimes—and that norms, principles, and moral criteria are likely to be improved as a result of inquiry. Charles Sanders Peirce, William James, and John Dewey are known as the founders of pragmatism; a more recent proponent of pragmatic ethics was James D. Wallace. Role ethics is based on the concept of family roles. Morality as a binding force It can be unclear what it means to say that a person "ought to do X because it is moral, whether they like it or not." Morality is
does not boost personal gains, as angry negotiators do not succeed. Moreover, negative emotions lead to acceptance of settlements that do not have positive utility but rather negative utility. However, expression of negative emotions during negotiation can sometimes be beneficial: legitimately expressed anger can be an effective way to show one's commitment, sincerity, and needs. Moreover, although NA reduces gains in integrative tasks, it is a better strategy than PA in distributive tasks (such as zero-sum ones). In his work on negative affect arousal and white noise, Seidner found support for the existence of a negative affect arousal mechanism through observations regarding the devaluation of speakers from other ethnic origins. Negotiation may be negatively affected, in turn, by submerged hostility toward an ethnic or gender group. Conditions for emotion affect Research indicates that a negotiator's emotions do not necessarily affect the negotiation process. Albarracín et al. (2003) suggested that there are two conditions for emotional effect, both related to ability (the presence of environmental or cognitive disturbances) and motivation: Identification of the affect: requires high motivation, high ability or both. Determination that the affect is relevant and important for the judgment: requires that either the motivation, the ability or both are low. According to this model, emotions affect negotiations only when one is high and the other is low. When both ability and motivation are low, the affect is not identified, and when both are high the affect is identified but discounted as irrelevant to judgment. A possible implication of this model is, for example, that the positive effects PA has on negotiations (as described above) are seen only when either motivation or ability is low. Effect of partner's emotions Most studies on emotion in negotiations focus on the effect of the negotiator's own emotions on the process.
However, what the other party feels might be just as important, as group emotions are known to affect processes both at the group and the personal levels. When it comes to negotiations, trust in the other party is a necessary condition for that party's emotions to have an effect, and visibility enhances the effect. Emotions contribute to negotiation processes by signaling what one feels and thinks, and can thus prevent the other party from engaging in destructive behaviors and indicate what steps should be taken next: PA signals to keep on in the same way, while NA indicates that mental or behavioral adjustments are needed. A partner's emotions can have two basic effects on a negotiator's emotions and behavior: mimetic/reciprocal or complementary. For example, disappointment or sadness might lead to compassion and more cooperation. In a study by Butt et al. (2005) that simulated real multi-phase negotiation, most people reacted to the partner's emotions in a reciprocal, rather than complementary, manner. Specific emotions were found to have different effects on the opponent's feelings and strategies chosen: Anger caused the opponents to place lower demands and to concede more in a zero-sum negotiation, but also to evaluate the negotiation less favorably. It provoked both dominating and yielding behaviors of the opponent. Pride led to more integrative and compromise strategies by the partner. Guilt or regret expressed by the negotiator led to a better impression of them by the opponent; however, it also led the opponent to place higher demands. On the other hand, personal guilt was related to more satisfaction with what one achieved. Worry or disappointment left a bad impression on the opponent, but led to relatively lower demands by the opponent. Dealing with emotions Make emotions explicit and validate - Taking a more proactive approach in discussing one's emotions can allow for a negotiation to focus on the problem itself, rather than any unexpressed feelings.
It is important to allow both parties to share their emotions. Allow time to let off steam - It is possible that one party may feel angry or frustrated at some point during the negotiation. Rather than try to avoid discussing those feelings, allow the individual to talk it out. Sitting and listening, without providing too much feedback on the substance itself, can offer enough support for the person to feel better. Once the grievances are released, it may become easier to negotiate. Symbolic gestures - Consider that an apology, or any other simple act, may be one of the most effective and low-cost means to reduce any negative emotions between parties. Problems with laboratory studies Negotiation is a rather complex interaction. Capturing all its complexity is a very difficult task, let alone isolating and controlling only certain aspects of it. For this reason most negotiation studies are done under laboratory conditions and focus only on some aspects. Although lab studies have their advantages, they do have major drawbacks when studying emotions: Emotions in lab studies are usually manipulated and are therefore relatively 'cold' (not intense). Although those 'cold' emotions might be enough to show effects, they are qualitatively different from the 'hot' emotions often experienced during negotiations. In real life, people select which negotiations to enter, which affects emotional commitment, motivation and interests - but this is not the case in lab studies. Lab studies tend to focus on relatively few well-defined emotions. Real-life scenarios provoke a much wider range of emotions. Coding the emotions has a double catch: if done by a third party, some emotions might not be detected, as the negotiator sublimates them for strategic reasons. Self-report measures might overcome this, but they are usually filled in only before or after the process, and if filled in during the process, might interfere with it.
Neil Rackham is a rare researcher who studied and compared real-world labor-management negotiation case studies to identify the variables distinguishing the most sustainable and satisfying negotiations. He found that the best negotiators asked more questions than poor negotiators, listened actively, and zealously sought common ground as well as creative solutions that were mutually beneficial. Group composition Multi-party While negotiations involving more than two parties are less often researched, some results from two-party negotiations still apply with more than two parties. One such result is that in negotiations it is common to see language similarity arise between the two negotiating parties. In three-party negotiations, language similarity still arose, and results were particularly efficient when the party with the most to gain from the negotiation adopted language similarities from the other parties. Team Due to globalization and growing business trends, negotiation in the form of teams is becoming widely adopted. Teams can effectively collaborate to break down a complex negotiation. There is more knowledge and wisdom dispersed in a team than in a single mind. Writing, listening, and talking are specific roles team members must satisfy. The capacity base of a team reduces the number of blunders and increases familiarity in a negotiation. However, unless a team can appropriately utilize the full capacity of its potential, effectiveness can suffer. One factor in the effectiveness of team negotiation is a problem that occurs through solidarity behavior. Solidarity behavior occurs when one team member reduces his or her own utility (benefit) in order to increase the benefits of other team members. This behavior is likely to occur when interest conflicts rise. When the utility/needs of the negotiation opponent do not align with every team member's interests, team members begin to make concessions and balance the benefits gained among the team.
Intuitively, this may feel like a cooperative approach. However, though a team may aim to negotiate in a cooperative or collaborative nature, the outcome may be less successful than is possible, especially when integration is possible. Integrative potential is possible when different negotiation issues are of different importance to each team member. Integrative potential is often missed due to the lack of awareness of each member's interests and preferences. Ultimately, this leads to a poorer negotiation result. Thus, a team can perform more effectively if each member discloses his or her preferences prior to the negotiation. This step will allow the team to recognize and organize the team's joint priorities, which they can take into consideration when engaging with the opposing negotiation party. Because a team is more likely to discuss shared information and common interests, teams must make an active effort to foster and incorporate unique viewpoints from experts from different fields. Research by Daniel Thiemann, which largely focused on computer-supported collaborative tasks, found that the Preference Awareness method is an effective tool for fostering the knowledge about joint priorities and further helps the team judge which negotiation issues were of highest importance. Women Many of the strategies in negotiation vary across genders, and this leads to variations in outcomes for different genders, often with women experiencing less success in negotiations as a consequence. This is due to a number of factors, including that it has been shown that it is more difficult for women to be self-advocating when they are negotiating. Many of the implications of these findings have strong financial impacts in addition to the social backlash faced by self-advocating women in negotiations, as compared to other advocating women, self-advocating men, and other advocating men. 
Research in this area has been studied across platforms, in addition to more specific areas like women as physician assistants. The backlash associated with this type of behavior is attributed to the fact that being self-advocating is considered masculine, whereas the alternative, being accommodating, is considered more feminine. Males, however, do not appear to face any type of backlash for not being self-advocating. This research has been supported by multiple studies, including one which evaluated candidates participating in a negotiation regarding compensation. This study showed that women who initiated negotiations were evaluated more poorly than men who initiated negotiations. In another variation of this particular setup, men and women evaluated videos of men and women either accepting a compensation package or initiating negotiations. Men evaluated women more poorly for initiating negotiations, while women evaluated both men and women more poorly for initiating negotiations. In this particular experiment, women were less likely to initiate a negotiation with a male, citing nervousness, but there was no variation when the negotiation was initiated with another female. Research also supports the notion that the way individuals respond in a negotiation varies depending on the gender of the opposite party. In all-male groups, the use of deception showed no variation with the level of trust between negotiating parties; however, in mixed-sex groups there was an increase in deceptive tactics when it was perceived that the opposite party was using an accommodating strategy. In all-female groups, there were many shifts in when individuals did and did not employ deception in their negotiation tactics. Academic negotiation The academic world contains a unique management system, wherein faculty members, some of whom have tenure, reside in academic units (e.g. departments) and are overseen by chairs, or heads.
These chairs/heads are in turn supervised by deans of the college where their academic unit resides. Negotiation is an area where faculty, chairs/heads and their deans have little preparation; their doctoral degrees are typically in a highly specialized area according to their academic expertise. However, the academic environment frequently presents situations where negotiation takes place. For example, many faculty are hired with an expectation that they will conduct research and publish scholarly works. For these faculty, where their research requires equipment, space, and/or funding, negotiation of a "start-up" package is critical for their success and future promotion. Also, department chairs often find themselves in situations, typically involving resource redistribution, where they must negotiate with their dean on behalf of their unit. And deans oversee colleges where they must optimize limited resources, such as research space or operating funds, while at the same time creating an environment that fosters student success, research accomplishments and more. Integrative negotiation is the type predominantly found in academic negotiation – where trust and long-term relationships between personnel are valued. Techniques found to be particularly useful in academic settings include: (1) doing your homework – grounding your request in facts; (2) knowing your value; (3) listening actively and acknowledging what is being said; (4) putting yourself in their shoes; (5) asking – negotiation begins with an ask; (6) not committing immediately; (7) managing emotion; and (8) keeping in mind the principle of a "wise agreement", with its associated emphasis on meeting the interests of both parties to the extent possible as a key working point. The articles by Callahan, et al. and Amekudzi-Kennedy, et al. contain several case studies of academic negotiations.
Etymology The word "negotiation" originated in the early 15th century from the Old French negociacion, from Latin negotiatio "business, trade, traffic", from negotium, from neg- "not" and otium "leisure". By the late 1570s negotiation had the definition "to communicate in search of mutual agreement". With this new meaning, it marked a shift from "doing business" to "bargaining about" business. See also Alternative dispute resolution Alternating offers protocol Collaborative software Collective action Conciliation Conflict resolution research Consistency (negotiation) Contract Cross-cultural Cross-cultural differences in decision-making Delaying tactic Diplomacy Dispute resolution Expert determination Flipism Game theory Impasse International relations Leadership Multilateralism Nash equilibrium Principled negotiation Prisoner's dilemma Program on Negotiation References Further reading Camp, Jim. (2007). No, The Only Negotiating System You Need For Work Or Home. Crown Business. New York. Movius, H. and Susskind, L. E. (2009) Built to Win: Creating a World Class Negotiating Organization. Cambridge, MA: Harvard Business Press. Roger Dawson, Secrets of Power Negotiating - Inside Secrets from a Master Negotiator. Career Press, 1999. Davérède, Alberto L. "Negotiations, Secret", Max Planck Encyclopedia of Public International Law Ronald M. Shapiro and Mark A. Jankowski, The Power of Nice: How to Negotiate So Everyone Wins - Especially You!, John Wiley & Sons, Inc., 1998, Roger Fisher and Daniel Shapiro, Beyond Reason: Using Emotions as You Negotiate, Viking/Penguin, 2005. Douglas Stone, Bruce Patton, and Sheila Heen, foreword by Roger Fisher, Difficult Conversations: How to Discuss What Matters Most, Penguin, 1999, Catherine Morris, ed. Negotiation in Conflict Transformation and Peacebuilding: A Selected Bibliography. Victoria, Canada: Peacemakers Trust.
Howard Raiffa, The Art and Science of Negotiation, Belknap Press 1982, David Churchman, "Negotiation Tactics" University Press of America, Inc. 1993 William Ury, Getting Past No: Negotiating Your Way from Confrontation to Cooperation, revised second edition, Bantam, 1993, trade paperback, ; 1st edition under the title, Getting Past No: Negotiating with Difficult People, Bantam, 1991, hardcover, 161 pages, William Ury, Roger Fisher and Bruce Patton, Getting to Yes: Negotiating Agreement Without Giving in, Revised 2nd edition, Penguin USA, 1991, trade paperback, ; Houghton Mifflin, 1992, hardcover,
risk alienating other important potential supporters, while avoiding any unexpected new policies that could also limit the size of their growing coalition. Bad faith When a party pretends to negotiate but secretly has no intention of compromising, the party is considered to be negotiating in bad faith. Bad faith is a concept in negotiation theory whereby parties pretend to reason to reach a settlement but have no intention of doing so; for example, one political party may pretend to negotiate, with no intention to compromise, for political effect. Bad faith negotiation is often used in political science and political psychology to refer to negotiating strategies in which there is no real intention to reach compromise, or to a model of information processing in which a state is presumed implacably hostile and contra-indicators of this are ignored, dismissed as propaganda ploys or signs of weakness. An example is John Foster Dulles' position regarding the Soviet Union. Negotiation Pie The total of advantages and disadvantages to be distributed in a negotiation is illustrated with the term negotiation pie. The course of the negotiation can lead to an increase, shrinking, or stagnation of these values. If the negotiating parties are able to expand the total pie, a win-win situation is possible, assuming that both parties profit from the expansion of the pie. In practice, however, this maximisation approach is often impeded by the so-called small pie bias, i.e. the psychological underestimation of the negotiation pie's size. Likewise, the possibility of increasing the pie may be underestimated due to the so-called incompatibility bias. Contrary to enlarging the pie, the pie may also shrink during negotiations, e.g. due to (excessive) negotiation costs. In litigation, a negotiation pie is shared when parties settle outside the court.
It is possible to quantify the conditions under which parties will agree to settle, and how legal expenses and the absolute coefficient of risk aversion affect the size of the pie and the decision to settle outside the court. Strategies There are many different ways to categorize the essential elements of negotiation. One view of negotiation involves three basic elements: process, behavior, and substance. The process refers to how the parties negotiate: the context of the negotiations, the parties to the negotiations, the tactics used by the parties, and the sequence and stages in which all of these play out. Behavior refers to the relationships among these parties, the communication between them, and the styles they adopt. The substance refers to what the parties negotiate over: the agenda, the issues (positions and – more helpfully – interests), the options, and the agreement(s) reached at the end. Another view of negotiation comprises four elements: strategy, process, tools, and tactics. Strategy comprises the top-level goals, typically including the relationship and the final outcome. Processes and tools include the steps to follow and roles to take in preparing for and negotiating with the other parties. Tactics include more detailed statements and actions, and responses to others' statements and actions. Some add persuasion and influence to this list, asserting that these have become integral to modern-day negotiation success and so should not be omitted. Strategic approaches to concession-making include considering the optimum time to make a concession; making concessions in installments rather than all at once; ensuring that the opponent knows a concession has been made, rather than a position already outlined merely being re-expressed; and making the opponent aware of the cost incurred in making the concession, especially where the other party is generally less aware of the nature of the business or the product being negotiated.
Employing an advocate A skilled negotiator may serve as an advocate for one party to the negotiation. The advocate attempts to obtain the most favorable outcomes possible for that party. In this process the negotiator attempts to determine the minimum outcome(s) the other party is (or parties are) willing to accept, then adjusts their demands accordingly. A "successful" negotiation in the advocacy approach is when the negotiator is able to obtain all or most of the outcomes their party desires, but without driving the other party to permanently break off negotiations. Skilled negotiators may use a variety of tactics ranging from negotiation hypnosis, to a straightforward presentation of demands or setting of preconditions, to more deceptive approaches such as cherry picking. Intimidation and salami tactics may also play a part in swaying the outcome of negotiations. Another negotiation tactic is bad guy/good guy. Bad guy/good guy is when one negotiator acts as a bad guy by using anger and threats. The other negotiator acts as a good guy by being considerate and understanding. The good guy blames the bad guy for all the difficulties while trying to solicit concessions and agreement from the opponent. BATNA The best alternative to a negotiated agreement, or BATNA, is the most advantageous alternative course of action a negotiator can take should the current negotiation end without reaching agreement. The quality of a BATNA has the potential to improve a party's negotiation outcome. Understanding one's BATNA can empower an individual and allow him or her to set higher goals when moving forward. Alternatives need to be actual and actionable to be of value. Negotiators may also consider the other party's BATNA and how it compares to what they are offering during the negotiation. Conflict styles Kenneth W. Thomas identified five styles or responses to negotiation. 
These five strategies have been frequently described in the literature and are based on the dual-concern model. The dual-concern model of conflict resolution is a perspective that assumes individuals' preferred method of dealing with conflict is based on two themes or dimensions: a concern for self (i.e., assertiveness), and a concern for others (i.e., empathy). Based on this model, individuals balance the concern for personal needs and interests with the needs and interests of others. The following five styles (competing, collaborating, compromising, avoiding, and accommodating) can be used based on individuals' preferences, depending on their pro-self or pro-social goals. These styles can change over time, and individuals can have strong dispositions towards several of them. Types of negotiators Three basic kinds of negotiators have been identified by researchers involved in The Harvard Negotiation Project: soft bargainers, hard bargainers, and principled bargainers. Soft These people see negotiation as too close to competition, so they choose a gentle style of bargaining. The offers they make are not in their best interests: they yield to others' demands, avoid confrontation, and maintain good relations with fellow negotiators. Their perception of others is one of friendship, and their goal is agreement. They do not separate the people from the problem, but are soft on both. They avoid contests of wills and insist on agreement, offering solutions, trusting others easily, and changing their opinions readily. Hard These people use contentious strategies to influence, utilizing phrases such as "this is my final offer" and "take it or leave it". They make threats, are distrustful of others, insist on their position, and apply pressure to negotiate. They see others as adversaries, and their ultimate goal is victory. Additionally, they search for one single answer and insist the other side agree to it.
They do not separate the people from the problem (as with soft bargainers), but they are hard on both the people involved and the problem. Principled Individuals who bargain this way seek integrative solutions, and do so by sidestepping commitment to specific positions. They focus on the problem rather than on the intentions, motives, and needs of the people involved. They separate the people from the problem, explore interests, avoid bottom lines, and reach results based on standards independent of personal will. They base their choices on objective criteria rather than on power, pressure, self-interest, or an arbitrary decisional procedure. These criteria may be drawn from moral standards, principles of fairness, professional standards, and tradition. Researchers from The Harvard Negotiation Project recommend that negotiators explore a number of alternatives to the problems they face in order to reach the best solution, but this is often not possible (as when dealing with an individual using soft or hard bargaining tactics) (Forsyth, 2010). Tactics are always an important part of the negotiating process. More often than not they are subtle, difficult to identify, and used for multiple purposes. Tactics are more frequently used in distributive negotiations and when the focus is on taking as much value off the table as possible. Many negotiation tactics exist; below are a few commonly used ones. Auction: The bidding process is designed to create competition. When multiple parties want the same thing, pit them against one another. When people know that they may lose out on something, they want it even more. Not only do they want the thing being bid on, they also want to win, just to win. Taking advantage of someone's competitive nature can drive up the price. Brinkmanship: One party aggressively pursues a set of terms to the point where the other negotiating party must either agree or walk away.
Brinkmanship is a type of "hard nut" approach to bargaining in which one party pushes the other party to the "brink" or edge of what that party is willing to accommodate. Successful brinkmanship convinces the other party that they have no choice but to accept the offer and that there is no acceptable alternative to the proposed agreement. Bogey: Negotiators use the bogey tactic to pretend that an issue of little or no importance is very important. Then, later in the negotiation, the issue can be traded for a major concession of actual importance. Calling a higher authority: To avoid overly far-reaching concessions, de-escalate, or overcome a deadlock, one party makes the further negotiation process dependent on a decision maker who is not present at the negotiation table. Chicken: Negotiators propose extreme measures, often bluffs, to force the other party to chicken out and give them what they want. This tactic can be dangerous when parties are unwilling to back down and go through with the extreme measure. Defense in depth: Several layers of decision-making authority are used to allow further concessions each time the agreement goes through a different level of authority. In other words, each time the offer goes to a decision maker, that decision maker asks to add another concession in order to close the deal. Deadlines: Give the other party a deadline, forcing them to make a decision. This method uses time to apply pressure to the other party. Deadlines given can be actual or artificial. Flinch: Flinching is showing a strong negative physical reaction to a proposal. Common examples of flinching are gasping for air or a visible expression of surprise or shock. The flinch can be done consciously or unconsciously. The flinch signals to the opposite party that you think the offer or proposal is absurd, in the hope that the other party will lower their aspirations. Seeing a physical reaction is more believable than hearing someone say, "I'm shocked".
Forgiveness Math or Generous Tit for Tat: Computer-simulated research identifies the optimal strategy as extending an olive branch, giving the opponent the opportunity to collaborate and create a win-win resolution. Because the worst negotiators do not even recognize their own self-interest, however, negotiators need to protect their own interests and be prepared for a lack of cooperation. Good Guy/Bad Guy: In the good guy/bad guy tactic (synonyms are good cop/bad cop or black hat/white hat), positive and unpleasant tasks are often divided between two negotiators on the same negotiation side, or unpleasant tasks or decisions are allocated to a (real or fictitious) outsider. The good guy supports the conclusion of the contract and emphasizes positive aspects of the negotiation (mutual interests). The bad guy criticizes negative aspects (opposing interests). The division of the two roles allows more consistent behavior and greater credibility for the individual negotiators. As the good guy promotes the contract, he or she can build trust with the other side. Highball/Lowball or Ambit claim: Depending on whether they are selling or buying, sellers or buyers use a ridiculously high or ridiculously low opening offer that is not achievable. The theory is that the extreme offer makes the other party reevaluate their own opening offer and move close to the resistance point (as far as you are willing to go to reach an agreement). Another advantage is that the party making the extreme demand appears more flexible when they concede toward a more reasonable outcome. A danger of this tactic is that the opposite party may conclude that negotiating is a waste of time. Generous Tit for Tat: Many negotiators adopt a macho, manipulative style that reflects the world's use and abuse of power, projecting a savvy they have never actually studied or practiced.
Research instead shows the benefits of a skillful, educated, collaborative style; see Forgiveness Math above. The Nibble: Also known as the salami tactic or quivering quill, nibbling is the demand for proportionally small concessions, not previously discussed, just before closing the deal. This method takes advantage of the other party's desire to close by adding "just one more thing". Snow Job: Negotiators overwhelm the other party with so much information that they have difficulty determining what information is important and what is a diversion. Negotiators may also use technical language or jargon to mask a simple answer to a question asked by a non-expert. Mirroring: When people get on well, the outcome of a negotiation is likely to be more positive. To create trust and rapport, a negotiator may mimic or mirror the opponent's behavior and repeat what they say. Mirroring refers to a person repeating the core content of what another person just said, or repeating a certain expression. It indicates attention to the subject of negotiation and acknowledges the other party's point or statement. Mirroring can help create trust and establish a relationship. Anchoring: Anchoring is the process of establishing a reference point first in order to guide the other person toward your suggested price. It is often employed at the beginning of a negotiation in order to influence the rest of it. As an example, say you want to sell a car for 50,000 dollars and a customer walks in wanting to buy one. If you say that you can sell the car for 65,000 dollars, their counteroffer will probably be 50,000 to 55,000 dollars. This also works in the opposite direction when buying. The idea is to narrow the other party's expectations down or up. To counter anchoring, point out that the other party is anchoring and insist on moving to an acceptable price.
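The generous tit-for-tat idea behind "Forgiveness Math" can be sketched as a small iterated prisoner's-dilemma simulation. The payoff values and the 10% forgiveness rate below are illustrative assumptions, not figures from the cited research:

```python
import random

# Iterated prisoner's dilemma payoffs for the row player (illustrative values):
# mutual cooperation 3, mutual defection 1, exploiting 5, being exploited 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def generous_tit_for_tat(opponent_last, generosity=0.1, rng=random):
    """Cooperate first; thereafter copy the opponent's last move,
    but forgive a defection with probability `generosity`."""
    if opponent_last is None or opponent_last == "C":
        return "C"
    return "C" if rng.random() < generosity else "D"

def play(rounds=200, seed=0):
    """Pit two generous tit-for-tat players against each other."""
    rng = random.Random(seed)
    a_last = b_last = None
    a_score = b_score = 0
    for _ in range(rounds):
        a = generous_tit_for_tat(b_last, rng=rng)
        b = generous_tit_for_tat(a_last, rng=rng)
        a_score += PAYOFF[(a, b)]
        b_score += PAYOFF[(b, a)]
        a_last, b_last = a, b
    return a_score, b_score
```

Two generous players lock into mutual cooperation (3 points per round each); against an unconditional defector, the occasional forgiveness costs little while offering a route back to cooperation, which is what breaks the retaliation spirals plain tit-for-tat can fall into after a single misstep.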
Nonverbal communication Communication is a key element of negotiation. Effective negotiation requires that participants effectively convey and interpret information. Participants in a negotiation communicate information not only verbally but non-verbally, through body language and gestures. By understanding how nonverbal communication works, a negotiator is better equipped to interpret the information other participants are leaking non-verbally while keeping secret those things that would inhibit his or her ability to negotiate. Examples Non-verbal "anchoring" In a negotiation, a person can gain the advantage by verbally expressing a position first. By anchoring one's position, one establishes the position from which the negotiation proceeds. In a like manner, one can "anchor" and gain advantage with nonverbal (body language) cues. Dominant physical position: By leaning back and whispering, one can effectively create a dominant physical position that can yield the upper hand in negotiations. Personal space: The person at the head of the table is the apparent symbol of power. Negotiators can negate this strategic advantage by positioning allies in the room to surround that individual. First impression: Begin the negotiation with positive gestures and enthusiasm. Look the person in the eye with sincerity. If you cannot maintain eye contact, the other person might think you are hiding something or that you are insincere. Give a solid handshake. Reading non-verbal communication Being able to read the non-verbal communication of another person can significantly aid in the communication process. By being aware of inconsistencies between a person's verbal and non-verbal communication and reconciling them, negotiators can come to better resolutions. Examples of incongruity in body language include: Nervous laugh: A laugh not matching the situation. This could be a sign of nervousness or discomfort.
When this happens, it may be good to probe with questions to discover the person's true feelings. Positive words but negative body language: If someone asks their negotiation partner whether they are annoyed, and the person pounds their fist and responds sharply, "What makes you think anything is bothering me?", the words and the body language are in conflict. Hands raised in a clenched position: A person raising his or her hands in this position reveals frustration even while smiling. This is a signal that the person may be holding back a negative attitude. If possible, it may be helpful for negotiation partners to spend time together in a comfortable setting outside of the negotiation room. Knowing how each partner communicates non-verbally outside of the negotiation setting helps negotiation partners sense incongruity between verbal and non-verbal communication. Conveying receptivity The way negotiation partners position their bodies relative to each other may influence how receptive each is to the other person's message and ideas. Face and eyes: Receptive negotiators smile and make plenty of eye contact. This conveys the idea that there is more interest in the person than in what is being said. Non-receptive negotiators, on the other hand, make little to no eye contact; their eyes may be squinted, their jaw muscles clenched, and their head turned slightly away from the speaker. Arms and hands: To show receptivity, negotiators should spread their arms and open their hands on the table or relax them on their lap. Negotiators show poor receptivity when their hands are clenched, crossed, positioned in front of their mouth, or rubbing the back of their neck. Legs and feet: Receptive negotiators sit with legs together or one leg slightly in front of the other. When standing, they distribute weight evenly and place hands on their hips with their body tilted toward the speaker. Non-receptive negotiators stand with legs crossed, pointing away from the speaker.
Torso: Receptive negotiators sit on the edge of their chair, unbutton their suit coat, and tilt their body toward the speaker. Non-receptive negotiators may lean back in their chair and keep their suit coat buttoned. Receptive negotiators tend to appear relaxed, with their hands open and palms visibly displayed. Barriers
Die-hard bargainers
Lack of trust
Informational vacuums and the negotiator's dilemma
Structural impediments
Spoilers
Cultural and gender differences
Communication problems
The power of dialogue
Emotion Emotions play an important part in the negotiation process, although it is only in recent years that their effect has been studied. Emotions have the potential to play either a positive or a negative role in negotiation. During negotiations, the decision as to whether or not to settle rests in part on emotional factors. Negative emotions can cause intense and even irrational behavior, and can cause conflicts to escalate and negotiations to break down, but they may be instrumental in attaining concessions. On the other hand, positive emotions often facilitate reaching an agreement and help to maximize joint gains, but they can also be instrumental in attaining concessions. Positive and negative discrete emotions can be strategically displayed to influence task and relational outcomes, and may play out differently across cultural boundaries. Affect effect Dispositional affects influence various stages of negotiation: which strategies are planned, which strategies are actually chosen, the way the other party and their intentions are perceived, the willingness to reach an agreement, and the final negotiated outcomes. Positive affectivity (PA) and negative affectivity (NA) of one or more of the negotiating sides can lead to very different outcomes. Positive affect Even before the negotiation process starts, people in a positive mood have more confidence and a higher tendency to plan to use a cooperative strategy.
During the negotiation, negotiators who are in a positive mood tend to enjoy the interaction more, show less contentious behavior, use less aggressive tactics, and use more cooperative strategies. This in turn increases the likelihood that parties will reach their instrumental goals and enhances the ability to find integrative gains. Indeed, compared with negotiators with negative or neutral affectivity, negotiators with positive affectivity reached more agreements and tended to honor those agreements more. Those favorable outcomes are due to better decision-making processes, such as flexible thinking, creative problem solving, respect for others' perspectives, willingness to take risks, and higher confidence. Post-negotiation positive affect has beneficial consequences as well. It increases satisfaction with the achieved outcome and influences one's desire for future interactions. The PA aroused by reaching an agreement facilitates the dyadic relationship, which brings commitment that sets the stage for subsequent interactions. PA also has its drawbacks: it distorts perception of one's own performance, such that performance is judged to be relatively better than it actually is. Thus, studies involving self-reports of achieved outcomes might be biased. Negative affect Negative affect has detrimental effects on various stages in the negotiation process. Although various negative emotions affect negotiation outcomes, by far the most researched is anger. Angry negotiators plan to use more competitive strategies and to cooperate less, even before the negotiation starts. These competitive strategies are related to reduced joint outcomes. During negotiations, anger disrupts the process by reducing the level of trust, clouding parties' judgment, narrowing parties' focus of attention, and changing their central goal from reaching agreement to retaliating against the other side.
Angry negotiators pay less attention to the opponent's interests and are less accurate in judging those interests, and thus achieve lower joint gains. Moreover, because anger makes negotiators more self-centered in their preferences, it increases the likelihood that they will reject profitable offers. Opponents who get really angry (or cry, or otherwise lose control) are more likely to make errors: make sure they are in your favor. Anger does not help achieve negotiation goals either: it reduces joint gains and does not boost personal gains, as angry negotiators do not succeed in claiming more for themselves. Moreover, negative emotions lead to the acceptance of settlements that yield negative rather than positive utility. However, the expression of negative emotions during negotiation can sometimes be beneficial: legitimately expressed anger can be an effective way to show one's commitment, sincerity, and needs. Moreover, although NA reduces gains in integrative tasks, it is a better strategy than PA in distributive tasks (such as zero-sum ones). In his work on negative affect arousal and white noise, Seidner found support for the existence of a negative affect arousal mechanism through observations regarding the devaluation of speakers from other ethnic origins. Negotiation may be negatively affected, in turn, by submerged hostility toward an ethnic or gender group. Conditions for emotion affect Research indicates that a negotiator's emotions do not necessarily affect the negotiation process. Albarracín et al. (2003) suggested that there are two conditions for emotional affect, related to ability (the presence of environmental or cognitive disturbances) and motivation: Identification of the affect: requires high motivation, high ability, or both. Determination that the affect is relevant and important for the judgment: requires that either the motivation, the ability, or both are low.
According to this model, emotions affect negotiations only when one is high and the other is low. When both ability and motivation are low,
Fertility signs Most menstrual cycles have several days at the beginning that are infertile (pre-ovulatory infertility), a period of fertility, and then several days just before the next menstruation that are infertile (post-ovulatory infertility). The first day of red bleeding is considered day one of the menstrual cycle. Different systems of fertility awareness calculate the fertile period in slightly different ways, using primary fertility signs, cycle history, or both. Primary fertility signs The three primary signs of fertility are basal body temperature (BBT), cervical mucus, and cervical position. A woman practicing symptoms-based fertility awareness may choose to observe one sign, two signs, or all three. Many women experience secondary fertility signs that correlate with certain phases of the menstrual cycle, such as abdominal pain and heaviness, back pain, breast tenderness, and mittelschmerz (ovulation pains). Basal body temperature This usually refers to a temperature reading collected when a person first wakes up in the morning (or after their longest sleep period of the day). The true BBT can only be obtained by continuous temperature monitoring through internally worn temperature sensors. In women, ovulation triggers a rise in BBT of between 0.2 and 0.5 °C (0.5 and 1 °F) that lasts approximately until the next menstruation. This temperature shift may be used to determine the onset of post-ovulatory infertility. (See ref.
30) Cervical mucus The appearance of cervical mucus and vulvar sensation are generally described together as two ways of observing the same sign. Cervical mucus is produced by the cervix, which connects the uterus to the vaginal canal. Fertile cervical mucus promotes sperm life by decreasing the acidity of the vagina, and it also helps guide sperm through the cervix and into the uterus. The production of fertile cervical mucus is caused by estrogen, the same hormone that prepares a woman's body for ovulation. By observing her cervical mucus and paying attention to the sensation as it passes the vulva, a woman can detect when her body is gearing up for ovulation, and also when ovulation has passed. When ovulation occurs, estrogen production drops slightly and progesterone starts to rise. The rise in progesterone causes a distinct change in the quantity and quality of mucus observed at the vulva. Cervical position The cervix changes position in response to the same hormones that cause cervical mucus to be produced and to dry up. When a woman is in an infertile phase of her cycle, the cervix will be low in the vaginal canal; it will feel firm to the touch (like the tip of a person's nose); and the os—the opening in the cervix—will be relatively small, or "closed". As a woman becomes more fertile, the cervix will rise higher in the vaginal canal, it will become softer to the touch (more like a person's lips), and the os will become more open. After ovulation has occurred, the cervix will revert to its infertile position. Cycle history Calendar-based systems determine both pre-ovulatory and post-ovulatory infertility based on cycle history. When used to avoid pregnancy, these systems have higher perfect-use failure rates than symptoms-based systems, but are still comparable with barrier methods such as diaphragms and cervical caps.
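The post-ovulatory temperature shift described earlier lends itself to a simple detection sketch. The "three over six" heuristic used below (three consecutive raised readings compared against the previous six) is a common symptothermal convention; the exact thresholds, window sizes, and sample chart here are illustrative assumptions, not a rule prescribed by this article:

```python
def detect_temperature_shift(temps_c, shift=0.2, window=6, sustained=3):
    """Return the index where a sustained BBT rise begins, or None.

    Looks for `sustained` consecutive readings that all exceed the
    highest of the preceding `window` readings by at least `shift` deg C.
    """
    for i in range(window, len(temps_c) - sustained + 1):
        baseline = max(temps_c[i - window:i])
        if all(t >= baseline + shift for t in temps_c[i:i + sustained]):
            return i
    return None

# Illustrative cycle: pre-ovulatory readings near 36.4 C, then a rise.
chart = [36.4, 36.5, 36.3, 36.4, 36.5, 36.4, 36.8, 36.9, 36.8, 36.9]
shift_at = detect_temperature_shift(chart)  # 6: the first raised reading
```

In a symptothermal system such a detected shift would be cross-checked against mucus observations before post-ovulatory infertility is declared.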
Mucus- and temperature-based methods used to determine post-ovulatory infertility, when used to avoid conception, result in very low perfect-use pregnancy rates. However, mucus and temperature systems have certain limitations in determining pre-ovulatory infertility. A temperature record alone provides no guide to fertility or infertility before ovulation occurs. Determination of pre-ovulatory infertility may be done by observing the absence of fertile cervical mucus; however, this results in a higher failure rate than that seen in the period of post-ovulatory infertility. Relying only on mucus observation also means that unprotected sexual intercourse is not allowed during menstruation, since any mucus would be obscured. The use of certain calendar rules to determine the length of the pre-ovulatory infertile phase allows unprotected intercourse during the first few days of the menstrual cycle while maintaining a very low risk of pregnancy. With mucus-only methods, there is a possibility of incorrectly identifying mid-cycle or anovulatory bleeding as menstruation. Keeping a BBT chart enables accurate identification of menstruation, when pre-ovulatory calendar rules may be reliably applied. In temperature-only systems, a calendar rule may be relied on alone to determine pre-ovulatory infertility. In symptothermal systems, the calendar rule is cross-checked by mucus records: observation of fertile cervical mucus overrides any calendar-determined infertility. Calendar rules may set a standard number of days, specifying that (depending on a woman's past cycle lengths) the first three to six days of each menstrual cycle are considered infertile. Or, a calendar rule may require calculation, for example holding that the length of the pre-ovulatory infertile phase is equal to the length of a woman's shortest cycle minus 21 days. Rather than being tied to cycle length, a calendar
Method (FAM) refers specifically to the system taught by Toni Weschler. The term natural family planning (NFP) is sometimes used to refer to any use of FA methods, the lactational amenorrhea method, and periodic abstinence during fertile times. A method of FA may be used by NFP users to identify these fertile times. Women who are breastfeeding a child and wish to avoid pregnancy may be able to practice the lactational amenorrhea method (LAM). LAM is distinct from fertility awareness, but because it also does not involve contraceptives, it is often presented alongside FA as a method of "natural" birth control. Within the Catholic Church and some Protestant denominations, the term natural family planning is often used to refer to fertility awareness, pointing out that it is the only method of family planning approved by the Church. History Development of calendar-based methods It is not known exactly when it was first discovered that women have predictable periods of fertility and infertility. The Talmud tractate Niddah already clearly states that a woman only becomes pregnant in specific periods of the month, which seemingly refers to ovulation. St. Augustine wrote about periodic abstinence to avoid pregnancy in the year 388 (the Manichaeans attempted to use this method to remain childfree, and Augustine condemned their use of periodic abstinence). One book states that periodic abstinence was recommended "by a few secular thinkers since the mid-nineteenth century," but the dominant force in the twentieth-century popularization of fertility awareness-based methods was the Roman Catholic Church. In 1905, Theodoor Hendrik van de Velde, a Dutch gynecologist, showed that women only ovulate once per menstrual cycle. In the 1920s, Kyusaku Ogino, a Japanese gynecologist, and Hermann Knaus, from Austria, independently discovered that ovulation occurs about fourteen days before the next menstrual period.
Ogino used his discovery to develop a formula for use in aiding infertile women to time intercourse to achieve pregnancy. In 1930, John Smulders, a Roman Catholic physician from the Netherlands, used this discovery to create a method for avoiding pregnancy. Smulders published his work with the Dutch Roman Catholic medical association, and this was the first formalized system for periodic abstinence: the rhythm method. Introduction of temperature and cervical mucus signs In the 1930s, Reverend Wilhelm Hillebrand, a Catholic priest in Germany, developed a system for avoiding pregnancy based on basal body temperature. This temperature method was found to be more effective at helping women avoid pregnancy than calendar-based methods were. Over the next few decades, both systems became widely used among Catholic women. Two speeches delivered by Pope Pius XII in 1951 gave the Catholic Church's highest form of recognition to the approval of these systems for couples who needed to avoid pregnancy. In the early 1950s, John Billings discovered the relationship between cervical mucus and fertility while working for the Melbourne Catholic Family Welfare Bureau. Billings and several other physicians, including his wife, Dr. Evelyn Billings, studied this sign for a number of years, and by the late 1960s had performed clinical trials and begun to set up teaching centers around the world. First symptoms-based teaching organizations While Dr. Billings initially taught both the temperature and mucus signs, he and his colleagues encountered problems in teaching the temperature sign to largely illiterate populations in developing countries. In the 1970s they modified the method to rely on mucus only. The international organization founded by Dr. Billings is now known as the World Organization Ovulation Method Billings (WOOMB). The first organization to teach a symptothermal method was founded in 1971. John and Sheila Kippley, lay Catholics, joined with Dr.
Konald Prem in teaching an observational method that relied on all three signs: temperature, mucus, and cervical position. Their organization is now called Couple to Couple League International. The next decade saw the founding of other now-large Catholic organizations, Family of the Americas (1977), teaching the Billings method, and the Pope Paul VI Institute (1985), teaching a new mucus-only system called the Creighton Model. Until the 1980s, information about fertility awareness was only available from Catholic sources. The first secular teaching organization was the Fertility Awareness Center in New York, founded in 1981. Toni Weschler started teaching in 1982 and published the bestselling book Taking Charge of Your Fertility in 1995. Justisse was founded in 1987 in Edmonton, Canada. These secular organizations all teach symptothermal methods. Although the Catholic organizations are significantly larger than the secular fertility awareness movement, independent secular teachers have become increasingly common since the 1990s. Ongoing development Development of fertility awareness methods is ongoing. In the late 1990s, the Institute for Reproductive Health at Georgetown University introduced two new methods. The Two-Day Method, a mucus-only system, and CycleBeads and iCycleBeads (the digital version), based on the Standard Days Method, are designed to be both effective and simple to teach, learn, and use. In 2019, Urrutia et al. released a study, along with an interactive graph, reviewing all studied fertility awareness-based methods. Femtech companies such as Dot and Natural Cycles have also produced new studies and apps to help women avoid pregnancy. Natural Cycles is the first app of its kind to receive FDA approval.
Fertility signs Most menstrual cycles have several days at the beginning that are infertile (pre-ovulatory infertility), a period of fertility, and then several days just before the next menstruation that are infertile (post-ovulatory infertility). The first day of red bleeding is considered day one of the menstrual cycle. Different systems of fertility awareness calculate the fertile period in slightly different ways, using primary fertility signs, cycle history, or both. Primary fertility signs The three primary signs of fertility are basal body temperature (BBT), cervical mucus, and cervical position. A woman practicing symptoms-based fertility awareness may choose to observe one sign, two signs, or all three. Many women experience secondary fertility signs that correlate with certain phases of the menstrual cycle, such as abdominal pain and heaviness, back pain, breast tenderness, and mittelschmerz (ovulation pains). Basal body temperature This usually refers to a temperature reading collected when a person first wakes up in the morning (or after their longest sleep period of the day). The true BBT can only be obtained by continuous temperature monitoring through internally worn temperature sensors. In women, ovulation will trigger a rise in BBT of 0.2 to 0.5 °C (0.5 to 1 °F) that lasts approximately until the next menstruation. This temperature shift may be used to determine the onset of post-ovulatory infertility. Cervical mucus The appearance of cervical mucus and vulvar sensation are generally described together as two ways of observing the same sign. Cervical mucus is produced by the cervix, which connects the uterus to the vaginal canal. Fertile cervical mucus promotes sperm life by decreasing the acidity of the vagina, and also helps guide sperm through the cervix and into the uterus. The production of fertile cervical mucus is caused by estrogen, the same hormone that prepares a woman's body for ovulation.
By observing her cervical mucus and paying attention to the sensation as it passes the vulva, a woman can detect when her body is gearing up for ovulation, and also when ovulation has passed. When ovulation occurs, estrogen production drops slightly and progesterone starts to rise. The rise in progesterone causes a distinct change in the quantity and quality of mucus observed at the vulva. Cervical position The cervix changes position in response to the same hormones that cause cervical mucus to be produced and to dry up. When a woman is in an infertile phase of her cycle, the cervix will be low in the vaginal canal; it will feel firm to the touch (like the tip of a person's nose); and the os—the opening in the cervix—will be relatively small, or "closed". As a woman becomes more fertile, the cervix will rise higher in the vaginal canal, it will become softer to the touch (more like a person's lips), and the os will become more open. After ovulation has occurred, the cervix will revert to its infertile position. Cycle history Calendar-based systems determine both pre-ovulatory and post-ovulatory infertility based on cycle history. When used to avoid pregnancy, these systems have higher perfect-use failure rates than symptoms-based systems but are still comparable with barrier methods, such as diaphragms and cervical caps. Mucus- and temperature-based methods used to determine post-ovulatory infertility, when used to avoid conception, result in very low perfect-use pregnancy rates. However, mucus and temperature systems have certain limitations in determining pre-ovulatory infertility. A temperature record alone provides no guide to fertility or infertility before ovulation occurs. Determination of pre-ovulatory infertility may be done by observing the absence of fertile cervical mucus; however, this results in a higher failure rate than that seen in the period of post-ovulatory infertility. 
Relying only on mucus observation also means that unprotected sexual intercourse is not allowed during menstruation, since any mucus would be obscured. The use of certain calendar rules to determine the length of the pre-ovulatory infertile phase allows unprotected intercourse during the first few days of the menstrual cycle while maintaining a very low risk of pregnancy. With mucus-only methods, there is a possibility of incorrectly identifying mid-cycle or anovulatory bleeding as menstruation. Keeping a BBT chart enables accurate identification of menstruation, when pre-ovulatory
Lake Nicaragua. A free trade zone with commercial facilities as well as tourist hotels and an international airport at Rivas were planned to be built when canal construction was advanced. Appropriate road improvements were planned. The Pan-American Highway would have crossed the canal via a bridge. Nicaragua Route 25 (Acoyapa-San Carlos) on the eastern side of Lake Nicaragua would have received a ferry service. Both ports would have received public road connections. HKND planned to construct private gravel maintenance roads on both sides of the canal. The estimate for the workforce in 2020 was 3,700 people, rising to 12,700 in 2050 as traffic increased. Transit time would have been about 30 hours. It was projected that by 2020, 3,576 ships would transit the canal annually. The transit rate was expected to increase to 4,138 by 2030, and to 5,097 by 2050. For comparison, the Panama Canal handled 12,855 transits in 2009. Construction No significant construction took place. No "major works" such as dredging were planned to take place until after a Pacific Ocean wharf was built. The apparent lack of experience of Wang and his HKND Group in large-scale engineering was cited as a risk. On December 22, 2014, Wang announced that construction had started in Rivas, Nicaragua. Wang spoke during the starting ceremony of the first works of the Interoceanic Grand Canal in the town of Brito. Construction of the new waterway would have been carried out by HKND Group (the Hong Kong–based HK Nicaragua Canal Development Investment Co Ltd.), which is controlled by Wang. According to HKND's announced plans in 2015, the project entailed the development and construction of the canal and its supporting infrastructure. There would have been four main phases. The preconstruction phase included getting permits, acquiring land and machinery, and finalizing designs and plans.
The early construction phase, started in December 2014, lasted through September 2015; it secured access to construction sites, but did not provide critical infrastructure or mobilize the workforce. During the construction phase from September 2015 to March 2020, the canal would have been dug and the locks built along with accompanying infrastructure. The commissioning phase projected from April 2020 to June 2020 included lock testing and lock and tug boat operator training. HKND described the project as the largest civil earth-moving operation in history. Most of this would have consisted of dry excavation to form the canal, with an estimated 4,019 MCM of rock and soil. There would have been 739 MCM of freshwater dredging (Lake Nicaragua) and 241 MCM of marine dredging. Marine dredging of the oceanic access canal would be required on the Pacific side for and on the Caribbean Sea for . Disposal of excavation material would have been done along the canal in designated disposal areas typically within of the canal. Two concrete plants and a steel plant were planned to support the project. While cement would have likely been imported, construction aggregate would have come from local quarries near the two locks. HKND estimated that about 50,000 people would be employed during the five-year construction, about half of them from Nicaragua, 25% from China, and the remainder from various other countries. 1,400 workers would be in office or administrative positions and the rest in the field. The management offices would be rented or purchased near Rivas. Workers would live in one of nine camps, which besides food and shelter would also provide health care and security. These were to be "closed" camps; that is, workers could not leave the camp unless part of an organized activity. The work schedule called for 12-hour shifts, 7 days a week.
Domestic workers would work two weeks on, one week off, while foreign workers would work 6 weeks on, 2 weeks off (management) or 22 weeks on, 4 weeks off (blue-collar workers). On 2 September 2015, Pang Wai Kwok (executive VP of HKND Group) was interviewed by Nicaraguan journalist Carlos Solis and said up to 3,000 people might be employed on the canal project within the year. However, the labor force would depend on the winner of the contract bid, and Kwok said anyone in the world was eligible to work on the canal. Financing Project costs were estimated in the region of $40 billion to $50 billion. Besides private money provided by Wang at the start-up, a further influx of financial support was expected from investors. An IPO was reported to be in preparation by the end of 2014. XCMG, a state-owned Chinese construction company, would have provided machinery and taken 1.5% to 3% of HKND shares in return. By the end of 2014, no major investors had been named. There had been speculation that the Chinese government would provide financial backing for the project, but China, as well as Wang, denied this. Wang lost nearly 85% of his wealth during the 2015 Chinese stock market crash, according to the Bloomberg Billionaires Index. In addition, Wang has had a string of setbacks for projects around the world since 2014. The canal project's economic development potential is roughly comparable with Panama's; however, the World Bank describes Nicaragua as the second-poorest country in Latin America and the Caribbean. The World Bank has compiled a list of the impoverished nation's projects on record; the majority are geared towards infrastructure and agricultural needs, but none explicitly supports the canal effort. Wang admitted that the project had financial, political, and engineering risks. The project, whose cost has been independently estimated at about $100 billion, was never fully funded.
The project was expected in 2014 to be completed in 2020, but Stratfor, an analyst agency, stated at the time that this was an "unrealistic goal." While the Nicaraguan National Bank reserves served as collateral on the part of Nicaragua, HKND had no such potential liability. Following financial difficulties, HKND finally closed its headquarters offices in China in April 2018, leaving no forwarding address or telephone numbers to be reached. Impact Environmental Some of the natural habitat of at least 22 endangered species would be destroyed in the construction. Another major environmental concern is the project's impact on Lake Nicaragua, the largest source of freshwater in Nicaragua. An oil spill would have serious and lasting consequences. Other problems include the possibility of dredging bringing up toxic sediments, the disruption of migration patterns of animal species, and the potential to introduce invasive species to the lake. Environmental studies had not been released by HKND when the project officially started in December 2014. The Nicaraguan Academy of Sciences noted that hundreds of thousands of hectares of pristine forests and wetlands would be destroyed and pointed out that the environmental study performed for the canal was not independent. President Daniel Ortega stated that he was "not concerned about harming the lake because it is already contaminated." Protesters fear that the canal would bring massive environmental destruction to Lake Nicaragua and the Atlantic Autonomous Regions. 400,000 hectares of tropical rain forest and wetlands would be destroyed. It would also encroach upon the habitats of animals such as Baird's tapir, the spider monkey, and the jaguar. Safety Richard Condit, from the Smithsonian Tropical Research Institute, believes that the project could be used as leverage for forest protection in a country that currently lacks "institutional capacity" to meet conservation needs. A Canadian pilot was the first fatality during the canal project.
The pilot was flying alone on the western side of Lake Nicaragua during an aerial survey. Sustainability The survey site was on the same side as NicarAgua–Dulce, the only ecotourism group in Nicaragua affiliated with The International Ecotourism Society, which is located north of the proposed canal site. In line with ecotourism, Nicaragua's Ministry of Environment and Natural Resources has promoted formal workshops at each level of education (primary, secondary, and post-secondary); however, there is no curriculum relevant to the pending canal project. The American-led Foundation for Sustainable Development is another partner that provides training initiatives to Nicaraguans who cannot access formal education. One of FSD's support sites is located at Tola, which is close to the proposed Brito–Pacific canal opening. Economic As the original Panama Canal still has capacity for Panamax-sized shipping and Panama has completed its Panama Canal expansion project, adding more capacity and allowing transit for even larger New Panamax size ships, projections for the Nicaragua Canal's traffic may be optimistic. While the proposed Nicaragua Canal would be wide enough to accommodate the Triple E class of mega container ships, which are too wide for the expanded Panama Canal, few ports are able to accommodate these ships at present. Further, a coast-to-coast railway line may be built by China in Honduras and could affect use of the Nicaragua Canal. Also, North American overland shipping through Pacific ports in Mexico and the United States will compete for the traffic between Asia and the U.S. east coast. Thus, competition may undermine the Nicaragua Canal's economic viability if it were ever built. The canal would affect neighboring economies, like Honduras and El Salvador, as they are part of the commercial treaty known as the Northern
Bloomberg reported in 2015 that "conspiracy theories abound", including that the project was a land grab by Ortega, an attempt by Ortega to "whip up" support in elections, and part of a Chinese plan to gain influence in the region. By November 2016, the president of the canal commission, Manuel Coronel Kautz, said "According to our schedule, we should initiate major works by the end of the year." However, Carlos Fernando Chamorro, editor of the Confidencial newspaper, said "If the People's Republic of China does not step forward, it won't happen. Wang Jing does not have the reputation to push this through. If it is just him, then the chances of this happening are zero. If the PRC steps in, then it is a big possibility." Despite HKND abandoning its attempt to construct the canal, the Nicaraguan government indicated that it would go ahead with the vast land expropriations under Canal Law 840, enacted in 2013, which includes a concession for carrying out seven sub-projects, among them ports, oil pipelines, free-trade zones, and developing tourist areas that could be realized in any part of the national territory. In particular, this law denies any right to appeal against the expropriation decision and provides a derisory level of compensation. It also allows the investor (HKND) to buy and sell its rights over the various sub-projects "in parts", which is a highly profitable enterprise. This has been called a "land grab" and it has prompted protests and some violent confrontations with security forces. Activists noted that the canal contract established that it must be dissolved after 72 months if the investor has not obtained the money to start the project; that deadline was 14 June 2019, so they assert that Law 840 must be repealed.
Opposition Protests against the canal's construction occurred shortly after the official ceremony marking its beginning. Farmers feared it could cause their eviction and land expropriation. Opposition leader Eliseo Nuñez has called the deal "part of one of the biggest international scams in the world". Legal challenges that the deal violates constitutional rights were rejected by the Supreme Court of Nicaragua, and a retrospective rewriting of the Constitution of Nicaragua placed HKND beyond legal challenge. HKND has been granted the right to expropriate land within on each side of the canal and pay only cadastral value, not market value, for property. Wang, however, promised to pay fair market value. The estimates of the number of people who would be displaced range from 29,000 to more than 100,000. There are indications of local opposition to intended expropriations. For instance, according to an activist leader, unrest in Rivas in December 2014, in opposition to the canal, left two protesters dead, although no evidence was ever produced to support the claim. The CIDH, Nicaragua's Human Rights Commission, has strongly criticized the government for not looking into the project's effect on citizens, amid claims that citizens were not involved in decision-making. The British firm ERM, which carried out the Environmental Impact Assessment, claims it held consultations with around 6,000 people in the communities along the planned route, and estimated that the property of about 30,000 people would be affected. National opinion polls show that support for the project is about 70%. Reported end to the canal project Investor Wang had financial setbacks unrelated to the Nicaragua project, losing 80% of his net worth during the 2015–16 Chinese stock market turbulence.
In March 2017, the Havana Times reported that the public relations agency handling Wang's interests in Nicaragua had been let go, in the absence of any developments on the project to report, and that Wang had not been in the country in more than two years. In May 2017, the PanAm Post indicated that "no concrete action has been taken to begin the project" and suggested that the project was either "paralyzed, or nonexistent." In September 2017, Agence France-Presse reported that the work had been "pushed back indefinitely," although the government renewed the project's environmental permit in April 2017. In February 2018, Manuel Coronel Kautz, head of the Interoceanic Grand Canal Authority of Nicaragua, told Agence France-Presse that work on the canal was still ongoing, but by that point analysts and activists widely viewed the canal project as defunct, with China having shifted its investment focus to Panama, the main competitor to a Nicaraguan canal. Absent a 60% vote to revoke the legislation, HKND maintains the legal concessions established by the 2013 law, including other infrastructure projects in Nicaragua such as ports, roads, a railway, and an airport. Description The construction company released a project description, dated December 2014, for public review. The canal as planned would have been and would have three sections. The West Canal runs from Brito on the Pacific Ocean up the Rio Brito valley, crosses the continental divide, and after passing through the Rio Las Lajas valley enters Lake Nicaragua; its length would be . The Nicaragua Lake section measures and runs from south of San Jorge to south of San Miguelito. The Eastern Canal would be the longest section at and would be built along the Rio Tule valley through the Caribbean highland to the Rio Punta Gorda valley to meet the Caribbean Sea.
A channel would have to be dug in the lake bottom, as it is not deep enough for large vessels to transit the canal. Both the West Canal and the East Canal would each have one lock with 3 consecutive chambers to raise ships to the level of Lake Nicaragua, which has an average water elevation of , with a range between . The western Brito Lock would be inland from the Pacific, and the eastern Camilo Lock would be inland from the Caribbean Sea. The dimensions of each of the locks' chambers are long, wide, and threshold depth. As locks generally define the limit on the size of ships that can be handled, the Nicaragua Canal would have allowed passage for larger ships than those that pass through the Panama Canal. For comparison, the new third set of locks in the Panama expansion will only be long, wide, and deep. No water from Lake Nicaragua was planned to be used to fill the locks; water would have come from local rivers and been recycled using water-saving basins. The Camilo lock would have been built adjacent to a new dam on the upper Punta Gorda River, creating a reservoir. This Atlanta Reservoir (or Lake Atlanta) would have a surface area of . West of the Atlanta reservoir, the Rio Agua Zarca would have been dammed to create a second reservoir. This reservoir would have had a surface area of and hold . A hydropower facility would have been built at the dam and would have generated over 10 megawatts of power to be used for Camilo Lock operations. Both locks would also be connected to the country's power grid and have back-up generator facilities. It was estimated that each lock would have used about 9 megawatts of power. At each oceanic
Nu metal (sometimes stylized as nü-metal) is a subgenre of alternative metal that combines elements of heavy metal music with elements of other music genres such as hip hop, alternative rock, funk, industrial, and grunge. Nu metal bands have drawn elements and influences from a variety of musical styles, including multiple genres of heavy metal. Nu metal rarely features guitar solos or other displays of technical competence; the genre is heavily syncopated and based on guitar riffs. Many nu metal guitarists use seven-string guitars that are down-tuned to produce a heavier sound. DJs are occasionally featured in nu metal to provide instrumentation such as sampling, turntable scratching and electronic backgrounds. Vocal styles in nu metal include singing, rapping, screaming and growling. Nu metal is one of the key genres of the new wave of American heavy metal. Nu metal became popular in the late 1990s, with bands and artists such as Korn, Limp Bizkit, and Kid Rock all releasing albums that sold millions of copies. Nu metal's popularity continued during the early 2000s, with bands such as Papa Roach, Staind, and P.O.D. all selling multi-platinum albums, and came to a peak with Linkin Park's diamond-selling album Hybrid Theory. However, by the mid-2000s, the oversaturation of bands combined with the underperformance of several high-profile releases led to nu metal's decline and the rise of metalcore, with many nu metal bands disbanding or abandoning their established sound in favor of other genres. During the 2010s, there was a nu metal revival; many bands that combine nu metal with other genres (for example, metalcore and deathcore) emerged, and some nu metal bands from the 1990s and early 2000s returned to the nu metal sound. Bands like Of Mice & Men, Emmure, Issues and My Ticket Home would combine nu metal with metalcore or deathcore. Artists like Grimes, Poppy and Rina Sawayama would integrate nu metal sounds into electronic pop music in the late 2010s and early 2020s.
Nu metal has received criticism from many fans of heavy metal, and is often labeled with pejoratives like "mallcore". Many heavy metal fans do not consider nu metal to be a true subgenre of heavy metal. Some musicians referred to as nu metal have rejected the label; others did not even view their music as a form of heavy metal. Characteristics and fashion Terminology and origins Nu metal is also known as nü-metal and aggro-metal. It is a subgenre of alternative metal. MTV states that the early nu metal group Korn "arrived in 1993 into the burgeoning alternative metal scene, which would morph into nü-metal the way college rock became alternative rock." Stereogum has similarly claimed that nu metal was a "weird outgrowth of the Lollapalooza-era alt-metal scene". Nu metal merges elements of heavy metal music with elements of other music genres such as hip hop and alternative rock. Nu metal bands have been influenced by and have used elements of a variety of musical genres, including electronic music, funk, gothic rock, hardcore punk, punk rock, dance music, new wave, jazz, post-punk, symphonic rock and synth-pop. Nu metal bands also are influenced by and use elements of genres of heavy metal music such as death metal, rap metal, groove metal, funk metal, and thrash metal. Some nu metal bands, such as Static-X and Dope, made nu metal music with elements of industrial metal. In contrast with other heavy metal subgenres, nu metal tends to use the same structure of verses, choruses and bridges as those in pop music. Musical characteristics Instrumentation Nu metal is heavily syncopated and is based mostly on guitar riffs. Mid-song bridges and a general lack of guitar solos contrast it with other genres of heavy metal. Kory Grow of Revolver wrote that nu metal, "[i]n its efforts to tune down and simplify riffs, effectively drove a stake through the heart of the guitar solo".
Another contrast with other heavy metal genres is nu metal's emphasis on rhythm, rather than on complexity or mood; its rhythm often sounds like that of groove metal. The wah pedal is occasionally featured in nu metal music. Nu metal guitar riffs occasionally are similar to those of death metal. Nu metal bassists and drummers are often influenced by funk and hip hop, respectively, adding to nu metal's rhythmic nature. Blast beats, which are common in heavy metal subgenres such as black metal and death metal, are extremely rare in nu metal. Nu metal's similarities with many heavy metal subgenres include its use of common time, distorted guitars, power chords and note structures primarily revolving around Dorian, Aeolian or Phrygian modes. While loud and heavily distorted electric guitars are a core feature of all metal genres, nu metal guitarists took the sounds of "violence and destruction" to new levels with their overdriven guitar tone, which music journalists Kitts and Tolinski compared to the "...sound [of] a Mack truck being crushed by a collapsing skyscraper." Some nu metal bands use seven-string guitars that are generally down-tuned, rather than traditional six-string guitars. Likewise, some bass guitarists use five-string and six-string instruments. Bass playing in nu metal often features an emphasis on funk elements. In nu metal music, DJs are sometimes featured to provide instrumentation such as sampling, turntable scratching and electronic backgrounds. Drumming in nu metal tends to have hip hop grooves and rhythms. Vocals Vocal styles used in nu metal music include singing, rapping, screaming and growling. Vocals in nu metal are often rhythmic and influenced by hip hop. While some nu metal bands, such as Limp Bizkit and Linkin Park, have rapping in their music, other nu metal bands, such as Godsmack and Staind, do not.
Nu metal bands occasionally feature hip hop musicians as guests in their songs; Korn's song "Children of the Korn" features the rapper Ice Cube, who performed on the band's 1998 Family Values Tour. The hip hop musician Nas was featured on Korn's song "Play Me", which is on the band's album Take a Look in the Mirror. Limp Bizkit has recorded with multiple hip hop musicians including Method Man, Lil Wayne, Xzibit, Redman, DMX and Snoop Dogg. Linkin Park collaborated with hip hop musician Jay-Z on their 2004 extended play Collision Course. Kid Rock has recorded with hip hop musicians Eminem and Snoop Dogg. Trevor Baker of The Guardian wrote, "Bands such as Linkin Park, Korn and even the much reviled Limp Bizkit ... did far more to break down the artificial barriers between 'urban music' and rock than any of their more critically acceptable counterparts." Lyrics Lyrics in nu metal songs are often angry or nihilistic; many of the genre's lyrics focus on topics such as pain, angst, bullying, emotional issues, abandonment, betrayal, and personal alienation, in a way similar to those of grunge. Many nu metal lyrics on these topics tend to be delivered in a very direct tone. However, some songs have lyrics that are about other topics. P.O.D. have used positive lyrics about promise and hope. The nu metal song "Bodies" by Drowning Pool is about moshing. Writing about Limp Bizkit's lyrics, The Michigan Daily said the band "used the nu-metal sound as a way to spin testosterone fueled fantasies into snarky white-boy rap. Oddly, audiences took frontman Fred Durst more seriously than he wanted, failing to see the intentional silliness in many of his songs". Limp Bizkit's lyrics also have been described as misogynistic. Dope's lyrics are usually about sex, drugs, parties, women, violence and relationships.
In contrast, according to Josh Chesler of the Phoenix New Times, the lyrics of Deftones, who were once considered a nu metal band, "tend to have complex allusions and leave the songs open to many different interpretations." Fashion Nu metal clothing typically consists of baggy pants, shirts, and shorts, JNCO jeans, Adidas tracksuits, sports jerseys, baseball caps, baggy hoodies, cargo pants, and sweatpants. Nu metal hairstyles and facial hairstyles include dreadlocks, braids, spiky hair, chin beards, bald heads, goatees, frosted tips, and bleached or dyed hair. Common accessories in nu metal fashion include wallet chains, tattoos, and piercings, especially facial piercings. Nu metal fashion has been compared to hip hop fashion. Some nu metal bands such as Motograter, Mushroomhead, Mudvayne, and Slipknot wear masks, jumpsuits, costumes, face paint, corpse paint or body paint. A few nu metal bands, such as Coal Chamber, Evanescence, and Kittie, are known for having gothic appearances. History 1980s–1993: Precursors and influences Many heavy metal, alternative metal, industrial, funk metal, alternative rock, rap metal, and industrial metal artists and bands of the 1980s and early 1990s have been credited with laying groundwork for the development of nu metal by combining heavy guitar riffs with pop music structures and drawing influences from subgenres of heavy metal and other music genres; Faith No More, Primus, Helmet, Boo-Yaa T.R.I.B.E., Tool, Fear Factory, 24-7 Spyz, Hot Dawgz, Fishbone, Biohazard, Suicidal Tendencies, Infectious Grooves, Godflesh, Red Hot Chili Peppers, Nine Inch Nails, White Zombie, Mr. Bungle, Prong, Rage Against the Machine, and Ministry all have been highlighted as examples of this. Groove metal and thrash metal bands of the same period such as Machine Head, Sepultura, Metallica, Pantera, Slayer, and Anthrax all have been cited as influential to nu metal as well. 
For example, Anthrax pioneered the rap metal genre by combining hip hop and rap with heavy metal on their 1987 EP I'm the Man, which laid groundwork for nu metal's development. Korn's lead vocalist Jonathan Davis said about Pantera guitarist Dimebag Darrell, "if there was no Dimebag Darrell, there would be no Korn". In the 1990s, bands described as "neo-metal" by the author Garry Sharpe-Young emerged; these bands include Pantera, Strapping Young Lad, Machine Head, Biohazard and Fear Factory. Sharpe-Young wrote that these bands "had chosen to strip metal down to its raw, primal element" and that "neo-metal paved the way for nu-metal". Nu metal is often influenced by hip hop. Rappers Dr. Dre and Ice Cube have been a major influence on nu metal pioneers Korn; guitarist Munky said the band were trying to emulate the samples of Dr. Dre's 1992 album The Chronic. Munky and fellow Korn guitarist Head also said they tried to emulate samples by the hip hop group Cypress Hill. Both the Geto Boys and N.W.A. also have been a major influence on Korn. Fred Durst of Limp Bizkit has cited the hip hop group The Fat Boys as a major influence on him. Shifty Shellshock of the nu metal band Crazy Town cited Run–D.M.C. and Beastie Boys as influences. Josey Scott of the nu metal band Saliva cited LL Cool J, Beastie Boys, Public Enemy, N.W.A., Chuck D, Doug E. Fresh, and Whodini as influences. Sonny Sandoval of the nu metal band P.O.D. cited hip hop groups Boogie Down Productions and Run–D.M.C. as influences. Linkin Park member Mike Shinoda's hip hop influences include Boogie Down Productions, Public Enemy, N.W.A., and the Juice Crew. Chester Bennington, another member of Linkin Park, cited A Tribe Called Quest, KRS-One, Run–D.M.C., Public Enemy, N.W.A., Beastie Boys, and Rob Base as influences. Run–D.M.C. was one of the first groups to combine rap with rock, paving the way for nu metal. 
1993–1998: Early development and rise Joel McIver acknowledged Korn as the band that created and pioneered the nu metal genre with its demo Neidermayer's Mind, which was released in 1993. McIver also acknowledged Korn as the band that started the new wave of American heavy metal, a heavy metal movement that began in the 1990s. The aggressive riffs of Korn, the rapping of Limp Bizkit, and the melodic ballads of Staind created the sonic template for nu metal. The origins of the term "nu metal" are often attributed to the work of producer Ross Robinson, who has been called "The Godfather of Nu Metal" among producers. Robinson has produced for nu metal bands such as Korn, Limp Bizkit and Slipknot. Many of the first nu metal bands, such as Korn and Deftones, came from California; however, the genre soon spread across the United States and many bands arose from various states, including Limp Bizkit from Florida, Staind from Massachusetts, and Slipknot from Iowa. In the book Brave Nu World, Tommy Udo wrote about the nu metal band Coal Chamber, "There's some evidence to suggest that Coal Chamber were the first band to whom the tag 'nu metal' was actually applied, in a live review in Spin magazine." In 1994, Korn released their self-titled debut album, which is widely considered the first nu metal album. Korn had experienced underground popularity at this time; their debut album peaked at number 72 on the Billboard 200. However, P.O.D.'s album Snuff the Punk, released earlier the same year, has also been retrospectively described as the first nu metal album. In 1995, the band Sugar Ray released its debut studio album Lemonade and Brownies, an album described as nu metal. The same year, Deftones released their debut album Adrenaline. The album peaked at number 23 on the Heatseekers Albums chart on October 5, 1996. 
Deftones also drew brief controversy in 1996, when TV news reports blamed their vocalist Chino Moreno for a riot that occurred at the 1996 U-Fest festival. Deftones' 1997 album Around the Fur peaked at number 29 on the Billboard 200 on November 15, 1997. Both Adrenaline and Around the Fur were certified gold by the Recording Industry Association of America (RIAA) in the summer of 1999. Adrenaline and Around the Fur were certified platinum by the RIAA in September 2008 and June 2011, respectively. Sepultura's 1996 album Roots features nu metal elements that were considered influential to the genre, while Roots itself was influenced by Korn's self-titled debut album. Few bands were playing nu metal until 1997, when bands such as Coal Chamber, Limp Bizkit, and Papa Roach all released their debut albums. Attention through MTV and Ozzy Osbourne's 1995 introduction of Ozzfest was integral to launching the careers of many nu metal bands, including Limp Bizkit in 1998. Nu metal began to rise in popularity when Korn's 1996 album Life Is Peachy peaked at number 3 on the Billboard 200 and sold 106,000 copies in its first week of release. In 1997, Sugar Ray released its second studio album Floored. The album achieved mainstream success very quickly and was certified 2x platinum by the RIAA on February 20, 1998. Although Floored is a nu metal album, the only song from the album that achieved chart success was "Fly", which is a reggae song rather than nu metal. Although Sugar Ray continued to be extremely popular, the band abandoned the nu metal genre and became a pop rock band with its 1999 studio album 14:59. 1998–2003: Mainstream popularity In 1998, nu metal became one of the most mainstream genres of music when Korn's third album Follow the Leader peaked at number 1 on the Billboard 200, was certified 5x platinum by the RIAA, and paved the way for other nu metal bands. 
At this point, many nu metal bands were signed to major record labels, and were playing combinations of heavy metal, hip hop, industrial, grunge and hardcore punk styles. Hip hop artists Vanilla Ice and Cypress Hill, along with heavy metal bands Sepultura, Primus, Fear Factory, Machine Head, and Slayer, released albums that drew from the nu metal genre. In 1999, Korn's fourth studio album Issues peaked at number 1 on the Billboard 200. The album was certified 3× platinum by the RIAA within a month and sold at least 573,000 copies in its first week of release. During the late 1990s and early 2000s, multiple nu metal bands such as Korn, Limp Bizkit and P.O.D. appeared repeatedly on MTV's Total Request Live.
In 1999, Staind's second album Dysfunction was released; the track "Mudshovel" peaked at number 10 on the Mainstream Rock chart. Dysfunction was certified platinum by the RIAA in 2000 and 2x platinum in 2004. In 2000, Limp Bizkit's third studio album Chocolate Starfish and the Hot Dog Flavored Water set a record for highest week-one sales of a rock album, selling over 1,000,000 copies in the United States in its first week of release, 400,000 of which sold on its first day, making it the fastest-selling rock album ever and breaking the world record held for seven years by Pearl Jam's Vs. Chocolate Starfish and the Hot Dog Flavored Water was certified 6x platinum by the RIAA. That same year, both Papa Roach's second studio album Infest and Disturbed's debut studio album The Sickness were released; both became multi-platinum hits. In 2000, P.O.D.'s album The Fundamental Elements of Southtown went platinum in the United States and was the 143rd best-selling album of 2000. The album's song "Rock the Party (Off the Hook)" went to number 1 on MTV's Total Request Live. At the turn of the millennium, many nu metal bands performed at Ozzfest, including Kittie, Disturbed, Mudvayne, Linkin Park, Slipknot, Papa Roach, Otep, Static-X, Methods of Mayhem, Taproot and Drowning Pool. Ozzfest was successful; Ozzfest 2000, for example, sold out with audiences of 19,000. During that same year, nu metal bands like Papa Roach and Limp Bizkit joined rappers like Eminem and Xzibit on Eminem's Anger Management Tour, which had sold-out concerts. Late in 2000, Linkin Park released their debut album Hybrid Theory, which became the best-selling debut album by any artist of any genre in the 21st century. The album was also the best-selling album of 2001, selling more than albums such as Celebrity by NSYNC and Hot Shot by Shaggy. Linkin Park earned a Grammy Award for their second single "Crawling". 
Their fourth single, "In the End", was released late in 2001 and peaked at number 2 on the Billboard Hot 100 in March 2002. In 2001, Hybrid Theory sold 4,800,000 copies in the United States, making it the highest-selling album of the year. Hybrid Theory was certified 12x platinum (diamond) by the RIAA and sold at least 10,222,000 copies in the United States. In 2000, Godsmack released their second studio album Awake, which was certified double platinum. The album's title track peaked at number 1 on the Mainstream Rock chart, and both the title track and the song "Sick of Life" have been featured in the United States Navy's television commercials. Crazy Town's debut album The Gift of Game peaked at number 9 on the Billboard 200, went platinum in February 2001, and sold at least 1,500,000 copies in the United States; worldwide, the album sold at least 2,500,000 copies. Staind's 2001 album Break the Cycle debuted at number 1 on the Billboard 200 with at least 716,000 copies sold in its first week of release, outselling albums such as Survivor by Destiny's Child, Lateralus by Tool and Miss E... So Addictive by Missy Elliott. Break the Cycle was certified 5x platinum by the RIAA, with 4,240,000 copies sold in 2001 in the United States. Although the album featured nu metal tracks, much of it showed Staind moving toward a softer sound. Noting this change in style, Tommy Udo wrote in Brave Nu World: "It's often said that nobody over the age of 24 could possibly like Limp Bizkit or Korn, but Staind are a more mainstream band and their songs are likely to appeal to a much bigger fanbase." In August 2001, Slipknot released their album Iowa, which peaked at number 3 on the Billboard 200 and went platinum in October 2001. Critic John Mulvey called the album the "absolute triumph of nu metal". P.O.D.'s 2001 album Satellite peaked at number 6 on the Billboard 200. 
P.O.D.'s popularity continued into 2002. On June 5, 2001, Drowning Pool released a nu metal album titled Sinner, which features the song "Bodies". The album went platinum on August 23, 2001, and "Bodies" became one of the most frequently played videos on MTV for new bands. "Bodies" went to number 6 on the Mainstream Rock chart. In 2001, System of a Down's album Toxicity peaked at number 1 on the Billboard 200. In November 2002, Toxicity was certified 3x platinum by the RIAA. System of a Down blended nu metal with occasional influences of Middle Eastern, Greek, Armenian, and jazz music, and the band featured political lyrics. In 2003, MTV wrote that nu metal's mainstream popularity was declining, citing that Korn's fifth album Untouchables and Papa Roach's third album Lovehatetragedy both sold less than the bands' previous releases. Korn's lead vocalist Jonathan Davis blamed the sales of Untouchables on music piracy, as the album had been leaked to the Internet more than four months before its official release date. MTV also wrote that nu metal bands were being played less frequently on radio stations and that MTV itself had begun focusing on other musical genres. MTV wrote that Papa Roach's third album Lovehatetragedy has fewer hip hop elements than the band's previous album Infest, and that Saliva's 2002 album Back into Your System has fewer such elements than the band's 2001 album Every Six Seconds. MTV also wrote that Crazy Town's second album Darkhorse had no hit singles and sold less than the band's previous album The Gift of Game. MTV wrote that although Kid Rock's album Cocky had characteristics of the musician's 1998 album Devil Without a Cause, the Cocky song "Forever", which recalled the style of Kid Rock's "Bawitdaba", was not as popular as the album's country song "Picture". MTV also wrote, "Another cause for nü-metal and rap-rock's slip from the spotlight could be a diluted talent pool caused by so many similar-sounding bands. 
American Head Charge, Primer 55, Adema, Cold, the Union Underground, Dope, Apartment 26, Hed (Planet Earth) and Skrape—all of whom released albums between 2000 and 2001—left more of a collective impression than individual ones". Despite what MTV wrote, the RIAA certified Korn's album Untouchables platinum in July 2002, and one of the album's singles, "Here to Stay", received heavy radio play and peaked at number one on MTV's Total Request Live twice. Untouchables sold at least 434,000 copies in its first week of release and peaked at number 2 on the Billboard 200, though it still did not sell as many copies as Korn's most commercially successful album, Follow the Leader. Despite the MTV report that nu metal was declining, the genre remained extremely popular with bands such as Linkin Park, Godsmack, and Evanescence. Linkin Park's remix album Reanimation was released in July 2002 and sold more than a million copies that year, which MTV described as "impressive for a remix album". Evanescence's debut album Fallen was released in March 2003. Johnny Loftus of AllMusic noted the nu metal sound of the album. Fallen's Grammy Award-winning lead single "Bring Me to Life" peaked at number 5 on the Billboard Hot 100 chart and number 1 on the Mainstream Top 40 chart. In 2003, Linkin Park's album Meteora peaked at number 1 on the Billboard 200 and sold at least 810,000 copies in its first week of release. Meteora and Fallen ranked third and fourth, respectively, among the best-selling albums of 2003. Both Linkin Park and Evanescence released high-charting singles throughout 2003. Fallen sold at least 7,600,000 copies in the United States and Meteora sold at least 6,100,000 copies in the United States. That same year, Godsmack released their third studio album Faceless, which peaked at number 1 on the Billboard 200 and was certified platinum by the RIAA within five weeks of release. 
2003–2010: Decline in popularity Most of nu metal's mainstream popularity sharply declined in 2003 and 2004, after a period of mainstream success for bands such as Godsmack, Linkin Park and Evanescence. Limp Bizkit's 2003 album Results May Vary features alternative rock music and peaked at number 3 on the Billboard 200, with sales of at least 325,000 copies in its first week of release. Within three weeks of release, the album had sold at least 500,000 copies, and in 2004, Blabbermouth.net reported that, according to Nielsen SoundScan, Results May Vary had sold 1,337,356 copies in the United States. However, the album garnered very poor critical reception and consequently performed much worse commercially than previous Limp Bizkit albums such as Significant Other and Chocolate Starfish and the Hot Dog Flavored Water. Korn's 2003 album Take a Look in the Mirror sold less than previous Korn albums like Issues and Untouchables. In 2004, 1970s and 1980s-inspired rock bands such as Jet and The Darkness were achieving mainstream success as the popularity of nu metal declined. During this period, the popularity of emo exceeded the declining popularity of nu metal, and metalcore, a fusion of extreme metal and hardcore punk, became one of the most popular genres in the new wave of American heavy metal. In the mid-to-late 2000s, many nu metal bands experimented with other genres and sounds. Linkin Park's third studio album Minutes to Midnight, released in 2007, was noted for its complete departure from the band's nu metal sound. Nu metal bands such as Disturbed and Drowning Pool moved away from the nu metal sound, and Slipknot departed from it by incorporating elements of groove metal, death metal and thrash metal into their music. Staind and Papa Roach moved to lighter sounds. 
Staind's 2003 album 14 Shades of Grey was significantly less heavy than previous albums and marked the band's departure from nu metal and a movement towards a lighter sound. Papa Roach abandoned the nu metal genre with their 2004 album Getting Away with Murder, moving to a hard rock style. System of a Down released two albums in 2005, Mezmerize and Hypnotize. Both did well commercially and critically, but the band took a more alternative metal approach on the two albums compared to their past three efforts. In 2005, Limp Bizkit released a record called The Unquestionable Truth (Part 1) without promotion or advertising. The album was not very popular; its sales fell 67% during its second week of release. In 2006, Limp Bizkit went on hiatus. By 2004, the popularity of nu metal was gone, and metalcore replaced nu metal as the most prominent heavy metal genre with the success of bands like Killswitch Engage and Shadows Fall. Other metalcore bands, including God Forbid, Unearth, Trivium, and Bullet for My Valentine, were also popular, and the groove metal band Lamb of God also became successful in the heavy metal genre. Stephen Hill of Louder Sound called the rise of metalcore after the decline of nu metal "the metalcore revolution". 2011–present: Revivals and fusion with other genres During the mid-2010s, there was discussion within media of a possible nu metal revival because of bands fusing nu metal with other genres. Despite the lack of radio play, some nu metal bands recaptured part of their former popularity as they released albums in a nu metal style. Many metalcore and deathcore groups, such as My Ticket Home, Stray from the Path, Emmure, Of Mice & Men, Suicide Silence, and Issues, gained moderate popularity in the 2010s and used elements from nu metal. This fusion has sometimes been referred to as "nu metalcore". 
Suicide Silence's 2011 album The Black Crown, which features elements of nu metal and deathcore, peaked at number 28 on the Billboard 200. In 2014, Issues' self-titled debut album peaked at number 9 on the same chart; the album features elements of metalcore, nu metal, pop and R&B. Of Mice & Men's 2014 album Restoring Force, which features elements of nu metal, peaked at number 4 on the Billboard 200. In 2015, Bring Me the Horizon, often described as a metalcore band, released their fifth album That's the Spirit, which peaked at number 2 on the Billboard 200. The album draws from multiple genres, including nu metal, and the band experimented further with nu metal on their 2020 album Post Human: Survival Horror. The band's keyboardist has described them as a nu metal band. A nu metal revival began in the mid-2010s, with groups like Blood Youth, Cane Hill, Sworn In, DangerKids and Islander. Within this movement, nu metalcore became increasingly prominent through the popularity of groups like Vein.fm, Loathe and Code Orange. According to PopMatters writer Ethan Stewart, Code Orange's 2017 album Forever led to nu metalcore becoming "one of the most prominent flavors of contemporary metal". In contrast, Metal Hammer writer Dannii Leivers cited the aforementioned groups as merely hinting towards a revival, instead claiming a revival began in 2021 with "a crop of young revivalists ... looking to put a brand-new spin on the music of their formative years", namely Tetrarch. In the mid-to-late 2010s, genres like emo rap and trap metal emerged. Electronic and art pop singer-songwriters incorporated nu metal into their sound in the late 2010s and 2020s: Poppy on her albums Am I a Girl? and I Disagree, Grimes on her album Miss Anthropocene, and Rina Sawayama on Sawayama. The singles "We Appreciate Power" and "Play Destroy" were pioneering examples. Poppy has described this fusion as "nu-Poppy" or "Poppymetal". 
I Disagree received critical acclaim for this fusion, with the single "Bloodmoney" nominated for the 2021 Grammy Award for Best Metal Performance, making Poppy the first female solo artist to be nominated for the award in its history. Dorian Electra incorporated nu metal influences on their album My Agenda, as did Ashnikko on Demidevil, particularly on the single "Cry". The Guardian noted that these mostly female artists have revived a previously male-dominated genre and successfully adapted it to showcase a female perspective. Rina Sawayama said, "metal itself lends itself to toxic masculine tropes, but it's also almost taking the piss out of a very masculine expression of emotion". Criticism and controversy Despite its popularity in the late 1990s and early 2000s, nu metal has often been criticized by many fans of heavy metal music, often being labelled with derogatory terms such as "mallcore" and "whinecore". Gregory Heaney of AllMusic called nu metal "one of metal's more unfortunate pushes into the mainstream". Lucy Jones of NME called nu metal "the worst genre of all time". In Metal: The Definitive Guide, Garry Sharpe-Young described the genre as "a dumbed-down and—thankfully short[-]lived exercise". When Machine Head moved to the nu metal genre with their album The Burning Red and their vocalist Robb Flynn spiked his hair in the fashion of many nu metal musicians, the band were accused of "selling out" and many fans criticized their change of appearance and musical style. Machine Head's drummer Dave McClain said, "Pissing people off isn't a bad thing, you know? For people to be narrow-minded is bad ... [i]t doesn't bother us at all, we know we're going to piss people off with this record, but some people hopefully will actually sit down and listen to the whole record". 
Jonathan Davis, the vocalist of Korn, also spoke about the criticism of nu metal from heavy metal fans. Lamb of God's vocalist Randy Blythe criticized the nu metal genre and spoke about its loss of popularity in 2004, saying: "Nu-metal sucks, so that's why that's dying off. And I think... people are ready for angrier music. I think people are ready for something that's real, not, you know, 'I did it all for the nookie.'" Megadeth frontman Dave Mustaine said he would "rather have his eyelids pulled out" than listen to nu metal. Guitarist Gary Holt of Exodus and Slayer said that he "was so glad about" the decline of nu metal. Criticism from musicians who inspired nu metal Some musicians who influenced nu metal have tried to distance themselves from the subgenre and its bands. Mike Patton, the vocalist of Faith No More and Mr. Bungle, tried to distance himself from the subgenre and criticized it, even though he is featured on the song "Lookaway" on Sepultura's album Roots, which is often considered a nu metal album. Patton said of his music's influence on nu metal, "I feel no responsibility for that, it's their mothers' fault, not mine". Helmet frontman Page Hamilton said, "It's frustrating that people write [us] off because we're affiliated with or credited with or discredited with creating nu metal and rap metal ... which we sound nothing like". Although Trent Reznor of Nine Inch Nails has said he knows some Korn members and that he thinks they are "cool guys", he has also criticized nu metal. In response to reports that Fred Durst, lead singer of Limp Bizkit, is a big fan of Tool, the latter's vocalist Maynard James Keenan said, "If the lunch-lady in high school hits on you, you appreciate the compliment, but you're not really gonna start dating the lunch-lady, are ya?" 
While Durst has cited Rage Against the Machine as a major influence, Rage Against the Machine's bassist Tim Commerford is open about his hatred of Limp Bizkit, describing them as "one of the dumbest bands in the history of music". At the 2000 MTV Video Music Awards, Limp Bizkit won the Best Rock Video category for their song "Break Stuff", beating Rage Against the Machine's "Sleep Now in the Fire". When Limp Bizkit accepted
pcurses was maintained by various people through 1986. ncurses The pcurses library was further improved when Zeyd Ben-Halim took over the development effort in late 1991. The new library was released as ncurses in November 1993, with version 1.8.1 as the first major release. Subsequent work, through version 1.8.8 (1995), was driven by Eric S. Raymond, who added the form and menu libraries written by Juergen Pfeifer. Since 1996, it has been maintained by Thomas E. Dickey. Most ncurses calls can be easily ported to the old curses. System V curses implementations can support BSD curses programs with just a recompilation. However, a few areas are problematic, such as handling terminal resizing, since no counterpart exists in the old curses. Terminal database Ncurses can use either terminfo (with extensible data) or termcap. Other implementations of curses generally use terminfo; a minority use termcap. Few (mytinfo was an older exception) use both. License Ncurses is a part of the GNU Project, but is not distributed under the GNU GPL or LGPL. Instead, it is distributed under a permissive free software licence, i.e., the MIT License. This is due to the agreement made with the Free Software Foundation at the time the developers assigned their copyright. When the agreement was made to pass on the rights to the FSF, there was a clause that stated: The Foundation promises that all distribution of the Package, or of any work "based on the Package", that takes place under the control of the Foundation or its agents or assignees, shall be on terms that explicitly and perpetually permit anyone possessing a copy of the work to which the terms apply, and possessing accurate notice of these terms, to redistribute copies of the work to anyone on the same terms. According to the maintainer Thomas E. Dickey, this precludes relicensing to the GPL in any version, since it would place restrictions on the programs that link to the libraries. 
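The terminfo database described above can be queried directly. As a hedged illustration (not part of the ncurses sources themselves): Python's standard curses module is a binding to ncurses on most Unix systems, and its setupterm/tigetstr functions expose terminfo lookups without starting a full-screen session. The "xterm" entry is assumed to exist in the local terminfo database, as it does on virtually all Unix installations.

```python
# Sketch: querying the terminfo database through Python's standard
# "curses" module, which wraps ncurses on most Unix systems.
# Assumes the local terminfo database contains an "xterm" entry.
import curses

curses.setupterm("xterm")             # load the terminfo entry for xterm

clear = curses.tigetstr("clear")      # escape sequence that clears the screen
colors = curses.tigetnum("colors")    # number of colors the terminal supports

print(clear is not None)              # xterm defines a clear-screen capability
print(colors >= 8)                    # xterm supports at least 8 colors
```

A termcap-style program would instead look up the older two-letter capability names (for example, cl for clear-screen) through the termcap interface, which ncurses also emulates.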
Programs using ncurses There are hundreds of programs which use ncurses. Some, such as GNU
which was itself an enhancement over the discontinued 4.4 BSD curses. The XSI Curses standard issued by X/Open is explicitly and closely modeled on System V. curses The first curses library was developed at the University of California at Berkeley, for a BSD operating system, around 1980 to support Rogue, a text-based adventure game. It originally used the termcap library, which was used in other programs, such as the vi editor. The success of the BSD curses library prompted Bell Labs to release an enhanced curses library in their System V Release 2 Unix systems. This library was more powerful and instead of using termcap, it used terminfo. However, due to AT&T policy regarding source-code distribution, this improved curses library did not have much acceptance in the BSD community. pcurses Around 1982, Pavel Curtis started work on a freeware clone of the Bell Labs curses, named pcurses, which was maintained by various people through 1986. 
that became the World Boxing Association in 1962 Nepal Basketball Association, the national basketball association of Nepal participating in the FIBA Asia zone of the International Basketball Federation (FIBA) Nippon Badminton Association, the national governing body for the sport of badminton in Japan Other uses .nba, a file extension used by Nero software. NBA (video game series), the video game series based on the National Basketball Association NBA (2005 video game), a 2005 basketball video game Narmada Bachao Andolan, a political movement in India against a dam built on the Narmada River National Bank Act,
a higher education accreditation body in India National Book Award, an award given for literary achievement in the United States National Braille Association (United States) Net Book Agreement (United Kingdom) Neue Bach-Ausgabe, the second complete edition of the music of J. S. Bach Newcastle Brown Ale (United Kingdom) Nihon Bus Association North British Academy of Arts (United Kingdom) News Broadcasters Association, a private organization of broadcasters in India. National
conference (18 games against each opponent, 9 at home, 9 on the road), and 150 games against the other conference (10 games against each team, 5 at home, 5 on the road). The NBA is also the only league that regularly schedules games on Christmas Day. The league has been playing games regularly on the holiday since 1947, though the first Christmas Day games were not televised until . Games played on this day have featured some of the best teams and players. Christmas is also notable for NBA on television, as the holiday is when the first NBA games air on network television each season. Games played on this day have been some of the highest-rated games during a particular season. In February, the regular season pauses to celebrate the annual NBA All-Star Game. Fans vote throughout the United States, Canada, and on the Internet, and the top vote-getters in each conference are named captains. Fan votes determine the rest of the All-Star starters. Coaches vote to choose the remaining 14 All-Stars. Then, the top vote-getters in each conference draft their own team from a player pool of All-Stars. The top vote-getter in the league earns first pick and so forth. The player with the best performance during the game is awarded the Game MVP award. Other attractions of the All-Star break include the Rising Stars Challenge (originally Rookie Challenge), where the top rookies and second-year players in the NBA play in a 5-on-5 basketball game, with the current format pitting U.S. players against those from the rest of the world; the Skills Challenge, where players compete to finish an obstacle course consisting of shooting, passing, and dribbling in the fastest time; the Three Point Contest, where players compete to score the highest number of three-point field goals in a given time; and the NBA Slam Dunk Contest, where players compete to dunk the ball in the most entertaining way according to the judges. 
These other attractions have varying names which include the names of the various sponsors who have paid for naming rights. Shortly after the All-Star break is the trade deadline, which is set to fall on the 16th Thursday of the season (usually in February) at 3 p.m. Eastern Time. After this date, teams are not allowed to exchange players with each other for the remainder of the season, although they may still sign and release players. Major trades are often completed right before the trading deadline, making that day a hectic time for general managers. Around the middle of April, the regular season ends. It is during this time that voting begins for individual awards, as well as the selection of the honorary, league-wide, post-season teams. The Sixth Man of the Year Award is given to the best player coming off the bench (he must have come off the bench in more games than he started). The Rookie of the Year Award is awarded to the most outstanding first-year player. The Most Improved Player Award is awarded to the player who is deemed to have shown the most improvement from the previous season. The Defensive Player of the Year Award is awarded to the league's best defender. The Coach of the Year Award is awarded to the coach who has made the most positive difference to a team. The Most Valuable Player Award is given to the player deemed the most valuable to his team that season. Additionally, Sporting News awards an unofficial (but widely recognized) Executive of the Year Award to the general manager who is adjudged to have performed the best job for the benefit of his franchise. The post-season teams are the All-NBA Team, the All-Defensive Team, and the All-Rookie Team; each consists of five players. There are three All-NBA teams, consisting of the top players at each position, with first-team status being the most desirable. There are two All-Defensive teams, consisting of the top defenders at each position. 
There are also two All-Rookie teams, consisting of the top first-year players regardless of position. Playoffs The NBA playoffs begin in April after the conclusion of the regular season, with the top eight teams in each conference, regardless of divisional alignment, competing for the league's championship title, the Larry O'Brien Championship Trophy. Seeds are awarded in strict order of regular-season record (with a tiebreaker system used as needed). Having a higher seed offers several advantages. Since the first seed begins the playoffs playing against the eighth seed, the second seed plays the seventh, the third seed plays the sixth, and the fourth seed plays the fifth, a higher seed means a team faces a weaker opponent in the first round. The team in each series with the better record has home-court advantage, including in the first round. Before the league changed its playoff determination format for the 2006–07 season, this meant that, for example, if the team holding the sixth seed had a better record than the team that received the third seed by virtue of a divisional championship, the sixth seed would have home-court advantage, even though the other team had a higher seed. Under the current system, the team with the best regular-season record in the league is guaranteed home-court advantage in every series it plays. For example, in 2006, the Denver Nuggets won 44 games and captured the Northwest Division and the third seed. Their opponent was the sixth-seeded Los Angeles Clippers, who won 47 games and finished second in the Pacific Division. Although Denver won its much weaker division, the Clippers had home-court advantage and won the series in five games. The playoffs follow a tournament format. Each team plays an opponent in a best-of-seven series, with the first team to win four games advancing into the next round, while the other team is eliminated from the playoffs. 
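The first-round pairing rule described above (first seed versus eighth, second versus seventh, and so on) can be sketched in a few lines; this is an illustrative snippet, not league software:

```python
# First-round matchups in one conference: seed k plays seed (9 - k),
# so a higher seed is always paired with a weaker opponent.
def first_round_matchups(num_seeds=8):
    return [(k, num_seeds + 1 - k) for k in range(1, num_seeds // 2 + 1)]

print(first_round_matchups())  # [(1, 8), (2, 7), (3, 6), (4, 5)]
```

The same formula generalizes to any bracket size that is a power of two, which is why the winners of the 1–8 and 4–5 series meet next: their seed sums are both 9.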
In the next round, the successful team plays against another advancing team of the same conference. All but one team in each conference are eliminated from the playoffs. Since the NBA does not re-seed teams, the playoff bracket in each conference uses a traditional design, with the winner of the series matching the first- and eighth-seeded teams playing the winner of the series matching the fourth- and fifth-seeded teams, and the winner of the series matching the second- and seventh-seeded teams playing the winner of the series matching the third- and sixth-seeded teams. In every round, the best-of-seven series follows a 2–2–1–1–1 home-court pattern, meaning that one team will have home court in games 1, 2, 5, and 7, while the other plays at home in games 3, 4, and 6. From 1985 to 2013, the NBA Finals followed a 2–3–2 pattern, meaning that one team had home court in games 1, 2, 6, and 7, while the other played at home in games 3, 4, and 5. The final playoff round, a best-of-seven series between the victors of both conferences, is known as the NBA Finals and is held annually in June. The winner of the NBA Finals receives the Larry O'Brien Championship Trophy. Each player and major contributor on the winning team, including coaches and the general manager, receives a championship ring. In addition, the league awards the Bill Russell NBA Finals Most Valuable Player Award to the best-performing player of the series. The league began using its current format, with the top eight teams in each conference advancing regardless of divisional alignment, in the 2015–16 season. Previously, the top three seeds went to the division winners. Championships The Los Angeles Lakers and the Boston Celtics have won the most championships, with 17 NBA Finals wins each. The Golden State Warriors and Chicago Bulls are tied for the third-most championships, with six each. Following them are the San Antonio Spurs with five championships, all since 1999. 
Current teams that have no NBA Finals appearances: Charlotte Hornets (formerly Charlotte Bobcats) Denver Nuggets Los Angeles Clippers (formerly Buffalo Braves, San Diego Clippers) Memphis Grizzlies (formerly Vancouver Grizzlies) Minnesota Timberwolves New Orleans Pelicans (formerly New Orleans Hornets, New Orleans/Oklahoma City Hornets) Media coverage As one of the major sports leagues in North America, the NBA has a long history of partnerships with television networks in the United States. The NBA signed a contract with the DuMont Television Network in its eighth season, 1953–54, marking the first year the NBA had a national television broadcaster. As with the National Football League, the scarcity of television stations led to NBC taking over the rights from the 1954–55 season until April 7, 1962, in NBC's first tenure with the NBA. Currently in the U.S., the NBA has a contract with ESPN and TNT through the 2024–25 season. Games that are not broadcast nationally are usually aired over regional sports networks specific to the areas where the teams are located. International competitions The National Basketball Association has sporadically participated in international club competitions. From 1987 to 1999, an NBA team played against championship club teams from Asia, Europe and South America in the McDonald's Championship. This tournament was won by the NBA invitee every year it was held. Ticket prices and viewership demographics In 2012, tickets cost from $10 to $3,000, depending on the location of the seat and the success of the teams playing. In 2020, ticket prices for the NBA All-Star Game became more expensive than ever before, averaging around $2,600, and even more on the secondary market. 
Viewership demographics According to a Nielsen survey, in 2013 the NBA had the youngest audience, with 45 percent of its viewers under 35, but it was the least likely, along with Major League Baseball, to be watched by women, who made up only 30 percent of the viewership. As of 2014, 45 percent of its viewers were black, while 40 percent were white, making it the only top North American sport without a white majority audience. As of 2017, the NBA's popularity had further declined among white Americans, who, during the 2016–17 season, made up only 34 percent of the viewership. At the same time, black viewership increased to 47 percent, while Hispanic (of any race) viewership stood at 11 percent and Asian viewership at 8 percent. According to the same poll, the NBA was favored more strongly by Democrats than Republicans. Outside the U.S., the NBA's biggest international market is China, where an estimated 800 million viewers watched the 2017–18 season. NBA China is worth approximately $4 billion. Controversies and criticism The NBA has been involved in a number of controversies over the years and has received a significant amount of criticism. Notable people Presidents and commissioners Maurice Podoloff, President from 1946 to 1963 Walter Kennedy, President from 1963 to 1967 and Commissioner from 1967 to 1975 Larry O'Brien, Commissioner from 1975 to 1984 David Stern, Commissioner from 1984 to 2014 Adam Silver, Commissioner from 2014 to present Players NBA 75th Anniversary Team Lists of National Basketball Association players List of foreign NBA players, a list exclusively for players who are not from the United States Foreign players International influence Following pioneers like Vlade Divac (Serbia) and Dražen Petrović (Croatia), who joined the NBA in the late 1980s, an increasing number of international players have moved directly from playing elsewhere in the world to starring in the NBA. 
Below is a short list of foreign players who have won NBA awards or have been otherwise recognized for their contributions to basketball, either currently or formerly active in the league: Dražen Petrović, Croatia – 2002 inductee into the Naismith Memorial Basketball Hall of Fame, four-time Euroscar winner, two-time Mr. Europa winner, MVP of the 1986 FIBA World Championship and EuroBasket 1989, two-time Olympic silver medalist, World champion, European champion, 50 Greatest EuroLeague Contributors. Vlade Divac, Serbia – 2019 inductee into the Naismith Memorial Basketball Hall of Fame, two-time Olympic silver medalist, 2001 NBA All-Star, two-time World champion, three-time European champion, 1989 Mr. Europa winner, 50 Greatest EuroLeague Contributors. Šarūnas Marčiulionis, Lithuania – 2014 inductee into the Naismith Memorial Basketball Hall of Fame. First player from the Soviet Union and one of the first Europeans to sign a contract with an NBA club and to play solidly in the league, helping to lead the way for the internationalization of the league in the late 1990s. Toni Kukoč, Croatia – 2021 inductee into the Naismith Memorial Basketball Hall of Fame, three-time NBA champion with Chicago Bulls (1996, 1997, 1998), 1996 Sixth Man Award winner, named in 2008 as one of the 50 Greatest EuroLeague Contributors. Arvydas Sabonis, Lithuania – 2011 inductee into the Naismith Memorial Basketball Hall of Fame, five-time Euroscar winner, two-time Mr. Europa winner, Olympic gold medalist in 1988 with the Soviet Union and bronze medalist in 1992 and 1996 with Lithuania, 1996 NBA All-Rookie First Team, 50 Greatest EuroLeague Contributors. Peja Stojaković, Serbia – NBA champion with Dallas Mavericks (2011), MVP of the EuroBasket 2001, member of the all-tournament team in the 2002 FIBA World Championship, 2001 Euroscar winner, two-time Mr. Europa winner, two-time NBA Three-Point Shootout champion, three-time NBA All-Star. 
Dirk Nowitzki, Germany – NBA champion with Dallas Mavericks (2011), MVP of the 2002 FIBA World Championship and EuroBasket 2005, member of the all-tournament team in the 2002 FIBA World Championship, six-time Euroscar winner, 2005 Mr. Europa, two-time FIBA Europe Player of the Year, 2007 NBA MVP, 2011 Bill Russell NBA Finals Most Valuable Player Award, 2006 NBA Three-Point Shootout champion and 14-time NBA All-Star. Hedo Türkoğlu, Turkey – 2008 Most Improved Player Award winner, member of the all-tournament team in the 2010 FIBA World Championship. Pau Gasol, Spain – two-time NBA champion with Los Angeles Lakers (2009 and 2010), six-time NBA All-Star, 2002 NBA Rookie of the Year, two-time Mr. Europa, 2006 FIBA World Championship MVP, four-time Euroscar, two-time FIBA Europe Player of the Year, MVP of the EuroBasket 2009 and EuroBasket 2015, winner of the NBA Citizenship Award in 2012. Andrei Kirilenko, Russia – 2004 NBA All-Star, MVP of the EuroBasket 2007, 2007 FIBA Europe Player of the Year. Tony Parker, France – four-time NBA champion with the San Antonio Spurs, 2007 NBA Finals MVP, six-time NBA All-Star and 2007 Euroscar winner. Manu Ginóbili, Argentina – four-time NBA champion with San Antonio Spurs, 2008 Sixth Man Award winner, two-time NBA All-Star, 50 Greatest EuroLeague Contributors, Olympic gold medalist in 2004 with Argentina. Yao Ming, China – 2016 inductee into the Naismith Memorial Basketball Hall of Fame, first overall pick in the 2002 NBA draft and eight-time NBA All-Star. Leandro Barbosa, Brazil – NBA champion with Golden State Warriors (2015), 2007 Sixth Man Award winner. Andrea Bargnani, Italy – first overall pick in the 2006 NBA draft by the Toronto Raptors. Giannis Antetokounmpo, Greece – NBA champion with the Milwaukee Bucks (2021), 2021 NBA Finals MVP, two-time NBA MVP, 2017 Most Improved Player, five-time NBA All-Star. Nikola Jokić, Serbia – 2021 NBA MVP, three-time NBA All-Star, 2016 NBA All-Rookie First Team, Olympic silver medalist. 
Luka Dončić, Slovenia – 2019 NBA Rookie of the Year, two-time NBA All-Star, European champion. On some occasions, young players, most but not all from the English-speaking world, have attended U.S. colleges before playing in the NBA. Notable examples are: Nigerian Hakeem Olajuwon – first overall pick in the 1984 NBA draft, two-time champion, 12-time NBA All-Star, 1994 NBA MVP, two-time NBA Finals MVP, two-time NBA Defensive Player of the Year (the only player to receive the MVP Award, Defensive Player of the Year Award, and Finals MVP Award in the same season), and Hall of Famer. Congolese Dikembe Mutombo – fourth overall pick in the 1991 NBA draft, four-time NBA Defensive Player of the Year, eight-time NBA All-Star and Hall of Famer. Dutchman Rik Smits – second overall pick in the 1988 NBA draft, 1998 NBA All-Star, played 12 years for the Indiana Pacers. German Detlef Schrempf – two-time NBA Sixth Man Award winner, three-time NBA All-Star. Canadians Steve Nash (two-time NBA MVP, eight-time NBA All-Star, Hall of Famer) and Andrew Wiggins (first overall pick in the 2014 NBA draft, 2015 NBA Rookie of the Year). Australians Luc Longley (three-time champion with the Chicago Bulls), Andrew Bogut (first overall pick in the 2005 NBA draft, 2015 NBA champion with Golden State Warriors) and Ben Simmons (first overall pick in the 2016 NBA draft, 2018 NBA Rookie of the Year, three-time NBA All-Star). Sudanese-born Englishman Luol Deng – 2007 NBA Sportsmanship Award winner, two-time NBA All-Star. Cameroonians Joel Embiid (four-time NBA All-Star, 2017 NBA All-Rookie First Team) and Pascal
position of commissioner. During that season's playoffs, the Bobcats officially reclaimed the Hornets name, and by agreement with the league and the Pelicans, also received sole ownership of all history, records, and statistics from the Pelicans' time in Charlotte. As a result, the Hornets are now officially considered to have been founded in 1988, suspended operations in 2002, and resumed in 2004 as the Bobcats, while the Pelicans are officially treated as a 2002 expansion team. (This is somewhat similar to the relationship between the Cleveland Browns and Baltimore Ravens in the NFL.) Donald Sterling, who was then-owner of the Los Angeles Clippers, received a lifetime ban from the NBA on April 29, 2014, after racist remarks he made became public. Sterling was also fined US$2.5 million, the maximum allowed under the NBA Constitution. Becky Hammon was hired by the San Antonio Spurs on August 5, 2014, as an assistant coach, becoming the second female coach in NBA history but the first full-time coach. This also makes her the first full-time female coach in any of the four major professional sports in North America. The NBA announced on April 15, 2016, that it would allow all 30 of its teams to sell corporate sponsor advertisement patches on official game uniforms, beginning with the 2017–18 season. The sponsorship advertisement patches would appear on the left front of jerseys, opposite Nike's logo, marking the first time a manufacturer's logo would appear on NBA jerseys, and would measure approximately 2.5 by 2.5 inches. The NBA would become the first major North American professional sports league to allow corporate sponsorship logos on official team uniforms, and the last to have a uniform manufacturer logo appear on its team uniforms. The first team to announce a jersey sponsorship was the Philadelphia 76ers, who agreed to a deal with StubHub. 
On July 6, 2017, the NBA unveiled an updated rendition of its logo; it was largely identical to the previous design, except with revised typography and a "richer" color scheme. The league began to phase in the updated logo across its properties during the 2017 NBA Summer League. The NBA also officially released new Nike uniforms for all 30 teams beginning with the 2017–18 season. The league eliminated "home" and "away" uniform designations. Instead, each team would have four or six uniforms: the "Association" edition, which is the team's white uniform, the "Icon" edition, which is the team's color uniform, and the "Statement" and "City" uniforms, which most teams use as an alternate uniform. In 2018, the NBA also released the "Earned" uniform. Teams The NBA originated in 1946 with 11 teams, and through a sequence of team expansions, reductions and relocations currently consists of 30 teams. The United States is home to 29 teams; another is in Canada. The current league organization divides 30 teams into two conferences of three divisions with five teams each. The current divisional alignment was introduced in the 2004–05 season. Reflecting the population distribution of the United States and Canada as a whole, most teams are in the eastern half of the country: 13 teams are in the Eastern Time Zone, nine in the Central, three in the Mountain, and five in the Pacific. Notes An asterisk (*) denotes a franchise move. See the respective team articles for more information. The Fort Wayne Pistons, Minneapolis Lakers and Rochester Royals all joined the NBA (BAA) in 1948 from the NBL. The Syracuse Nationals and Tri-Cities Blackhawks joined the NBA in 1949 as part of the BAA-NBL absorption. The Indiana Pacers, New York Nets, San Antonio Spurs, and Denver Nuggets all joined the NBA in 1976 as part of the ABA–NBA merger. The Charlotte Hornets are regarded as a continuation of the original Charlotte franchise, which suspended operations in 2002 and rejoined the league in 2004. 
They were known as the Bobcats from 2004 to 2014. The New Orleans Pelicans are regarded as being established as an expansion team in 2002, originally known as the New Orleans Hornets until 2013. Regular season Following the summer break, teams begin training camps in late September. Training camps allow the coaching staff to evaluate players (especially rookies), scout the team's strengths and weaknesses, prepare the players for the rigorous regular season and determine the 12-man active roster (and a 3-man inactive list) with which they will begin the regular season. Teams have the ability to assign players with less than two years of experience to the NBA G League. After training camp, a series of preseason exhibition games are held. Preseason matches are sometimes held in non-NBA cities, both in the United States and overseas. The NBA regular season begins in the last week of October. During the regular season, each team plays 82 games, 41 each home and away. A team faces opponents in its own division four times a year (16 games). Each team plays six of the teams from the other two divisions in its conference four times (24 games), and the remaining four teams three times (12 games). Finally, each team plays all the teams in the other conference twice apiece (30 games). This asymmetrical structure means the strength of schedule will vary between teams (but not as significantly as the NFL or MLB). 
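The 82-game arithmetic above can be verified with a short calculation; this is a sketch whose opponent counts are simply those given in the text:

```python
# Regular-season schedule breakdown for one NBA team, per the counts above:
# 4 division rivals played four times, 6 in-conference teams played four
# times, 4 in-conference teams played three times, and 15 teams in the
# other conference played twice.
division_games = 4 * 4        # 16 games
conference_four_times = 6 * 4 # 24 games
conference_three_times = 4 * 3  # 12 games
interconference = 15 * 2      # 30 games

total = division_games + conference_four_times + conference_three_times + interconference
print(total)  # 82
```

Note the asymmetry the text mentions: which four in-conference teams a club plays only three times varies from season to season, which is what makes schedule strength differ between teams.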
second Euler angle. If it is not caused by forces external to the body, it is called free nutation or Euler nutation. A pure nutation is a movement of a rotational axis such that the first Euler angle is constant. Therefore it can be seen that the circular red arrow in the diagram indicates the combined effects of precession and nutation, while nutation in the absence of precession would only change the tilt from vertical (second Euler angle). However, in spacecraft dynamics, precession (a change in the first Euler angle) is sometimes referred to as nutation.

Rigid body

If a top is set at a tilt on a horizontal surface and spun rapidly, its rotational axis starts precessing about the vertical. After a short interval, the top settles into a motion in which each point on its rotation axis follows a circular path. The vertical force of gravity produces a horizontal torque τ about the point of contact with the surface; the top rotates in the direction of this torque with an angular velocity Ω such that at any moment

τ = Ω × L (vector cross product),

where L is the instantaneous angular momentum of the top. Initially, however, there is no precession, and the upper part of the top falls sideways and downward, thereby tilting. This gives rise to an imbalance in torques that starts the precession. In falling, the top overshoots the amount of tilt at which it would precess steadily and then oscillates about this level. This oscillation is called nutation. If the motion is damped, the oscillations will die down until the motion is a steady precession.

The physics of nutation in tops and gyroscopes can be explored using the model of a heavy symmetrical top with its tip fixed. (A symmetrical top is one with rotational symmetry, or more generally one in which two of the three principal moments of inertia are equal.) Initially, the effect of friction is ignored.
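As an illustrative aside (not part of the source text), the relation τ = Ω × L fixes the rate of steady, slow precession of a fast top: with spin angular momentum of magnitude I₃ω₃ along the symmetry axis and a gravitational torque of magnitude Mgl sin θ, the tilt angle cancels and Ω ≈ Mgl/(I₃ω₃). A minimal numeric sketch in Python, with every parameter value assumed purely for illustration:

```python
import math

# Assumed illustrative parameters for a small toy top (not from the source)
M = 0.10          # mass, kg
g = 9.81          # gravitational acceleration, m/s^2
l = 0.03          # distance from pivot to center of mass, m
I3 = 4.0e-5       # moment of inertia about the symmetry axis, kg m^2
omega3 = 200.0    # spin rate about the symmetry axis, rad/s

# For a fast top, |L| ~ I3*omega3 along the symmetry axis, and the
# gravitational torque M*g*l*sin(theta) drives a slow precession with
# tau = Omega x L, giving Omega ~ M*g*l/(I3*omega3), independent of tilt.
Omega = M * g * l / (I3 * omega3)
period = 2 * math.pi / Omega

print(f"precession rate = {Omega:.3f} rad/s, period = {period:.2f} s")
```

Spinning the top faster (larger ω₃) slows the precession, matching the inverse dependence in the formula.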
The motion of the top can be described by three Euler angles: the tilt angle θ between the symmetry axis of the top and the vertical (second Euler angle); the azimuth φ of the top about the vertical (first Euler angle); and the rotation angle ψ of the top about its own axis (third Euler angle). Thus, precession is the change in φ and nutation is the change in θ.

If the top has mass M and its center of mass is at a distance l from the pivot point, its gravitational potential relative to the plane of the support is

V = Mgl cos θ.

In a coordinate system where the z axis is the axis of symmetry, the top has angular velocities ω₁, ω₂, ω₃ and moments of inertia I₁, I₂, I₃ about the x, y and z axes. Since we are taking a symmetric top, we have I₁ = I₂. The kinetic energy is

T = ½I₁(ω₁² + ω₂²) + ½I₃ω₃².

In terms of the Euler angles, this is

T = ½I₁((dθ/dt)² + (dφ/dt)² sin²θ) + ½I₃(dψ/dt + (dφ/dt) cos θ)².

If the Euler–Lagrange equations are solved for this system, it is found that the motion depends on two constants a and b (each related to a constant of motion). The rate of precession is related to the tilt by

dφ/dt = (b − a cos θ) / sin²θ.

The tilt is determined by a differential equation for u = cos θ of the form

(du/dt)² = f(u),

where f(u) is a cubic polynomial whose coefficients involve a, b and the energy of the top.
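The qualitative behaviour described above (the released top falls, overshoots, and oscillates about the steady tilt while slowly precessing on average) can be reproduced numerically from the same Lagrangian. The sketch below is illustrative only (all parameter values assumed, not from the source); it integrates the θ equation of motion with an RK4 stepper, using the conserved spin ω₃ and conserved azimuthal momentum p_φ:

```python
import math

# Illustrative simulation (parameters assumed, not from the source) of a heavy
# symmetric top released at tilt theta0 with no initial precession: it should
# fall, overshoot, and oscillate (nutate) while precessing slowly on average.
M, g, l = 0.10, 9.81, 0.03   # mass (kg), gravity (m/s^2), pivot-to-CM distance (m)
I1, I3 = 1.2e-4, 4.0e-5      # transverse / symmetry-axis moments of inertia (kg m^2)
omega3 = 200.0               # spin about the symmetry axis (rad/s), conserved
theta0 = 0.5                 # initial tilt (rad)

# Conserved momentum p_phi = I1*(dphi/dt)*sin^2(theta) + I3*omega3*cos(theta);
# with dphi/dt = 0 at release, p_phi = I3*omega3*cos(theta0).
p_phi = I3 * omega3 * math.cos(theta0)

def deriv(state):
    theta, thetadot, phi = state
    s, c = math.sin(theta), math.cos(theta)
    phidot = (p_phi - I3 * omega3 * c) / (I1 * s * s)
    # Euler-Lagrange equation for theta of the heavy symmetric top:
    thetaddot = s * (I1 * phidot**2 * c - I3 * omega3 * phidot + M * g * l) / I1
    return (thetadot, thetaddot, phidot)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(state, k1)))
    k3 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(state, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(state, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

state, dt = (theta0, 0.0, 0.0), 1e-4
thetas = []
for _ in range(5000):        # 0.5 s of motion
    state = rk4_step(state, dt)
    thetas.append(state[0])

phi = state[2]
print(f"tilt oscillates in [{min(thetas):.3f}, {max(thetas):.3f}] rad; "
      f"net precession {phi:.2f} rad")
```

The tilt never rises above its release value and oscillates below it, exactly the overshoot-and-oscillate nutation the text describes, while φ advances steadily.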
Nasco (c. 1510–1561), Franco-Flemish composer and writer on music
Joe Nasco (born 1984), American soccer player
National Association of State Charity Officials, an American association of regulators
Native American Services Corp., a construction company in Kellogg, Idaho
North American Students of Cooperation, a federation of housing cooperatives in Canada and the United States
North American SuperCorridor Coalition, a non-profit
(since 1984): Canada, Denmark (in respect of the Faroe Islands and Greenland), the European Union, Norway, the Russian Federation, and the United States of America. Former participants: Iceland (1984–2009), Finland (1984–1995), and Sweden (1984–1995). NASCO also has 44 NGOs from the different member states that have observer status during the annual meetings.

Organizational organs: the Council, the North American Commission, the North-East Atlantic Commission, the West Greenland Commission, the International Atlantic Salmon Research Board (IASRB), and the Secretariat. The secretariat currently has five full-time employees based at the headquarters in Edinburgh, Scotland. In the council, each member state is represented and decisions are made by a three-quarter majority. The main tasks of the council include: providing a forum for the study, analysis and exchange of information on salmon; coordinating the activities of the commissions; establishing working arrangements with other fisheries and scientific organizations; and making recommendations for scientific research.

References

External links
Official website
Joseph Conrad's novel The Nigger of the 'Narcissus' features a merchant ship named Narcissus. An incident involving the ship, and the difficult decisions made by the crew, explore themes of self-interest versus altruism and humanitarianism. Naomi Iizuka's play Polaroid Stories, a contemporary rewrite of the Orpheus and Eurydice myth, features Narcissus as a character. In the play he is portrayed as a self-obsessed, drug-addicted young man who was raised on the streets. He is implied to be a member of the LGBT+ community and mentions his sexual endeavours with older men, some of which ended in those men's deaths from drug overdoses. He is accompanied by the character Echo, whom he continuously spurns. Film and television In the TV series Boardwalk Empire, a Dr. Narcisse (Valentin Narcisse) is introduced as a condescending intellectual. Scottish-Canadian animator Norman McLaren finished his career with a short film named Narcissus, re-telling the Greek legend through ballet. Narcissus appears in the Disney adaptation of Hercules. In the film, he is portrayed as an Olympian god with purple skin. In the film Bab'Aziz, directed by Nacer Khemir, a Narcissus-like character is portrayed by an ancient prince who sat by a pond day after day, gazing at the reflection of his own soul. He is referred to as 'The prince who contemplated his soul'. Pink Narcissus is an artistic film by James Bidgood about the fantasies of a hustler. The escape craft Ripley boards in the 1979 Ridley Scott film Alien is called the Narcissus. Narcissus is the name of Laurel and Hardy's goat in their 1940 film Saps at Sea. The Neon Demon, a 2016 psychological horror film by Nicolas Winding Refn, is loosely based on the story of Narcissus. Narcissus is the name of the host club in the 2018 Japanese drama Todome no Kiss.
The lead character, Otaro Dojima (Kento Yamazaki), works in the nightclub as a sought-after host under the stage name Eight and just like Narcissus, he is
named for the myth or the myth for the flower, or if there is any true connection at all. Pliny the Elder wrote that the plant was named for its fragrance (ναρκάω narkao, "I grow numb"), not the youth. Family In some versions, Narcissus was the son of the god of the river Cephissus and the nymph Liriope, while Nonnus instead has him as the son of the lunar goddess Selene and her mortal lover Endymion. Mythology Several versions of the myth have survived from ancient sources. The classic version is by Ovid, found in Book 3 of his Metamorphoses. This is the story of Echo and Narcissus. When Liriope gave birth to the handsome child Narcissus, she consulted the seer Tiresias, who predicted that the boy would live a long life only if he never discovered himself. One day Narcissus was walking in the woods when Echo, an Oread (mountain nymph), saw him, fell deeply in love, and followed him. Narcissus sensed he was being followed and shouted "Who's there?" Echo repeated "Who's there?" She eventually revealed her identity and attempted to embrace him. He stepped away and told her to leave him alone. She was heartbroken and spent the rest of her life in lonely glens until nothing but an echo sound remained of her. Nemesis (as an aspect of Aphrodite), the goddess of revenge, noticed this behaviour after learning the story and decided to punish Narcissus. Once, during the summer, he grew thirsty after hunting, and the goddess lured him to a pool where he leaned upon the water and saw himself in the bloom of youth. Narcissus did not realize it was merely his own reflection and fell deeply in love with it, as if it were somebody else. Unable to leave the allure of his image, he eventually realized that his love could not be reciprocated, and he melted away from the fire of passion burning inside him, eventually turning into a gold and white flower.
An earlier version ascribed to the poet Parthenius of Nicaea, composed around 50 BC, was discovered in 2004 by Dr Benjamin Henry among the Oxyrhynchus papyri at Oxford. Unlike Ovid's version, it ended with Narcissus losing his will to live and committing suicide. A version by Conon, a contemporary of Ovid, also ends in suicide (Narrations, 24). In it, a young man named Ameinias fell in love with Narcissus, who had already spurned his male suitors. Narcissus also spurned him and gave him a sword. Ameinias committed suicide at Narcissus's doorstep, having prayed to the gods to teach Narcissus a lesson for all the pain he had caused. Narcissus walked by a pool of water and decided to drink some. He saw his reflection, became entranced by it, and killed himself because he could not have his object of desire. A century later the travel writer Pausanias recorded a novel variant of the story, in which Narcissus falls in love with his twin sister rather than himself. In all versions, his body disappears and all that is left is a narcissus flower. Influence on culture The myth of Narcissus has inspired artists for at least two thousand years, even before the Roman poet Ovid featured a version in book III of his Metamorphoses. This was followed in more recent centuries by other poets (e.g. Keats and Alfred Edward Housman) and painters (Caravaggio, Poussin, Turner, Dalí (see Metamorphosis of Narcissus), and Waterhouse). Literature In Stendhal's novel Le Rouge et le Noir (1830), there is a classic narcissist in the character of Mathilde. Says Prince Korasoff to Julien Sorel, the protagonist, with respect to his beloved girl: She looks at herself instead of looking at you, and so doesn't know you. During the two or three little outbursts of passion she has allowed herself in your favor, she has, by a great effort of imagination, seen in you the hero of her dreams, and not yourself as you really are. (Page 401, 1953 Penguin Edition, trans. Margaret R.B. Shaw).
The myth had a decided influence on English Victorian homoerotic culture, via André Gide's study of the myth, Le Traité du Narcisse ('The Treatise of the Narcissus', 1891), and the only novel by Oscar Wilde, The Picture of Dorian Gray. Paulo Coelho's The Alchemist also starts with a story about Narcissus, found (we are told) by the alchemist in a book brought by someone in the caravan. The alchemist's (and Coelho's) source was very probably Hesketh Pearson's The Life of Oscar Wilde (1946), in which this story is recorded (Penguin edition, p. 217) as one of Wilde's inspired inventions. This version of the Narcissus story is based on Wilde's "The Disciple" from his Poems in Prose. Author and poet Rainer Maria Rilke visits the character and symbolism of Narcissus in several of his poems. Seamus Heaney references Narcissus in his poem "Personal Helicon" from his first collection Death of a Naturalist: "To stare, big-eyed Narcissus, into some spring / Is beneath all adult dignity." In Rick Riordan's Heroes of Olympus series, Narcissus appears as a minor antagonist in the third book, The Mark of Athena. In the fantasy series Harry Potter, Narcissa Malfoy, a minor antagonist, is named for Narcissus. William Faulkner's character Narcissa in Sanctuary, sister of Horace Benbow, was also named after Narcissus. Throughout the novel, she allows the arrogant, pompous pressures of high-class society to overrule the unconditional love that she should have for her brother. Hermann Hesse's character Narcissus in Narcissus and Goldmund shares several of the mythical Narcissus' traits, although his narcissism is based on his intellect rather than his physical beauty. A. E. Housman refers to the 'Greek Lad', Narcissus, in his poem "Look not in my Eyes" from A Shropshire Lad, set to music by several English composers including George Butterworth.
At the end of the poem stands a jonquil, a variety of daffodil, Narcissus jonquilla, which like Narcissus looks sadly down into the water. Herman Melville references the myth of Narcissus in his novel Moby-Dick, in which Ishmael explains the myth as "the key to it all," referring to the greater theme of finding the essence of Truth through the physical world. In Sophia de Mello Breyner Andresen's A Fada Oriana, the eponymous protagonist is punished with mortality for abandoning her duties in order to stare at herself in the surface of a river.
shipping to avoid the mines. The warnings do not have to be specific; for example, during World War II, Britain declared simply that it had mined the English Channel, North Sea and French coast. History Early use Precursors to naval mines were first invented by Chinese innovators of Imperial China and were described in thorough detail by the early Ming dynasty artillery officer Jiao Yu, in his 14th-century military treatise known as the Huolongjing. Chinese records tell of naval explosives in the 16th century, used to fight against Japanese pirates (wokou). This kind of naval mine was loaded in a wooden box, sealed with putty. General Qi Jiguang made several timed, drifting explosives to harass Japanese pirate ships. The Tiangong Kaiwu (The Exploitation of the Works of Nature) treatise, written by Song Yingxing in 1637, describes naval mines with a ripcord pulled by hidden ambushers located on the nearby shore who rotated a steel wheellock flint mechanism to produce sparks and ignite the fuse of the naval mine. Although this was the first use of the rotating steel wheellock in naval mines, Jiao Yu had described its use for land mines in the 14th century. The first plan for a sea mine in the West was by Ralph Rabbards, who presented his design to Queen Elizabeth I of England in 1574. The Dutch inventor Cornelius Drebbel was employed in the Office of Ordnance by King Charles I of England to make weapons, including the failed "floating petard". Weapons of this type were apparently tried by the English at the Siege of La Rochelle in 1627. American David Bushnell developed the first American naval mine, for use against the British in the American War of Independence. It was a watertight keg filled with gunpowder that was floated toward the enemy and detonated by a sparking mechanism if it struck a ship. It was used on the Delaware River as a drift mine. 19th century In 1812, the Russian engineer Pavel Shilling exploded an underwater mine using an electrical circuit.
In 1842 Samuel Colt used an electric detonator to destroy a moving vessel to demonstrate an underwater mine of his own design to the United States Navy and President John Tyler. However, opposition from former president John Quincy Adams scuttled the project as "not fair and honest warfare". In 1854, during the unsuccessful attempt of the Anglo-French fleet to seize the Kronstadt fortress, the British steamships HMS Merlin (9 June 1855, the first successful mining in history), HMS Vulture and HMS Firefly suffered damage due to the underwater explosions of Russian naval mines. Russian naval specialists set more than 1,500 naval mines, or infernal machines, designed by Moritz von Jacobi and by Immanuel Nobel, in the Gulf of Finland during the Crimean War of 1853–1856. The mining of Vulture led to the world's first minesweeping operation.<ref>{{cite book |last1=Lambert |first1=Andrew D. |author-link1=Andrew Lambert |year=1990 |title=The Crimean War: British Grand Strategy Against Russia, 1853–56 |url=https://books.google.com/books?id=GCVyIZEdc6kC |publisher=Ashgate Publishing, Ltd. |publication-date=2011 |pages=288–289 |isbn=9781409410119 |access-date=31 January 2016 |quote=On 9 June Merlin, Dragon, Firefly and D'Assas' took Penaud and several British captains to examine Cronstadt. While still 2 miles out the two surveying ships were struck by 'infernals'. [...] The fleet left Seskar on the 20th. Vulture, almost the last to arrive, was struck by an infernal. The following day the boats fished up several of the primitive mines, and both Dundas and Seymour inspected them aboard their flagships.}}</ref> During the next 72 hours, 33 mines were swept. The Jacobi mine was designed by the German-born Russian engineer Jacobi in 1853. The mine was tied to the sea bottom by an anchor. A cable connected it to a galvanic cell which powered it from the shore; the power of its explosive charge was equal to of black powder.
In the summer of 1853, the production of the mine was approved by the Committee for Mines of the Ministry of War of the Russian Empire. In 1854, 60 Jacobi mines were laid in the vicinity of the Forts Pavel and Alexander (Kronstadt), to deter the British Baltic Fleet from attacking them. It gradually phased out its direct competitor, the Nobel mine, on the insistence of Admiral Fyodor Litke. The Nobel mines were bought from the Swedish industrialist Immanuel Nobel, who had entered into collusion with the Russian head of navy Alexander Sergeyevich Menshikov. Despite their high cost (100 Russian rubles), the Nobel mines proved to be faulty: they exploded while being laid, failed to explode, or detached from their wires and drifted uncontrollably; at least 70 of them were subsequently disarmed by the British. In 1855, 301 more Jacobi mines were laid around Kronstadt and Lisy Nos. British ships did not dare to approach them. In the 19th century, mines were called torpedoes, a name probably conferred by Robert Fulton after the torpedo fish, which gives powerful electric shocks. A spar torpedo was a mine attached to a long pole and detonated when the ship carrying it rammed another one and withdrew a safe distance. The submarine H. L. Hunley used one to sink USS Housatonic on 17 February 1864. A Harvey torpedo was a type of floating mine towed alongside a ship and was briefly in service in the Royal Navy in the 1870s. Other "torpedoes" were attached to ships or propelled themselves. One such weapon, called the Whitehead torpedo after its inventor, caused the word "torpedo" to apply to self-propelled underwater missiles as well as to static devices. These mobile devices were also known as "fish torpedoes". The American Civil War of 1861–1865 also saw the successful use of mines. The first ship sunk by a mine, USS Cairo, foundered in 1862 in the Yazoo River. Rear Admiral David Farragut's famous/apocryphal command during the Battle of Mobile Bay in 1864, "Damn the torpedoes, full speed ahead!"
refers to a minefield laid at Mobile, Alabama. After 1865 the United States adopted the mine as its primary weapon for coastal defense. In the decade following 1868, Major Henry Larcom Abbot carried out a lengthy set of experiments to design and test moored mines that could be exploded on contact or be detonated at will as enemy shipping passed near them. This initial development of mines in the United States took place under the purview of the U.S. Army Corps of Engineers, which trained officers and men in their use at the Engineer School of Application at Willets Point, New York (later named Fort Totten). In 1901 underwater minefields became the responsibility of the US Army's Artillery Corps, and in 1907 this was a founding responsibility of the United States Army Coast Artillery Corps. The Imperial Russian Navy, a pioneer in mine warfare, successfully deployed mines against the Ottoman Navy during both the Crimean War and the Russo-Turkish War (1877–1878). During the Battle of Tamsui (1884), in the Keelung Campaign of the Sino-French War, Chinese forces in Taiwan under Liu Mingchuan took measures to reinforce Tamsui against the French; they planted nine torpedo mines in the river and blocked the entrance. Early 20th century During the Boxer Rebellion, Imperial Chinese forces deployed a command-detonated minefield at the mouth of the Peiho river before the Dagu forts, to prevent the western Allied forces from sending ships to attack (Issue 143 of Document, United States War Department). The next major use of mines was during the Russo-Japanese War of 1904–1905. Two mines blew up when the Russian battleship Petropavlovsk struck them near Port Arthur, sending the holed vessel to the bottom and killing the fleet commander, Admiral Stepan Makarov, and most of his crew in the process. The toll inflicted by mines was not confined to the Russians, however.
The Japanese Navy lost two battleships, four cruisers, two destroyers and a torpedo boat to offensively laid mines during the war. Most famously, on 15 May 1904, the Russian minelayer Amur planted a 50-mine minefield off Port Arthur and succeeded in sinking the Japanese battleships Hatsuse and Yashima. Following the end of the Russo-Japanese War, several nations attempted to have mines banned as weapons of war at the Hague Peace Conference (1907). Many early mines were fragile and dangerous to handle, as they contained glass containers filled with nitroglycerin or mechanical devices that activated a blast upon tipping. Several mine-laying ships were destroyed when their cargo exploded. Beginning around the start of the 20th century, submarine mines played a major role in the defense of U.S. harbours against enemy attacks as part of the Endicott and Taft Programs. The mines employed were controlled mines, anchored to the bottoms of the harbours and detonated under control from large mine casemates onshore. During World War I, mines were used extensively to defend coasts, coastal shipping, ports and naval bases around the globe. The Germans laid mines in shipping lanes to sink merchant and naval vessels serving Britain. The Allies targeted the German U-boats in the Strait of Dover and the Hebrides. In an attempt to seal up the northern exits of the North Sea, the Allies developed the North Sea Mine Barrage. During a period of five months from June 1918, almost 70,000 mines were laid spanning the North Sea's northern exits. The total number of mines laid in the North Sea, the British East Coast, Straits of Dover, and Heligoland Bight is estimated at 190,000, and the total number during the whole of WWI was 235,000 sea mines. Clearing the barrage after the war took 82 ships and five months, working around the clock. It was also during World War I that the British hospital ship HMHS Britannic became the largest vessel ever sunk by a naval mine.
The Britannic was the sister ship of the RMS Titanic and the RMS Olympic. World War II During World War II, the U-boat fleet, which dominated much of the battle of the Atlantic, was small at the beginning of the war and much of the early action by German forces involved mining convoy routes and ports around Britain. German submarines also operated in the Mediterranean Sea, in the Caribbean Sea, and along the U.S. coast. Initially, contact mines (requiring a ship to physically strike a mine to detonate it) were employed, usually tethered at the end of a cable just below the surface of the water. Contact mines usually blew a hole in ships' hulls. By the beginning of World War II, most nations had developed mines that could be dropped from aircraft, some of which floated on the surface, making it possible to lay them in enemy harbours. The use of dredging and nets was effective against this type of mine, but this consumed valuable time and resources and required harbours to be closed. Later, some ships survived mine blasts, limping into port with buckled plates and broken backs. This appeared to be due to a new type of mine, detecting ships by their proximity to the mine (an influence mine) and detonating at a distance, causing damage with the shock wave of the explosion. Ships that had successfully run the gauntlet of the Atlantic crossing were sometimes destroyed entering freshly cleared British harbours. More shipping was being lost than could be replaced, and Churchill ordered the intact recovery of one of these new mines to be of the highest priority. The British experienced a stroke of luck in November 1939, when a German mine was dropped from an aircraft onto the mudflats off Shoeburyness during low tide. Additionally, the land belonged to the army, and a base with men and workshops was at hand. Experts were dispatched from HMS Vernon to investigate the mine.
The Royal Navy knew that mines could use magnetic sensors, Britain having developed magnetic mines in World War I, so everyone removed all metal, including their buttons, and made tools of non-magnetic brass. They disarmed the mine and rushed it to the labs at HMS Vernon, where scientists discovered that the mine had a magnetic arming mechanism. A large ferrous object passing through the Earth's magnetic field will concentrate the field through it, due to its magnetic permeability; the mine's detector was designed to trigger as a ship passed over when the Earth's magnetic field was concentrated in the ship and away from the mine. The mine detected this loss of the magnetic field which caused it to detonate. The mechanism had an adjustable sensitivity, calibrated in milligauss. The U.S. began adding delay counters to their magnetic mines in June 1945. From this data, known methods were used to clear these mines. Early methods included the use of large electromagnets dragged behind ships or below low-flying aircraft (a number of older bombers like the Vickers Wellington were used for this). Both of these methods had the disadvantage of "sweeping" only a small strip. A better solution was found in the "Double-L Sweep" using electrical cables dragged behind ships that passed large pulses of current through the seawater. This created a large magnetic field and swept the entire area between the two ships. The older methods continued to be used in smaller areas. The Suez Canal continued to be swept by aircraft, for instance. While these methods were useful for clearing mines from local ports, they were of little or no use for enemy-controlled areas. These were typically visited by warships, and the majority of the fleet then underwent a massive degaussing process, where their hulls had a slight "south" bias induced into them which offset the concentration-effect almost to zero. 
Initially, major warships and large troopships had a copper degaussing coil fitted around the perimeter of the hull, energized by the ship's electrical system whenever in suspected magnetic-mined waters. Some of the first to be so fitted were the carrier HMS Ark Royal and the liners RMS Queen Mary and RMS Queen Elizabeth. It was a photo of one of these liners in New York harbour, showing the degaussing coil, which revealed to German Naval Intelligence the fact that the British were using degaussing methods to combat their magnetic mines. This was felt to be impractical for smaller warships and merchant vessels, mainly because the ships lacked the generating capacity to energise such a coil. It was found that "wiping" a current-carrying cable up and down a ship's hull temporarily canceled the ship's magnetic signature sufficiently to nullify the threat. This started in late 1939, and by 1940 merchant vessels and the smaller British warships were largely immune for a few months at a time until they once again built up a field. The cruiser HMS Belfast is just one example of a ship that was struck by a magnetic mine during this time. On 21 November 1939, a mine broke her keel, damaged her engine and boiler rooms, and injured 46 men, one of whom later died from his injuries. She was towed to Rosyth for repairs. Incidents like this resulted in many of the boats that sailed to Dunkirk being degaussed in a marathon four-day effort by degaussing stations. The Allies and Germany deployed acoustic mines in World War II, against which even wooden-hulled ships (in particular minesweepers) remained vulnerable. Japan developed sonic generators to sweep these; the gear was not ready by the war's end. The primary method Japan used was small air-delivered bombs. This was profligate and ineffectual; used against acoustic mines at Penang, 200 bombs were needed to detonate just 13 mines.
The Germans developed a pressure-activated mine and planned to deploy it as well, but they saved it for later use when it became clear the British had defeated the magnetic system. The U.S. also deployed these, adding "counters" which would allow a variable number of ships to pass unharmed before detonating. This made them a great deal harder to sweep. Mining campaigns could have devastating consequences. The U.S. effort against Japan, for instance, closed major ports, such as Hiroshima, for days, and by the end of the Pacific War had cut the amount of freight passing through Kobe–Yokohama by 90%. When the war ended, more than 25,000 U.S.-laid mines were still in place, and the Navy proved unable to sweep them all, limiting efforts to critical areas. After sweeping for almost a year, in May 1946, the Navy abandoned the effort with 13,000 mines still unswept. Over the next thirty years, more than 500 minesweepers (of a variety of types) were damaged or sunk clearing them. Cold War era Since World War II, mines have damaged 14 United States Navy ships, whereas air and missile attacks have damaged four. During the Korean War, mines laid by North Korean forces caused 70% of the casualties suffered by U.S. naval vessels and caused 4 sinkings. During the Iran–Iraq War from 1980 to 1988, the belligerents mined several areas of the Persian Gulf and nearby waters. On 24 July 1987, the supertanker SS Bridgeton was mined by Iran near Farsi Island. On 14 April 1988, struck an Iranian mine in the central Persian Gulf shipping lane, wounding 10 sailors. In the summer of 1984, magnetic sea mines damaged at least 19 ships in the Red Sea. The U.S. concluded Libya was probably responsible for the minelaying. In response the U.S., Britain, France, and
against the French; they planted nine torpedo mines in the river and blocked the entrance. Early 20th century During the Boxer Rebellion, Imperial Chinese forces deployed a command-detonated mine field at the mouth of the Peiho river before the Dagu forts, to prevent the western Allied forces from sending ships to attack. The next major use of mines was during the Russo-Japanese War of 1904–1905. Two mines blew up when the struck them near Port Arthur, sending the holed vessel to the bottom and killing the fleet commander, Admiral Stepan Makarov, and most of his crew in the process. The toll inflicted by mines was not confined to the Russians, however. The Japanese Navy lost two battleships, four cruisers, two destroyers and a torpedo-boat to offensively laid mines during the war. Most famously, on 15 May 1904, the Russian minelayer Amur planted a 50-mine minefield off Port Arthur and succeeded in sinking the Japanese battleships and . Following the end of the Russo-Japanese War, several nations attempted to have mines banned as weapons of war at the Hague Peace Conference (1907). Many early mines were fragile and dangerous to handle, as they contained glass containers filled with nitroglycerin or mechanical devices that activated a blast upon tipping. Several mine-laying ships were destroyed when their cargo exploded. Beginning around the start of the 20th century, submarine mines played a major role in the defense of U.S. harbours against enemy attacks as part of the Endicott and Taft Programs. The mines employed were controlled mines, anchored to the bottoms of the harbours, and detonated under control from large mine casemates onshore. During World War I, mines were used extensively to defend coasts, coastal shipping, ports and naval bases around the globe. The Germans laid mines in shipping lanes to sink merchant and naval vessels serving Britain. 
The Allies targeted the German U-boats in the Strait of Dover and the Hebrides. In an attempt to seal up the northern exits of the North Sea, the Allies developed the North Sea Mine Barrage. During a period of five months from June 1918, almost 70,000 mines were laid spanning the North Sea's northern exits. The total number of mines laid in the North Sea, the British East Coast, Straits of Dover, and Heligoland Bight is estimated at 190,000, and the total laid during the whole of WWI was 235,000 sea mines. Clearing the barrage after the war took 82 ships and five months, working around the clock. It was also during World War I that the British hospital ship HMHS Britannic became the largest vessel ever sunk by a naval mine. The Britannic was the sister ship of the RMS Titanic and the RMS Olympic. World War II During World War II, the U-boat fleet, which dominated much of the Battle of the Atlantic, was small at the beginning of the war and much of the early action by German forces involved mining convoy routes and ports around Britain. German submarines also operated in the Mediterranean Sea, in the Caribbean Sea, and along the U.S. coast. Initially, contact mines (requiring a ship to physically strike a mine to detonate it) were employed, usually tethered at the end of a cable just below the surface of the water. Contact mines usually blew a hole in ships' hulls. By the beginning of World War II, most nations had developed mines that could be dropped from aircraft, some of which floated on the surface, making it possible to lay them in enemy harbours. The use of dredging and nets was effective against this type of mine, but this consumed valuable time and resources and required harbours to be closed. Later, some ships survived mine blasts, limping into port with buckled plates and broken backs. 
This appeared to be due to a new type of mine, detecting ships by their proximity to the mine (an influence mine) and detonating at a distance, causing damage with the shock wave of the explosion. Ships that had successfully run the gantlet of the Atlantic crossing were sometimes destroyed entering freshly cleared British harbours. More shipping was being lost than could be replaced, and Churchill ordered the intact recovery of one of these new mines to be of the highest priority. The British experienced a stroke of luck in November 1939, when a German mine was dropped from an aircraft onto the mudflats off Shoeburyness during low tide. Additionally, the land belonged to the army and a base with men and workshops was at hand. Experts were dispatched from HMS Vernon to investigate the mine. The Royal Navy knew that mines could use magnetic sensors, Britain having developed magnetic mines in World War I, so everyone removed all metal, including their buttons, and made tools of non-magnetic brass. They disarmed the mine and rushed it to the labs at HMS Vernon, where scientists discovered that the mine had a magnetic arming mechanism. A large ferrous object passing through the Earth's magnetic field will concentrate the field through it, due to its magnetic permeability; the mine's detector was designed to trigger as a ship passed over, when the Earth's magnetic field was concentrated in the ship and away from the mine. The mine detected this loss of the magnetic field, which caused it to detonate. The mechanism had an adjustable sensitivity, calibrated in milligauss. The U.S. began adding delay counters to their magnetic mines in June 1945. From this data, known methods were used to clear these mines. Early methods included the use of large electromagnets dragged behind ships or below low-flying aircraft (a number of older bombers like the Vickers Wellington were used for this). Both of these methods had the disadvantage of "sweeping" only a small strip. 
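The magnetic triggering principle described above can be caricatured as a simple threshold detector. This is only an illustrative sketch with invented numbers, not an actual fuze design: the fuze watches the local field and fires when it deviates from the calibrated background by more than a set threshold in milligauss.

```python
# Toy model of a magnetic influence fuze (illustrative only).
# Threshold and readings are invented values; real fuzes had
# adjustable sensitivities calibrated in milligauss.

def magnetic_fuze(readings_mgauss, background_mgauss, threshold_mgauss):
    """Return the sample index at which the fuze would fire, or None.

    Fires when the measured field deviates from the calibrated
    background by more than the threshold (a passing steel hull
    concentrates the Earth's field, perturbing it near the mine).
    """
    for i, reading in enumerate(readings_mgauss):
        if abs(reading - background_mgauss) > threshold_mgauss:
            return i
    return None

# Background ~500 mGs; a passing ship perturbs it by tens of mGs.
readings = [500, 501, 499, 510, 530, 540, 520, 500]
print(magnetic_fuze(readings, background_mgauss=500, threshold_mgauss=25))  # prints 4
```

Lowering the threshold models a more sensitive setting, which is why degaussing (reducing the ship-induced perturbation below the threshold) defeated this fuze.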
A better solution was found in the "Double-L Sweep" using electrical cables dragged behind ships that passed large pulses of current through the seawater. This created a large magnetic field and swept the entire area between the two ships. The older methods continued to be used in smaller areas. The Suez Canal continued to be swept by aircraft, for instance. While these methods were useful for clearing mines from local ports, they were of little or no use for enemy-controlled areas. These were typically visited by warships, and the majority of the fleet then underwent a massive degaussing process, where their hulls had a slight "south" bias induced into them which offset the concentration-effect almost to zero. Initially, major warships and large troopships had a copper degaussing coil fitted around the perimeter of the hull, energized by the ship's electrical system whenever in suspected magnetic-mined waters. Some of the first to be so fitted were the carrier HMS Ark Royal and the liners and . It was a photo of one of these liners in New York harbour, showing the degaussing coil, which revealed to German Naval Intelligence the fact that the British were using degaussing methods to combat their magnetic mines. This was felt to be impractical for smaller warships and merchant vessels, mainly because the ships lacked the generating capacity to energize such a coil. It was found that "wiping" a current-carrying cable up and down a ship's hull temporarily canceled the ship's magnetic signature sufficiently to nullify the threat. This started in late 1939, and by 1940 merchant vessels and the smaller British warships were largely immune for a few months at a time until they once again built up a field. The cruiser is just one example of a ship that was struck by a magnetic mine during this time. On 21 November 1939, a mine broke her keel, which damaged her engine and boiler rooms, as well as injuring 46 men, one of whom later died from his injuries. 
She was towed to Rosyth for repairs. Incidents like this resulted in many of the boats that sailed to Dunkirk being degaussed in a marathon four-day effort by degaussing stations. The Allies and Germany deployed acoustic mines in World War II, against which even wooden-hulled ships (in particular minesweepers) remained vulnerable. Japan developed sonic generators to sweep these; the gear was not ready by war's end. The primary method Japan used was small air-delivered bombs. This was profligate and ineffectual; used against acoustic mines at Penang, 200 bombs were needed to detonate just 13 mines. The Germans developed a pressure-activated mine and planned to deploy it as well, but they saved it for later use when it became clear the British had defeated the magnetic system. The U.S. also deployed these, adding "counters" which would allow a variable number of ships to pass unharmed before detonating. This made them a great deal harder to sweep. Mining campaigns could have devastating consequences. The U.S. effort against Japan, for instance, closed major ports such as Hiroshima for days, and by the end of the Pacific War had cut the amount of freight passing through Kobe–Yokohama by 90%. When the war ended, more than 25,000 U.S.-laid mines were still in place, and the Navy proved unable to sweep them all, limiting efforts to critical areas. After sweeping for almost a year, in May 1946, the Navy abandoned the effort with 13,000 mines still unswept. Over the next thirty years, more than 500 minesweepers (of a variety of types) were damaged or sunk clearing them. Cold War era Since World War II, mines have damaged 14 United States Navy ships, whereas air and missile attacks have damaged four. During the Korean War, mines laid by North Korean forces caused 70% of the casualties suffered by U.S. naval vessels, including four sinkings. During the Iran–Iraq War from 1980 to 1988, the belligerents mined several areas of the Persian Gulf and nearby waters. 
On 24 July 1987, the supertanker SS Bridgeton was mined by Iran near Farsi Island. On 14 April 1988, struck an Iranian mine in the central Persian Gulf shipping lane, wounding 10 sailors. In the summer of 1984, magnetic sea mines damaged at least 19 ships in the Red Sea. The U.S. concluded Libya was probably responsible for the minelaying. In response the U.S., Britain, France, and three other nations launched Operation Intense Look, a minesweeping operation in the Red Sea involving more than 46 ships. On the orders of the Reagan administration, the CIA mined Nicaragua's Sandino port in 1984 in support of the Contra guerrilla group. A Soviet tanker was among the ships damaged by these mines. In 1986, in the case of Nicaragua v. United States, the International Court of Justice ruled that this mining was a violation of international law. Post Cold War During the Gulf War, Iraqi naval mines severely damaged and . When the war concluded, eight countries conducted clearance operations. Houthi forces in the Yemeni Civil War have made frequent use of naval mines, laying over 150 in the Red Sea throughout the conflict. Types Naval mines may be classified into three major groups: contact, remote and influence mines. Contact mines The earliest mines were usually of this type. They are still used today, as they are extremely low cost compared to any other anti-ship weapon and are effective both as a psychological weapon and as a method to sink enemy ships. Contact mines need to be touched by the target before they detonate, limiting the damage to the direct effects of the explosion and usually affecting only the vessel that triggers them. Early mines had mechanical mechanisms to detonate them, but these were superseded in the 1870s by the "Hertz horn" (or "chemical horn"), which was found to work reliably even after the mine had been in the sea for several years. 
The mine's upper half is studded with hollow lead protuberances, each containing a glass vial filled with sulfuric acid. When a ship's hull crushes the metal horn, it cracks the vial inside it, allowing the acid to run down a tube and into a lead–acid battery which until then contained no acid electrolyte. This energizes the battery, which detonates the explosive. Earlier forms of the detonator employed a vial of sulfuric acid surrounded by a mixture of potassium perchlorate and sugar. When the vial was crushed, the acid ignited the perchlorate-sugar mix, and the resulting flame ignited the gunpowder charge. During the initial period of World War I, the Royal Navy used contact mines in the English Channel and later in large areas of the North Sea to hinder patrols by German submarines. Later, the American antenna mine was widely used because submarines could be at any depth from the surface to the seabed. This type of mine had a copper wire attached to a buoy that floated above the explosive charge which was weighted to the seabed with a steel cable. If a submarine's steel hull touched the copper wire, the slight voltage change caused by contact between two dissimilar metals was amplified and detonated the explosives. Limpet mines Limpet mines are a special form of contact mine that are manually attached to the target by magnets and remain in place. They are named because of the similarity to the limpet, a mollusk. Moored contact mines Generally, this type of mine is set to float just below the surface of the water or as deep as five meters. A steel cable connecting the mine to an anchor on the seabed prevents it from drifting away. The explosive and detonating mechanism is contained in a buoyant metal or plastic shell. The depth below the surface at which the mine floats can be set so that only deep draft vessels such as aircraft carriers, battleships or large cargo ships are at risk, saving the mine from being used on a less valuable target. 
In littoral waters it is important to ensure that the mine does not become visible when the sea level falls at low tide, so the cable length is adjusted to take account of tides. During WWII there were mines that could be moored in -deep water. Floating mines typically have a mass of around , including of explosives, e.g. TNT, minol or amatol. Moored contact mines with plummet A special form of moored contact mine is one equipped with a plummet. When the mine is launched (1), the mine with the anchor floats first and the lead plummet sinks from it (2). In doing so, the plummet unwinds a wire, the deep line, which is used to set the depth of the mine below the water surface before it is launched (3). When the deep line has been unwound to a set length, the anchor is flooded and the mine is released from the anchor (4). The anchor begins to sink and the mooring cable unwinds until the plummet reaches the sea floor (5). Due to the decreasing tension on the deep line, the mooring cable is clamped. The anchor sinks further down to the bottom of the sea, pulling the mine as deep below the water surface as the deep line has been unwound (6). Thus, even without knowing the exact water depth, the mine's depth below the surface can be set precisely, limited only by the maximum length of the mooring cable. Drifting contact mines Drifting mines were occasionally used during World War I and World War II. However, they were more feared than effective. Sometimes floating mines break from their moorings and become drifting mines; modern mines are designed to deactivate in this event. After several years at sea, the deactivation mechanism might not function as intended and the mines may remain live. 
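The plummet arrangement described above reduces to a simple invariant: whatever the (unknown) water depth, the mine comes to rest exactly as far below the surface as the deep line that was paid out, provided the mooring cable can reach the bottom. A sketch of that arithmetic, with invented numbers:

```python
# Toy model of plummet-set mooring depth (illustrative numbers only).

def mine_depth_below_surface(water_depth_m, deep_line_m, mooring_cable_max_m):
    """Depth of the mine below the surface after mooring, or None.

    The plummet pays out the deep line (length set before launch);
    when it is fully unwound, the anchor sinks and pulls the mine
    down until the plummet touches bottom and the mooring cable is
    clamped. The mine then sits deep_line_m below the surface,
    provided the mooring cable is long enough to span the rest.
    """
    required_cable = water_depth_m - deep_line_m  # anchor-to-mine length
    if required_cable > mooring_cable_max_m:
        return None  # water too deep; the mine cannot be moored as set
    return deep_line_m

# The resulting depth is independent of the actual water depth:
for depth in (40, 80, 120):
    print(depth, mine_depth_below_surface(depth, deep_line_m=3, mooring_cable_max_m=150))
```

Each call returns 3 m regardless of the water depth, which is exactly the property the plummet mechanism provides.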
Admiral Jellicoe's British fleet did not pursue and destroy the outnumbered German High Seas Fleet when it turned away at the Battle of Jutland because he thought they were leading him into a trap: he believed it possible that the Germans were either leaving floating mines in their wake, or were drawing him towards submarines, although neither of these was the case. After World War I the drifting contact mine was banned, but was occasionally used during World War II. The drifting mines were much harder to remove than tethered mines after the war, and they caused about the same damage to both sides. Churchill promoted "Operation Royal Marine" in 1940 and again in 1944, in which floating mines were put into the Rhine in France to float down the river, becoming active after a time calculated to be long enough to reach German territory. Remotely controlled mines Frequently used in combination with coastal artillery and hydrophones, controlled mines (or command detonation mines) can be in place in peacetime, which is a huge advantage in blocking important shipping routes. The mines can usually be turned into "normal" mines with a switch (which prevents the enemy from simply capturing the controlling station and deactivating the mines), detonated on a signal or be allowed to detonate on their own. The earliest ones were developed around 1812 by Robert Fulton. The first remotely controlled mines were moored mines used in the American Civil War, detonated electrically from shore. They were considered superior to contact mines because they did not put friendly shipping at risk. The extensive American fortifications program initiated by the Board of Fortifications in 1885 included remotely controlled mines, which were emplaced or in reserve from the 1890s until the end of World War II. Modern examples usually weigh , including of explosives (TNT or torpex). Influence mines These mines are triggered by the influence of a ship or submarine, rather than direct contact. 
Such mines incorporate electronic sensors designed to detect the presence of a vessel and detonate when it comes within the blast range of the warhead. The fuses on such mines may incorporate one or more of the following sensors: magnetic, passive acoustic or water pressure displacement caused by the proximity of a vessel. First used during WWI, their use became more general in WWII. The sophistication of influence mine fuses has increased considerably over the years as first transistors and then microprocessors have been incorporated into designs. Simple magnetic sensors have been superseded by total-field magnetometers. Whereas early magnetic mine fuses would respond only to changes in a single component of a target vessel's magnetic field, a total field magnetometer responds to changes in the magnitude of the total background field (thus enabling it to better detect even degaussed ships). Similarly, the original broadband hydrophones of 1940s acoustic mines (which operate on the integrated volume of all frequencies) have been replaced by narrow-band sensors which are much more sensitive and selective. Mines can now be programmed to listen for highly specific acoustic signatures (e.g. a gas turbine powerplant or cavitation sounds from a particular design of propeller) and ignore all others. The sophistication of modern electronic mine fuzes incorporating these digital signal processing capabilities makes it much more difficult to detonate the mine with electronic countermeasures because several sensors working together (e.g. magnetic, passive acoustic and water pressure) allow it to ignore signals which are not recognised as being the unique signature of an intended target vessel. Modern influence mines such as the BAE Stonefish are computerised, with all the programmability this implies, such as the ability to quickly load new acoustic signatures into fuses, or program them to detect a single, highly distinctive target signature. 
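The multi-sensor fuze logic described here can be caricatured in a few lines. This is a hypothetical sketch with invented thresholds; real fuze algorithms and target signatures are classified and far more sophisticated. The idea is simply that the fuze fires only when every sensor agrees with a stored target profile, which is what makes single-influence sweeping ineffective.

```python
# Toy multi-sensor influence fuze (all values invented for illustration).

TARGET_PROFILE = {
    "magnetic_mgauss": 25,          # minimum total-field change
    "acoustic_band_hz": (90, 110),  # narrow band, e.g. a propeller signature
    "pressure_kpa": 2.0,            # minimum pressure-displacement signal
}

def fuze_decision(magnetic_mgauss, acoustic_peak_hz, pressure_kpa):
    """Fire only when all three sensors match the stored profile,
    so a sweep that fakes one influence (e.g. a magnetic pulse)
    fails to produce the other signals and the mine stays silent."""
    lo, hi = TARGET_PROFILE["acoustic_band_hz"]
    return (
        magnetic_mgauss >= TARGET_PROFILE["magnetic_mgauss"]
        and lo <= acoustic_peak_hz <= hi
        and pressure_kpa >= TARGET_PROFILE["pressure_kpa"]
    )

print(fuze_decision(30, 100, 2.5))  # plausible large ship: True
print(fuze_decision(30, 100, 0.1))  # magnetic sweep, no pressure: False
```

A real computerised mine such as the Stonefish would compare full sensor time-series against programmable signatures rather than simple thresholds, but the combinational principle is the same.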
In this way, a mine with a passive acoustic fuze can be programmed to ignore all friendly vessels and small enemy vessels, only detonating when a very large enemy target passes over it. Alternatively, the mine can be programmed specifically to ignore all surface vessels regardless of size and exclusively target submarines. Even as far back as WWII it was possible to incorporate a "ship counter" function in mine fuzes. This might set the mine to ignore the first two ships passing over it (which could be minesweepers deliberately trying to trigger mines) but detonate when the third ship passes overhead, which could be a high-value target such as an aircraft carrier or oil tanker. Even though modern mines are generally powered by a long life lithium battery, it is important to conserve power because they may need to remain active for months or even years. For this reason, most influence mines are designed to remain in a semi-dormant state until an unpowered (e.g. deflection of a mu-metal needle) or low-powered sensor detects the possible presence of a vessel, at which point the mine fuze powers up fully and the passive acoustic sensors will begin to operate for some minutes. It is possible to program computerised mines to delay activation for days or weeks after being laid. Similarly, they can be programmed to self-destruct or render themselves safe after a preset period of time. Generally, the more sophisticated the mine design, the more likely it is to have some form of anti-handling device to hinder clearance by divers or remotely piloted submersibles. Moored mines The moored mine is the backbone of modern mine systems. They are deployed where water is too deep for bottom mines. They can use several kinds of instruments to detect an enemy, usually a combination of acoustic, magnetic and pressure sensors, or more sophisticated optical shadows or electro potential sensors. 
These cost many times more than contact mines. Moored mines are effective against most kinds of ships. As they are cheaper than other anti-ship weapons they can be deployed in large numbers, making them useful area denial or "channelizing" weapons. Moored mines usually have lifetimes of more than 10 years, and some almost unlimited. These mines usually weigh , including of explosives (RDX). In excess of of explosives the mine becomes inefficient, as it becomes too large to handle and the extra explosives add little to the mine's effectiveness. Bottom
women in Casablanca that has since become the biggest women's race held in a Muslim-majority country, with up to 30,000 runners. In 1995, El Moutawakel became a council member of the International Association of Athletics Federations (IAAF), now known as World Athletics, and in 1998 she became a member of the International Olympic Committee (IOC). She was the president of the evaluation commissions for the selection of the host cities of the 2012 and 2016 Summer Olympics. Since 2012 she has been a vice-president of the IOC. In 2006, El Moutawakel was one of the eight honored to bear the Olympic flag at the 2006 Winter Olympics Opening Ceremony in Turin, Italy. On 26 July 2012, she carried the London Olympics torch through Westminster. El Moutawakel was one of the ambassadors of the Morocco bid for the 2026 FIFA World Cup. International competitions 1Representing Africa See also Politics of Morocco Sport in Morocco References External links 1962 births Living people Moroccan Muslims Sportspeople from Casablanca Moroccan sportsperson-politicians Moroccan female hurdlers Olympic athletes of Morocco Olympic gold medalists for Morocco Athletes (track and field) at the 1984 Summer Olympics Medalists at the 1984 Summer Olympics World Athletics Championships athletes for Morocco International Olympic Committee members Iowa State Cyclones women's track and field athletes Government ministers of Morocco National Rally of Independents politicians Moroccan emigrants to the United
States Olympic gold medalists in athletics
1950s and 1960s were lean years for North Melbourne, though the club did secure two consecutive Night Premierships in 1965 and 1966. Allen Aylett was a brilliant player in the late 1950s and early 1960s (and captain between 1961 and 1964), as was Noel Teasdale, who lost the Brownlow Medal on a countback in 1965 (he was later awarded a retrospective medal when the counting system was amended). Golden era In the late 1960s, under the leadership of Allen Aylett, North Melbourne began its climb to supremacy. As part of a major recruitment drive North secured the services of several big-name stars, including Barry Davis from Essendon, Doug Wade from Geelong, John Rantall from South Melbourne, and Barry Cable from Perth. In a major coup, the great Ron Barassi was appointed coach in 1973. Barassi reversed the club's playing fortunes, taking a struggling team that was once regarded as the traditional cellar dwellers of the competition through to a golden era of success that transformed North Melbourne into one of the powerhouses of the VFL. Barassi took North to a Grand Final (losing to Richmond by 41 points) in 1974 and brought premiership success in 1975 and 1977. North made five consecutive Grand Finals from 1974 to 1978, and defeated Norwood in the 1975 national championship to be declared Champions of Australia. In 1973 and 1974, North's wingman Keith Greig (recruited from Brunswick Football Club, Victoria) won consecutive Brownlow Medals; forward Malcolm Blight (recruited from Woodville Football Club, South Australia) then won the award in 1978. Doug Wade (recruited from Geelong) won the Coleman Medal in 1974 with his 103 goals for the season. Barassi remained team coach until 1980, but with only a Night Premiership to show for that year, he left Arden Street. 
North then entered another period of decline, though Malcolm Blight kicked 103 goals to take out the Coleman Medal in 1982, and another Brownlow win came through the talented Ross Glendinning in 1983. In that year, North Melbourne won a third Minor Premiership with 16 wins and 6 losses for the season, but they failed to make the Grand Final. Team of the 1990s Despite the tough, disciplined coaching of the legendary John Kennedy, the 1980s and early 1990s were mostly lean years for the Kangaroos. However, the rebuilding of the club was taking place. The Krakouer brothers (Jim and Phil) brought a spark into the side, lifting the hopes of North supporters and bringing excitement to the general football public. The club instigated the innovative idea of night games and, by meeting such challenges, survived. One major highlight was the recruitment in 1989 of forward John Longmire, who topped the club goalkicking over five consecutive seasons (1990–1994) and won the Coleman Medal in 1990 with 98 goals. At the beginning of the 1993 season, in a dramatic and controversial move, the board of the club sacked coach and long-time player Wayne Schimmelbusch, and appointed Denis Pagan in his place. Results were immediate, as North reached the finals for the first time in nearly a decade. Pagan was instrumental in appointing young centre half-forward Wayne Carey as the club's youngest-ever captain. Carey had been recruited at the same time as Longmire, but had taken longer to develop as a player. Over the next nine seasons, Carey came to be regarded as the standout player in the league, and was known as 'the King'. North Melbourne became a powerhouse through the 1990s under Pagan and Carey, and finished in the top four from 1994 until 2000. 
After being eliminated in the preliminary finals in 1994 and 1995, North went on to defeat the Sydney Swans in the 1996 Grand Final to take out the club's third premiership, and the gold centenary AFL cup; Glenn Archer won the Norm Smith Medal. The club was again eliminated in the preliminary final in 1997. In 1998, the club won both the pre-season Ansett Cup and topped the ladder with 16 wins and 6 losses, but went on to lose the 1998 Grand Final to Adelaide, not helped by an inaccurate goalkicking performance of 8.22 (70) to Adelaide's 15.15 (105). In 1999, the Kangaroos finished in second position on the ladder, and went on to defeat Carlton in the Grand Final, winning the club's fourth VFL/AFL premiership, with former Sydney midfielder Shannon Grant taking out the Norm Smith Medal. The club was eliminated in the 2000 preliminary final against Melbourne. In 1996, the club had been in advanced talks with the Fitzroy Football Club, which was in a terminal financial condition, over a merger between the two clubs to create the North Fitzroy Kangaroos Football Club; however, Fitzroy ultimately merged with the Brisbane Bears instead. Seeking new markets and greater financial security in an increasingly corporatized AFL environment, the club officially dropped the title "North Melbourne" from its logo in 1999, from which time the team played only as the "Kangaroos". During the successful 1999 season, North Melbourne played home games in Sydney with a view to becoming a second team in New South Wales; however, the experiment was not successful, with crowds averaging only 12,000. 21st century The 21st century did not begin well for North Melbourne. Its decade-long onfield potency was in decline, and questions were raised about its financial position and long-term sustainability. 
Furthermore, three of the people most important to the club's success in the 1990s left the club under acrimonious circumstances: CEO Greg Miller left the club; captain Wayne Carey left prior to the 2002 season following an extramarital affair with the wife of teammate and vice captain Anthony Stevens; and coach Denis Pagan was lured to Carlton at the end of 2002. Pagan was replaced by 1996 premiership player Dani Laidley, who had previously been an assistant coach at Collingwood from 1999 until the end of season 2002. On a post-season holiday, several players were caught in the 2002 Bali bombing terrorist attack, notably defender Jason McCartney, who suffered second-degree burns to over 50% of his body while carrying others to safety and nearly died during surgery after being flown back to Melbourne. In what is regarded as one of the most inspirational stories of Australian rules football and Australian sport in general, McCartney successfully returned to action on 6 June 2003 against Richmond at Docklands Stadium. Playing at full forward, he took a mark in the final quarter, scored a goal from the resulting set shot and set up Leigh Harding's winning goal with two minutes remaining. McCartney retired immediately after the game, citing that his recovery had left him spent, and he was chaired from the ground. McCartney wore the numbers "88" and "202" on the front of his long-sleeved jumper for the match, signifying the Australian and total number of victims of the Bali bombings, while many in the crowd bore signs reading "Bali 88/202". Onfield, the club reached the elimination finals in 2002 and 2005, but otherwise failed to reach the finals from 2001 until 2006. After two seasons of finals North Melbourne dropped to 13th in 2009, and coach Dani Laidley announced her resignation, with Darren Crocker acting as caretaker coach for the rest of the season, to eventually be replaced by ex-Brisbane Lions premiership player and Collingwood assistant coach Brad Scott. 
A$15 million redevelopment of the Arden Street, which had started in 2006, was completed in 2009, giving the club top-class training facilities. Brad Scott era North Melbourne struggled in its first two years under Brad Scott, finishing 9th in both 2010 and 2011. In 2012, the club returned to the finals for the first time since 2008, finishing the season in 8th place, but would go down to the West Coast Eagles by 96 points in an elimination final. In 2012, the club began a three-year deal to play two games each year at Blundstone Arena in Hobart, Tasmania. The club finished 10th in 2013 in a season full of close losses. Nick Dal Santo signed with the club at the end of the 2013 season as a restricted free agent. In 2014, North Melbourne finished 6th at the end of the home and away season and reached 40,000 members for the first time in the club's history. In September, North Melbourne went on to defeat Essendon by 12 points in the 2nd Elimination Final, only taking the lead in the last quarter. The following week, North Melbourne beat Geelong in the 2nd Semi-final by 6 points advancing them through to their first preliminary final since 2007. Their finals campaign came to a disappointing end at Stadium Australia when they were beaten by Sydney by 71 points. In 2015 the club made history by becoming the first team to qualify for a preliminary final from 8th spot, losing to the West Coast Eagles by 25 points after leading at half time. In 2016, North Melbourne won its first nine matches, which is the club's best start to a season in its VFL/AFL history. On 27 July 2016, the club announced it had surpassed 45,000 members for the first time in the club's history. In 2016, the Kangaroos fielded what was the oldest team in AFL history. Unfortunately after the midpoint of the season they fell away and struggled against some of the worst teams in competition. 
In the mid season of 2019 Brad Scott made the decision to leave NMFC after 10 years at the club taking them to the finals on multiple occasions. He holds the record for most games coached at a single club without making a Grand Final. Rhyce Shaw era Rhyce Shaw took over as caretaker coach in the interim in mid- to late 2019 and was later awarded the position as head coach for the following 2020 season. After a disastrous 2020 season, North won only 3 games and finished second-last, finishing just above the wooden spooners, Adelaide Crows, on percentage. Rhyce Shaw left the club in late October 2020 due to personal issues, bringing his short tenure as head coach to an end. David Noble era Noble's first year, 2021, started off with four wins, 17 losses, and one draw. Placing North Melbourne at the bottom of the ladder for the 2021 season. Club symbols and identity Name and mascot The club was widely known as the "Shinboners" for much of its early history. The origins of the nickname are unknown but it may have had something to do with the club's reputation for targeting the shinbones of opposition players, or to do with local butchers who showed their support for North by dressing up beef leg-bones in the club colours. By 1926, the club was known as the "Blue Birds", but this nickname did not last. It was Phonse Tobin, North president from 1953 to 1956, who oversaw the club adopting the kangaroo emblem in 1954; Tobin found the image
As it had after the merger with West Melbourne, North once again managed to avert its destruction. Entering the VFL After three attempts, 29 years of waiting and numerous other applications, North was finally rewarded for its persistence with admittance to the VFL in 1925, along with Footscray and Hawthorn. Even then, the opportunity was almost lost as the League delegates debated into the early hours of the morning over which clubs should be invited to join the intake. It was only after much deliberation that North Melbourne's name was eventually substituted for Prahran's, making North "the lucky side" of the invitees that included Footscray and Hawthorn. North Melbourne was forced to change its uniform to avoid a clash when it joined the VFL. North Melbourne was a cellar dweller for its first twenty-five years of VFL membership and struggled to win matches in the superior VFL competition, the only bright note being Sel Murray winning the VFL Leading Goalkicker Medal in 1941 with 88 goals. By the late 1940s, North Melbourne had developed a strong list and a significant supporter base. In 1949 North secured the VFL Minor Premiership, finishing top of the ladder at the end of the home-and-away season with 14 wins and 5 losses. They failed to make the Grand Final that year (eventually won by Essendon), but in 1950 they did reach the Grand Final, where they were defeated by a more efficient Essendon. It was in this year that the club adopted the "Kangaroos" mascot. In February 1965, North Melbourne moved its playing and training base from the Arden Street Oval to Coburg Oval, signing a seven-year lease with the City of Coburg after initially negotiating long-term leases for up to 40 years.
The club came to an arrangement to merge with the VFA's Coburg Football Club, whom it was displacing from the ground; fourteen Coburg committeemen joined the North Melbourne committee, but the merger was never completed after Coburg established a rival committee which remained loyal to the VFA. The lease at Coburg lasted only eight months; the Coburg council was hesitant to build a new grandstand without the security of a long-term lease, and neither party made the returns they expected, so the lease was terminated by mutual agreement in September 1965 and North Melbourne returned to the Arden Street Oval. Onfield, the 1950s and 1960s were lean years for North Melbourne, though the club did secure two consecutive Night Premierships in 1965 and 1966. Allen Aylett was a brilliant player in the late 1950s and early 1960s (and captain between 1961 and 1964), as was Noel Teasdale, who lost the Brownlow Medal on a countback in 1965 (he was later awarded a retrospective medal when the counting system was amended). Golden era In the late 1960s, under the leadership of Allen Aylett, North Melbourne began its climb to supremacy. As part of a major recruitment drive, North secured the services of several big-name stars, including Barry Davis from Essendon, Doug Wade from Geelong, John Rantall from South Melbourne, and Barry Cable from Perth. In a major coup, the great Ron Barassi was appointed coach in 1973. Barassi reversed the club's playing fortunes, taking a struggling team once regarded as the traditional cellar dweller of the competition into a golden era of success that transformed North Melbourne into one of the powerhouses of the VFL. Barassi took North to a Grand Final in 1974 (losing to Richmond by 41 points) and delivered premierships in 1975 and 1977. North made five consecutive Grand Finals from 1974 to 1978, defeated Norwood in the 1975 national championship, and was thus declared Champions of Australia.
In 1973 and 1974, North's wingman Keith Greig (recruited from the Brunswick Football Club in Victoria) won consecutive Brownlow Medals; forward Malcolm Blight (recruited from the Woodville Football Club in South Australia) then won the award in 1978. Doug Wade (recruited from Geelong) won the Coleman Medal in 1974 with his 103 goals for the season. Barassi remained coach until 1980, but after winning only a Night Premiership in that year he left Arden Street. North then entered another period of decline, though Malcolm Blight kicked 103 goals to take out the Coleman Medal in 1982, and another Brownlow win came through the talented Ross Glendinning in 1983. In that year, North Melbourne won a third Minor Premiership with 16 wins and 6 losses for the season, but failed to make the Grand Final. Team of the 1990s Despite the tough, disciplined coaching of the legendary John Kennedy, the 1980s and early 1990s were mostly lean years for the Kangaroos. However, the rebuilding of the club was taking place. The Krakouer brothers (Jim and Phil) brought a spark to the side, lifting the hopes of North supporters and bringing excitement to the general football public. The club also instigated the innovative idea of night games, and by meeting such challenges it survived. One major highlight was the recruitment of forward John Longmire in 1989, who topped the club goalkicking over five consecutive seasons (1990–1994) and won the Coleman Medal in 1990 with 98 goals. At the beginning of the 1993 season, in a dramatic and controversial move, the board sacked coach and long-time player Wayne Schimmelbusch and appointed Denis Pagan in his place. Results were immediate, as North reached the finals for the first time in nearly a decade. Pagan was instrumental in appointing young centre half-forward Wayne Carey as the club's youngest-ever captain. Carey had been recruited at the same time as Longmire, but had taken longer to develop as a player.
Over the next nine seasons, Carey came to be regarded as the standout player in the league, and was known as 'the King'. North Melbourne became a powerhouse through the 1990s under Pagan and Carey, and finished in the top four from 1994 until 2000. After being eliminated in the preliminary finals in 1994 and 1995, North went on to defeat the Sydney Swans in the 1996 Grand Final to take out the club's third premiership and the gold centenary AFL cup; Glenn Archer won the Norm Smith Medal. The club was again eliminated in the preliminary final in 1997. In 1998, the club won both the pre-season Ansett Cup and topped the ladder with 16 wins and 6 losses, but went on to lose the 1998 Grand Final to Adelaide, not helped by an inaccurate goalkicking performance of 8.22 (70) to Adelaide's 15.15 (105). In 1999, the Kangaroos finished in second position on the ladder, and went on to defeat Carlton in the Grand Final, winning the club's fourth VFL/AFL premiership; former Sydney midfielder Shannon Grant took out the Norm Smith Medal. The club was eliminated in the preliminary final in 2000 against Melbourne. In 1996, the club had been in advanced talks with the Fitzroy Football Club, which was in a terminal financial condition, over a merger between the two clubs to create the North Fitzroy Kangaroos Football Club; however, Fitzroy ultimately merged with the Brisbane Bears instead. Seeking new markets and greater financial security in an increasingly corporatized AFL environment, the club officially dropped the title "North Melbourne" from its logo in 1999, from which time the team played only as the "Kangaroos". During the successful 1999 season, North Melbourne played home games in Sydney with a view to becoming a second team in New South Wales; however, the experiment was not successful, with crowds averaging only 12,000. 21st century The 21st century did not begin well for North Melbourne.
Its decade-long onfield potency was in decline, and questions were raised about its financial position and long-term sustainability. Furthermore, three of the people most important to the club's success in the 1990s left under acrimonious circumstances: CEO Greg Miller departed; captain Wayne Carey left prior to the 2002 season following an extramarital affair with the wife of teammate and vice-captain Anthony Stevens; and coach Denis Pagan was lured to Carlton at the end of 2002. Pagan was replaced by 1996 premiership player Dani Laidley, who had previously been an assistant coach at Collingwood from 1999 until the end of the 2002 season. On a post-season holiday, several players were caught in the 2002 Bali bombing terrorist attack, notably defender Jason McCartney, who suffered second-degree burns to over 50% of his body while carrying others to safety and nearly died during surgery after being flown back to Melbourne. In what is regarded as one of the most inspirational stories of Australian rules football and Australian sport in general, McCartney successfully returned to action on 6 June 2003 against Richmond at Docklands Stadium. Playing at full forward, he took a mark in the final quarter, scored a goal from the resulting set shot and set up Leigh Harding's winning goal with two minutes remaining. McCartney retired immediately after the game, citing that his recovery had left him spent, and he was chaired from the ground. McCartney wore the numbers "88" and "202" on the front of his long-sleeved guernsey for the match, signifying the Australian and total numbers of victims of the Bali bombings, while many in the crowd bore signs reading "Bali 88/202". Onfield, the club reached the elimination finals in 2002 and 2005, but otherwise failed to reach the finals from 2001 until 2006.
After two seasons of finals, North Melbourne dropped to 13th in 2009, and coach Dani Laidley announced her resignation, with Darren Crocker acting as caretaker coach for the rest of the season before being replaced by ex-Brisbane Lions premiership player and Collingwood assistant coach Brad Scott. An A$15 million redevelopment of the Arden Street Oval, which had started in 2006, was completed in 2009, giving the club top-class training facilities. Brad Scott era North Melbourne struggled in its first two years under Brad Scott, finishing 9th in both 2010 and 2011. In 2012, the club returned to the finals for the first time since 2008, finishing the season in 8th place, but went down to the West Coast Eagles by 96 points in an elimination final. In 2012, the club began a three-year deal to play two games each year at Blundstone Arena in Hobart, Tasmania. The club finished 10th in 2013 in a season full of close losses. Nick Dal Santo signed with the club at the end of the 2013 season as a restricted free agent. In 2014, North Melbourne finished 6th at the end of the home-and-away season and reached 40,000 members for the first time in the club's history. In September, North Melbourne went on to defeat Essendon by 12 points in the 2nd Elimination Final, only taking the lead in the last quarter. The following week, North Melbourne beat Geelong in the 2nd Semi-final by 6 points, advancing to its first preliminary final since 2007. Its finals campaign came to a disappointing end at Stadium Australia, where the club was beaten by Sydney by 71 points. In 2015 the club made history by becoming the first team to qualify for a preliminary final from 8th spot, losing to the West Coast Eagles by 25 points after leading at half time. In 2016, North Melbourne won its first nine matches, the club's best start to a season in its VFL/AFL history. On 27 July 2016, the club announced it had surpassed 45,000 members for the first time in its history.
In 2016, the Kangaroos fielded what was the oldest team in AFL history. After the midpoint of the season, however, they fell away and struggled against some of the worst teams in the competition. Midway through the 2019 season, Brad Scott made the decision to leave North Melbourne after 10 years at the club, having taken the team to the finals on multiple occasions. He holds the record for the most games coached at a single club without making a Grand Final. Rhyce Shaw era Rhyce Shaw took over as caretaker coach in mid-2019 and was later awarded the position of head coach for the 2020 season. In a disastrous 2020 season, North won only three games and finished second-last, just above the wooden spooners, the Adelaide Crows, on percentage. Rhyce Shaw left the club in late October 2020 due to personal issues, bringing his short tenure as head coach to an end. David Noble era Noble's first year, 2021, started off with four wins, 17 losses and one draw, placing North Melbourne at the bottom of the ladder for the 2021 season. Club symbols and identity Name and mascot The club was widely known as the "Shinboners" for much of its early history. The origins of the nickname are unknown, but it may have had something to do with the club's reputation for targeting the shinbones of opposition players, or with local butchers who showed their support for North by dressing up beef leg-bones in the club colours. By 1926, the club was known as the "Blue Birds", but this nickname did not last. It was Phonse Tobin, North president from 1953 to 1956, who oversaw the club adopting the kangaroo emblem in 1954; Tobin found the image of a shinbone unsavoury and wanted the club to have a mascot it could show with pride. In selecting a new name, he wanted something characteristically Australian and was inspired by a large kangaroo he saw on display outside a city store.
The official name of the club is North Melbourne, but the club has gone by several other names over the years. The club was founded as the "North Melbourne Football Club", but changed to "North Melbourne cum Albert Park" after merging with Albert Park in 1876. Following the reformation of the club in 1877, it was known as the "Hotham Football Club", but took the name "North Melbourne" again in 1888. In 1998 the club proposed changing its name to the "Northern Kangaroos", but the proposal was rejected by the AFL. From 1999 to 2007, the club traded without much success as "The Kangaroos" in a bid to increase its appeal nationally; this decision was reversed at the end of 2007, and the club reverted to the name "North Melbourne". Club song "Join in the Chorus" is the official anthem of the North Melbourne Football Club. It is sung to the tune of a Scottish folk song from around 1911, "A Wee Deoch an Doris". The song is generally sung, in accordance with common football tradition, after a victory. It is also played before every match. "Join in the Chorus" is believed to be the oldest club anthem of any AFL club and has been associated with North from its early VFA days. The preamble of the song originates from the score of a theatre musical called Australia: Heart to Heart and Hand to Hand, written by Toso Taylor in the 1890s in pre-federation Australia. The second verse is unknown in origin and was presumably added later by members of the club when the song was chosen. The chorus was appropriated from a song written and performed by Scottish musician Harry Lauder. The recording currently used by the club was performed by the Fable Singers in April 1972 and only includes the choruses. The song has a strong Victorian heritage and has traditionally been sung by the Victorian state football and cricket teams. The lyrics have occasionally been changed, including updating the year in the song (e.g.
"North Melbourne will be premiers in 1993"), or to remove the words "North Melbourne" during the period when the club was competing only as the Kangaroos. For the 2015 premiership season, You Am I's lead singer, Tim Rogers, a North Melbourne supporter, announced that he would assist in an updated version of the song including the two verses. This version is only played at North home games as the team runs onto the ground. "Shinboner spirit" The term "Shinboner spirit" refers to the camaraderie and determination of players and members of the North Melbourne Football Club. The term persists to the modern day, despite North Melbourne having switched its official nickname from the Shinboners to the Kangaroos in the 1950s. Because it relates to the club's original nickname, the Shinboner spirit is often associated with the complete history of the club. In 2005, to celebrate the club's 80th anniversary of senior competition in the VFL and the 30th anniversary of its first VFL premiership, the Kangaroos held a "Shinboner Spirit" gala event attended by almost all of the club's surviving players. At the awards ceremony, the key Shinboners of the past 80 years were acknowledged and Glenn Archer was named the "Shinboner of the Century". Guernsey The North Melbourne Football Club has a long history of wearing various designs in the colours of royal blue and white. Most of the club's earliest jumpers were long-sleeved, not the sleeveless design common today. In their early years the club sported a hooped design when they took to the field. This changed at the behest of the VFA in 1884, which insisted that Hotham change its jumpers to vertical stripes to provide a visible contrast between Hotham and Geelong. After 1884 the vertical top was worn more often, usually in the lace-up design in the gallery below. After the merger with West Melbourne, North used a composite jumper that incorporated West
continued its refusal to export uranium to India despite diplomatic pressure from India. In November 2011, Australian Prime Minister Julia Gillard announced a desire to allow exports to India, a policy change which was authorized by her party's national conference in December. The following month, Gillard overturned Australia's long-standing ban on exporting uranium to India. She further said, "We should take a decision in the national interest, a decision about strengthening our strategic partnership with India in this the Asian century," and said that any agreement to sell uranium to India would include strict safeguards to ensure it would only be used for civilian purposes, and not end up in nuclear weapons. On 5 September 2014 Tony Abbott, Gillard's successor as Australian Prime Minister, sealed a civil nuclear deal to sell uranium to India. "We signed a nuclear cooperation agreement because Australia trusts India to do the right thing in this area, as it has been doing in other areas," Abbott told reporters after he and Indian Prime Minister Narendra Modi signed a pact to sell uranium for peaceful power generation. Pakistan In May 1998, following India's nuclear tests earlier that month, Pakistan conducted two sets of nuclear tests, the Chagai-I and Chagai-II. Although there is little confirmed information in public, as of 2015 Pakistan was estimated to have as many as 120 warheads. According to analyses of the Carnegie Endowment for International Peace and the Stimson Center, Pakistan has enough fissile material for 350 warheads. Pakistani officials argue that the NPT is discriminatory. When asked at a briefing in 2015 whether Islamabad would sign the NPT if Washington requested it, Foreign Secretary Aizaz Ahmad Chaudhry was quoted as responding "It is a discriminatory treaty. Pakistan has the right to defend itself, so Pakistan will not sign the NPT. Why should we?"
Until 2010, Pakistan had always maintained the position that it would sign the NPT if India did so. In 2010, Pakistan abandoned this historic position and stated that it would join the NPT only as a recognized nuclear-weapon state. The NSG Guidelines currently rule out nuclear exports by all major suppliers to Pakistan, with very narrow exceptions, since it does not have full-scope IAEA safeguards (i.e. safeguards on all its nuclear activities). Pakistan has sought to reach an agreement similar to that with India, but these efforts have been rebuffed by the United States and other NSG members, on the grounds that Pakistan's track record as a nuclear proliferator makes it impossible for it to have any sort of nuclear deal in the near future. By 2010, China had reportedly signed a civil nuclear agreement with Pakistan, using the justification that the deal was "peaceful". The British government criticized this, on the grounds that "the time is not yet right for a civil nuclear deal with Pakistan". China did not seek formal approval from the Nuclear Suppliers Group, and claimed instead that its cooperation with Pakistan was "grandfathered" when China joined the NSG, a claim that was disputed by other NSG members. Pakistan applied for membership on 19 May 2016, supported by Turkey and China. However, many NSG members opposed Pakistan's membership bid due to its track record, including the illicit procurement network of Pakistani scientist A.Q. Khan, which aided the nuclear programs of Iran, Libya and North Korea. Pakistani officials reiterated the request in August 2016. Israel Israel has a long-standing policy of deliberate ambiguity with regard to its nuclear program (see List of countries with nuclear weapons). Israel has been developing nuclear technology at its Dimona site in the Negev since 1958, and some nonproliferation analysts estimate that Israel may have stockpiled between 100 and 200 warheads using reprocessed plutonium.
Israel's position on the NPT is explained in terms of "Israeli exceptionality", a term coined by Professor Gerald M. Steinberg, in reference to the perception that the country's small size, overall vulnerability, and history of deep hostility and large-scale attacks by neighboring states require a deterrent capability. The Israeli government refuses to confirm or deny possession of nuclear weapons, although this is now regarded as an open secret after Israeli junior nuclear technician Mordechai Vanunu (subsequently arrested and sentenced for treason by Israel) published evidence about the program to the British Sunday Times in 1986. On 18 September 2009 the General Conference of the International Atomic Energy Agency called on Israel to open its nuclear facilities to IAEA inspection and adhere to the non-proliferation treaty as part of a resolution on "Israeli nuclear capabilities", which passed by a narrow margin of 49–45 with 16 abstentions. The chief Israeli delegate stated that "Israel will not co-operate in any matter with this resolution." However, similar resolutions were defeated in 2010, 2013, 2014, and 2015. As with Pakistan, the NSG Guidelines currently rule out nuclear exports by all major suppliers to Israel. North Korea North Korea acceded to the treaty on 12 December 1985, but gave notice of withdrawal from the treaty on 10 January 2003, following U.S. allegations that it had started an illegal enriched uranium weapons program and the subsequent U.S. halt of fuel oil shipments under the Agreed Framework, which had resolved plutonium weapons issues in 1994. The withdrawal became effective on 10 April 2003, making North Korea the first state ever to withdraw from the treaty. North Korea had once before announced withdrawal, on 12 March 1993, but suspended that notice before it came into effect.
On 10 February 2005, North Korea publicly declared that it possessed nuclear weapons and pulled out of the six-party talks hosted by China to find a diplomatic solution to the issue. "We had already taken the resolute action of pulling out of the Nuclear Non-Proliferation Treaty and have manufactured nuclear arms for self-defence to cope with the Bush administration's evermore undisguised policy to isolate and stifle the DPRK [Democratic People's Republic of Korea]," a North Korean Foreign Ministry statement said regarding the issue. Six-party talks resumed in July 2005. On 19 September 2005, North Korea announced that it would agree to a preliminary accord. Under the accord, North Korea would scrap all of its existing nuclear weapons and nuclear production facilities, rejoin the NPT, and readmit IAEA inspectors. The difficult issue of the supply of light water reactors to replace North Korea's indigenous nuclear power plant program, as per the 1994 Agreed Framework, was left to be resolved in future discussions. On the next day North Korea reiterated its known view that until it is supplied with a light water reactor it will not dismantle its nuclear arsenal or rejoin the NPT. On 2 October 2006, the North Korean foreign minister announced that his country was planning to conduct a nuclear test "in the future", although it did not state when. On Monday, 9 October 2006 at 01:35:28 (UTC), the United States Geological Survey detected a magnitude 4.3 seismic event north of Kimchaek, North Korea, indicating a nuclear test. The North Korean government announced shortly afterward that it had completed a successful underground test of a nuclear fission device. In 2007, reports from Washington suggested that the 2002 CIA reports stating that North Korea was developing an enriched uranium weapons program, which led to North Korea leaving the NPT, had overstated or misread the intelligence.
On the other hand, even apart from these press allegations, there remains some information in the public record indicating the existence of a uranium effort. Quite apart from the fact that North Korean First Vice Minister Kang Sok Ju at one point admitted the existence of a uranium enrichment program, Pakistan's then-President Musharraf revealed that the A.Q. Khan proliferation network had provided North Korea with a number of gas centrifuges designed for uranium enrichment. Additionally, press reports have cited U.S. officials to the effect that evidence obtained in dismantling Libya's WMD programs points toward North Korea as the source for Libya's uranium hexafluoride (UF6), which, if true, would mean that North Korea has a uranium conversion facility for producing feedstock for centrifuge enrichment. Iran Iran has been a party to the NPT since 1970 but was found in non-compliance with its NPT safeguards agreement, and the status of its nuclear program remains in dispute. In November 2003 IAEA Director General Mohamed ElBaradei reported that Iran had repeatedly and over an extended period failed to meet its safeguards obligations under the NPT with respect to: the reporting of nuclear material imported to Iran; the reporting of the subsequent processing and use of imported nuclear material; and the declaration of facilities and other locations where nuclear material had been stored and processed. After about two years of EU3-led diplomatic efforts and Iran temporarily suspending its enrichment program, the IAEA Board of Governors, acting under Article XII.C of the IAEA Statute, found in a rare non-consensus decision with 12 abstentions that these failures constituted non-compliance with the IAEA safeguards agreement. This was reported to the UN Security Council in 2006, after which the Security Council passed a resolution demanding that Iran suspend its enrichment. Instead, Iran resumed its enrichment program.
The IAEA has been able to verify the non-diversion of declared nuclear material in Iran, and is continuing its work on verifying the absence of undeclared activities. In February 2008, the IAEA also reported that it was working to address "alleged studies" of weaponization, based on documents provided by certain Member States, which those states claimed originated from Iran. Iran rejected the allegations as "baseless" and the documents as "fabrications". In June 2009, the IAEA reported that Iran had not "cooperated with the Agency in connection with the remaining issues ... which need to be clarified to exclude the possibility of military dimensions to Iran's nuclear program." The United States concluded that Iran violated its Article III NPT safeguards obligations, and further argued based on circumstantial evidence that Iran's enrichment program was for weapons purposes and therefore violated Iran's Article II nonproliferation obligations. The November 2007 US National Intelligence Estimate (NIE) later concluded that Iran had halted an active nuclear weapons program in the fall of 2003 and that it had remained halted as of mid-2007. The NIE's "Key Judgments", however, also made clear that what Iran had actually stopped in 2003 was only "nuclear weapon design and weaponization work and covert uranium conversion-related and uranium enrichment-related work", namely those aspects of Iran's nuclear weapons effort that had not by that point already been leaked to the press and become the subject of IAEA investigations. Since Iran's uranium enrichment program at Natanz, and its continuing work on a heavy water reactor at Arak that would be ideal for plutonium production, began secretly years before in conjunction with the very weaponization work the NIE discussed and for the purpose of developing nuclear weapons, many observers find Iran's continued development of fissile material production capabilities distinctly worrying.
Particularly because fissile material availability has long been understood to be the principal obstacle to nuclear weapons development and the primary "pacing element" for a weapons program, the fact that Iran has reportedly suspended weaponization work may not mean very much. As the Bush administration's Director of National Intelligence (DNI) Mike McConnell put it in 2008, the aspects of its work that Iran allegedly suspended were thus "probably the least significant part of the program." Iran has stated that it has a legal right to enrich uranium for peaceful purposes under the NPT, and further said that it had "constantly complied with its obligations under the NPT and the Statute of the International Atomic Energy Agency". Iran has also stated that its enrichment program is part of its civilian nuclear energy program, which is allowed under Article IV of the NPT. The Non-Aligned Movement has welcomed the continuing cooperation of Iran with the IAEA and reaffirmed Iran's right to the peaceful uses of nuclear technology. During his tenure as United Nations Secretary-General, between 2007 and 2016, Ban Ki-moon welcomed the continued dialogue between Iran and the IAEA and urged a peaceful resolution of the issue. In April 2010, during the signing of the U.S.-Russia New START Treaty, President Obama said that the United States, Russia, and other nations were demanding that Iran face consequences for failing to fulfill its obligations under the Nuclear Non-Proliferation Treaty, saying "We will not tolerate actions that flout the NPT, risk an arms race in a vital region, and threaten the credibility of the international community and our collective security." In 2015, Iran negotiated a nuclear deal with the P5+1, a group of countries consisting of the five permanent members of the UN Security Council (China, France, Russia, the United Kingdom, and the United States) plus Germany.
On 14 July 2015, the P5+1 and Iran concluded the Joint Comprehensive Plan of Action, lifting sanctions on Iran in exchange for constraints and on Iran's nuclear activities and increased verification by the IAEA. On 8 May 2018, President Donald Trump withdrew the United States from the JCPOA and reimposed sanctions on Iran. South Africa South Africa is the only country that developed nuclear weapons by itself and later dismantled them – unlike the former Soviet states Ukraine, Belarus and Kazakhstan, which inherited nuclear weapons from the former USSR and also acceded to the NPT as non-nuclear weapon states. During the days of apartheid, the South African government developed a deep fear of both a black uprising and the threat of communism. This led to the development of a secret nuclear weapons program as an ultimate deterrent. South Africa has a large supply of uranium, which is mined in the country's gold mines. The government built a nuclear research facility at Pelindaba near Pretoria where uranium was enriched to fuel grade for the Koeberg Nuclear Power Station as well as weapon grade for bomb production. In 1991, after international pressure and when a change of government was imminent, South African Ambassador to the United States Harry Schwarz signed the Nuclear Non-Proliferation Treaty. In 1993, the then president Frederik Willem de Klerk openly admitted that the country had developed a limited nuclear weapon capability. These weapons were subsequently dismantled before South Africa acceded to the NPT and opened itself up to IAEA inspection. In 1994, the IAEA completed its work and declared that the country had fully dismantled its nuclear weapons program. Libya Libya had signed (in 1968) and ratified (in 1975) the Nuclear Non-Proliferation Treaty and was subject to IAEA nuclear safeguards inspections, but undertook a secret nuclear weapons development program in violation of its NPT obligations, using material and technology provided by the A.Q. 
Khan proliferation network—including actual nuclear weapons designs allegedly originating in China. Libya began secret negotiations with the United States and the United Kingdom in March 2003 over potentially eliminating its WMD programs. In October 2003, Libya was embarrassed by the interdiction of a shipment of Pakistani-designed centrifuge parts sent from Malaysia, also as part of A. Q. Khan's proliferation ring. In December 2003, Libya announced that it had agreed to eliminate all its WMD programs, and permitted U.S. and British teams (as well as IAEA inspectors) into the country to assist this process and verify its completion. The nuclear weapons designs, gas centrifuges for uranium enrichment, and other equipment—including prototypes for improved SCUD ballistic missiles—were removed from Libya by the United States. (Libyan chemical weapons stocks and chemical bombs were also destroyed on site with international verification, with Libya joining the Chemical Weapons Convention.) Libya's non-compliance with its IAEA safeguards was reported to the U.N. Security Council, but with no action taken, as Libya's return to compliance with safeguards and Article II of the NPT was welcomed. In 2011, the Libyan government of Muammar al-Gaddafi was overthrown in the Libyan Civil War with the assistance of a military intervention by NATO forces acting under the auspices of UN Security Council Resolution 1973. Gaddafi's downfall 8 years after the disarmament of Libya, in which Gaddafi agreed to eliminate Libya's nuclear weapons program, has been repeatedly cited by North Korea, which views Gaddafi's fate as a "cautionary tale" that influences North Korea's decision to maintain and intensify its nuclear weapons program and arsenal despite pressure to denuclearize. Syria Syria is a state party to the NPT since 1969 and has a limited civil nuclear program. Before the advent of the Syrian Civil War it was known to operate only one small Chinese-built research reactor, SRR-1. 
Despite being a proponent of a Weapons of Mass Destruction Free Zone in the Middle East the country was accused of pursuing a military nuclear program with a reported nuclear facility in a desert Syrian region of Deir ez-Zor. The reactor's components had likely been designed and manufactured in North Korea, with the reactor's striking similarity in shape and size to the North Korean Yongbyon Nuclear Scientific Research Center. That information alarmed Israeli military and intelligence to such a degree that the idea of a targeted airstrike was conceived. It resulted in Operation Orchard, that took place on 6 September 2007 and saw as many as eight Israeli aircraft taking part. The Israeli government is said to have bounced the idea of the operation off of the US Bush administration, although the latter declined to participate. The nuclear reactor was destroyed in the attack, which also killed about ten North Korean workers. The attack did not cause an international outcry or any serious Syrian retaliatory moves as both parties tried to keep it secret: Despite a half-century state of war declared by surrounding states, Israel did not want publicity as regards its breach of the ceasefire, while Syria was not willing to acknowledge its clandestine nuclear program. Leaving the treaty Article X allows a state to leave the treaty if "extraordinary events, related to the subject matter of this Treaty, have jeopardized the supreme interests of its country", giving three months' (ninety days') notice. The state is required to give reasons for leaving the NPT in this notice. NATO states argue that when there is a state of "general war" the treaty no longer applies, effectively allowing the states involved to leave the treaty with no notice. This is a necessary argument to support the NATO nuclear weapons sharing policy. 
NATO's argument is based on the phrase "the consequent need to make every effort to avert the danger of such a war" in the treaty preamble, inserted at the behest of U.S. diplomats, arguing that the treaty would at that point have failed to fulfill its function of prohibiting a general war and thus no longer be binding. See United States–NATO nuclear weapons sharing above. North Korea has also caused an uproar by its use of this provision of the treaty. Article X.1 only requires a state to give three months' notice in total, and does not provide for other states to question a state's interpretation of "supreme interests of its country". In 1993, North Korea gave notice to withdraw from the NPT. However, after 89 days, North Korea reached agreement with the United States to freeze its nuclear program under the Agreed Framework and "suspended" its withdrawal notice. In October 2002, the United States accused North Korea of violating the Agreed Framework by pursuing a secret uranium enrichment program, and suspended shipments of heavy fuel oil under that agreement. In response, North Korea expelled IAEA inspectors, disabled IAEA equipment, and, on 10 January 2003, announced that it was ending the suspension of its previous NPT withdrawal notification. North Korea said that only one more day's notice was sufficient for withdrawal from the NPT, as it had given 89 days before. The IAEA Board of Governors rejected this interpretation. Most countries held that a new three-months withdrawal notice was required, and some questioned whether North Korea's notification met the "extraordinary events" and "supreme interests" requirements of the treaty. The Joint Statement of 19 September 2005 at the end of the Fourth Round of the Six-Party Talks called for North Korea to "return" to the NPT, implicitly acknowledging that it had withdrawn. 
Recent and coming events The main outcome of the 2000 Conference was the adoption by consensus of a comprehensive Final Document, which included among other things "practical steps for the systematic and progressive efforts" to implement the disarmament provisions of the NPT, commonly referred to as the Thirteen Steps. On 18 July 2005, US President George W. Bush met Indian Prime Minister Manmohan Singh and declared that he would work to change US law and international rules to permit trade in US civilian nuclear technology with India. At the time, British columnist George Monbiot argued that the U.S.-India nuclear deal, in combination with US attempts to deny Iran (an NPT signatory) civilian nuclear fuel-making technology, might destroy the NPT regime. In the first half of 2010, it was strongly believed that China had signed a civilian nuclear deal with Pakistan claiming that the deal was "peaceful". Arms control advocates criticised the reported China-Pakistan deal as they did in case of U.S.-India deal claiming that both the deals violate the NPT by facilitating nuclear programmes in states which are not parties to the NPT. Some reports asserted that the deal was a strategic move by China to balance US influence in South-Asia. According to a report published by U.S. Department of Defense in 2001, China had provided Pakistan with nuclear materials and has given critical technological assistance in the construction of Pakistan's nuclear weapons development facilities, in violation of the Nuclear Non-Proliferation Treaty, of which China even then was a signatory. At the Seventh Review Conference in May 2005, there were stark differences between the United States, which wanted the conference to focus on non-proliferation, especially on its allegations against Iran, and most other countries, who emphasized the lack of serious nuclear disarmament by the nuclear powers. The non-aligned countries reiterated their position emphasizing the need for nuclear disarmament. 
The 2010 Review Conference was held in May 2010 in New York City, and adopted a final document that included a summary by the Review Conference President, Ambassador Libran Capactulan of the Philippines, and an Action Plan that was adopted by consensus. The 2010 conference was generally considered a success because it reached consensus where the previous Review Conference in 2005 ended in disarray, a fact that many attributed to the U.S. President Barack Obama's commitment to nuclear nonproliferation and disarmament. Some have warned that this success raised unrealistically high expectations that could lead to failure at the next Review Conference in 2015. The "Global Summit on Nuclear Security" took place 12–13 April 2010. The summit was proposed by President Obama in Prague and was intended to strengthen the Nuclear Non-Proliferation Treaty in conjunction with the Proliferation Security Initiative and the Global Initiative to Combat Nuclear Terrorism. Forty seven states and three international organizations took part in the summit, which issued a communiqué and a work plan. For further information see 2010 Nuclear Security Summit. In a major policy speech at the Brandenburg Gate in Berlin on 19 June 2013, United States President Barack Obama outlined plans to further reduce the number of warheads in the U.S. nuclear arsenal. According to Foreign Policy, Obama proposed a "one-third reduction in strategic nuclear warheads—on top of the cuts already required by the New START treaty—bringing the number of deployed warheads to about 1,000". Obama is seeking to "negotiate these reductions with Russia to continue to move beyond Cold War nuclear postures," according to briefing documents provided to Foreign Policy. In the same speech, Obama emphasized his administration's efforts to isolate any nuclear weapons capabilities emanating from Iran and North Korea. 
He also called for a renewed bipartisan effort in the United States Congress to ratify the Comprehensive Nuclear-Test-Ban Treaty and called on countries to negotiate a new treaty to end the production of fissile material for nuclear weapons. On 24 April 2014, it was announced that the nation of the Marshall Islands has brought suit in The Hague against the United States, the former Soviet Union, the United Kingdom, France, China, India, Pakistan, North Korea and Israel seeking to have the disarmament provisions of the NNPT enforced. The 2015 Review Conference of the Parties to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) was held at the United Nations in New York from 27 April to 22 May 2015 and presided over by Ambassador Taous Feroukhi of Algeria. The Treaty, particularly article VIII, paragraph 3, envisages a review of the operation of the Treaty every five years, a provision which was reaffirmed by the States parties at the 1995 NPT Review and Extension Conference and the 2000 NPT Review Conference. At the 2015 NPT Review Conference, States parties examined the implementation of the Treaty's provisions since 2010. Despite intensive consultations, the Conference was not able to reach agreement on the substantive part of the draft Final Document. Criticism and responses Over the years the NPT has come to be seen by many Third World states as "a conspiracy of the nuclear 'haves' to keep the nuclear 'have-nots' in their place". This argument has roots in Article VI of the treaty which "obligates the nuclear weapons states to liquidate their nuclear stockpiles and pursue complete disarmament. The non-nuclear states see no signs of this happening". Some argue that the NWS have not fully complied with their disarmament obligations under Article VI of the NPT. 
Some countries such as India have criticized the NPT, because it "discriminated against states not possessing nuclear weapons on 1 January 1967," while Iran and numerous Arab states have criticized Israel for not signing the NPT. There has been disappointment with the limited progress on nuclear disarmament, where the five authorized nuclear weapons states still have 13,400 warheads (as of February 2021) among them. As noted above, the International Court of Justice, in its advisory opinion on the Legality of the Threat or Use of Nuclear Weapons, stated that "there exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control". Some critics of the nuclear-weapons states contend that they have failed to comply with Article VI by failing to make disarmament the driving force in national planning and policy with respect to nuclear weapons, even while they ask other states to plan for their security without nuclear weapons. The United States responds to criticism of its disarmament record by pointing out that, since the end of the Cold War, it has eliminated over 13,000 nuclear weapons, and eliminated over 80% of its deployed strategic warheads and 90% of non-strategic warheads deployed to NATO, in the process eliminating whole categories of warheads and delivery systems and reducing its reliance on nuclear weapons. U.S. officials have
2005, North Korea announced that it would agree to a preliminary accord. Under the accord, North Korea would scrap all of its existing nuclear weapons and nuclear production facilities, rejoin the NPT, and readmit IAEA inspectors. The difficult issue of the supply of light water reactors to replace North Korea's indigenous nuclear power plant program, as per the 1994 Agreed Framework, was left to be resolved in future discussions. The next day, North Korea reiterated its position that it would not dismantle its nuclear arsenal or rejoin the NPT until it was supplied with a light water reactor. On 2 October 2006, the North Korean foreign minister announced that his country was planning to conduct a nuclear test "in the future", although it did not state when. On Monday, 9 October 2006, at 01:35:28 (UTC), the United States Geological Survey detected a magnitude 4.3 seismic event north of Kimchaek, North Korea, indicating a nuclear test. The North Korean government announced shortly afterward that it had completed a successful underground test of a nuclear fission device. In 2007, reports from Washington suggested that the 2002 CIA reports stating that North Korea was developing an enriched uranium weapons program, which had led to North Korea leaving the NPT, had overstated or misread the intelligence. Even apart from these press allegations, however, some information in the public record indicates the existence of a uranium effort: North Korean First Vice Minister Kang Sok Ju at one point admitted the existence of a uranium enrichment program, and Pakistan's then-President Musharraf revealed that the A.Q. Khan proliferation network had provided North Korea with a number of gas centrifuges designed for uranium enrichment. Additionally, press reports have cited U.S. 
officials to the effect that evidence obtained in dismantling Libya's WMD programs points toward North Korea as the source for Libya's uranium hexafluoride (UF6) – which, if true, would mean that North Korea has a uranium conversion facility for producing feedstock for centrifuge enrichment.
Iran
Iran has been a party to the NPT since 1970 but was found in non-compliance with its NPT safeguards agreement, and the status of its nuclear program remains in dispute. In November 2003, IAEA Director General Mohamed ElBaradei reported that Iran had repeatedly and over an extended period failed to meet its safeguards obligations under the NPT with respect to: the reporting of nuclear material imported to Iran; the reporting of the subsequent processing and use of that material; and the declaration of facilities and other locations where nuclear material had been stored and processed. After about two years of EU3-led diplomatic efforts, during which Iran temporarily suspended its enrichment program, the IAEA Board of Governors, acting under Article XII.C of the IAEA Statute, found in a rare non-consensus decision, with 12 abstentions, that these failures constituted non-compliance with the IAEA safeguards agreement. This was reported to the UN Security Council in 2006, after which the Security Council passed a resolution demanding that Iran suspend its enrichment. Instead, Iran resumed its enrichment program. The IAEA has been able to verify the non-diversion of declared nuclear material in Iran, and is continuing its work on verifying the absence of undeclared activities. In February 2008, the IAEA also reported that it was working to address "alleged studies" of weaponization, based on documents provided by certain Member States, which those states claimed originated from Iran. Iran rejected the allegations as "baseless" and the documents as "fabrications". In June 2009, the IAEA reported that Iran had not "cooperated with the Agency in connection with the remaining issues ... 
which need to be clarified to exclude the possibility of military dimensions to Iran's nuclear program." The United States concluded that Iran violated its Article III NPT safeguards obligations, and further argued, based on circumstantial evidence, that Iran's enrichment program was for weapons purposes and therefore violated Iran's Article II nonproliferation obligations. The November 2007 US National Intelligence Estimate (NIE) later concluded that Iran had halted an active nuclear weapons program in the fall of 2003 and that it had remained halted as of mid-2007. The NIE's "Key Judgments", however, also made clear that what Iran had actually stopped in 2003 was only "nuclear weapon design and weaponization work and covert uranium conversion-related and uranium enrichment-related work", namely those aspects of Iran's nuclear weapons effort that had not by that point already been leaked to the press and become the subject of IAEA investigations. Since Iran's uranium enrichment program at Natanz—and its continuing work on a heavy water reactor at Arak that would be ideal for plutonium production—began secretly years before in conjunction with the very weaponization work the NIE discussed, and for the purpose of developing nuclear weapons, many observers find Iran's continued development of fissile material production capabilities distinctly worrying. Particularly because fissile material availability has long been understood to be the principal obstacle to nuclear weapons development and the primary "pacing element" for a weapons program, the fact that Iran reportedly suspended weaponization work may not mean very much. As the Bush administration's Director of National Intelligence (DNI) Mike McConnell put it in 2008, the aspects of its work that Iran allegedly suspended were "probably the least significant part of the program." 
Iran has stated that it has a legal right to enrich uranium for peaceful purposes under the NPT, and that it has "constantly complied with its obligations under the NPT and the Statute of the International Atomic Energy Agency". Iran has also stated that its enrichment program is part of its civilian nuclear energy program, which is allowed under Article IV of the NPT. The Non-Aligned Movement has welcomed the continuing cooperation of Iran with the IAEA and reaffirmed Iran's right to the peaceful uses of nuclear technology. Early during his tenure as United Nations Secretary-General (2007–2016), Ban Ki-moon welcomed the continued dialogue between Iran and the IAEA and urged a peaceful resolution of the issue. In April 2010, during the signing of the U.S.-Russia New START Treaty, President Obama said that the United States, Russia, and other nations were demanding that Iran face consequences for failing to fulfill its obligations under the Nuclear Non-Proliferation Treaty, saying "We will not tolerate actions that flout the NPT, risk an arms race in a vital region, and threaten the credibility of the international community and our collective security." In 2015, Iran negotiated a nuclear deal with the P5+1, a group of countries consisting of the five permanent members of the UN Security Council (China, France, Russia, the United Kingdom, and the United States) plus Germany. On 14 July 2015, the P5+1 and Iran concluded the Joint Comprehensive Plan of Action (JCPOA), lifting sanctions on Iran in exchange for constraints on Iran's nuclear activities and increased verification by the IAEA. On 8 May 2018, President Donald Trump withdrew the United States from the JCPOA and reimposed sanctions on Iran. 
South Africa
South Africa is the only country that developed nuclear weapons by itself and later dismantled them – unlike the former Soviet states Ukraine, Belarus and Kazakhstan, which inherited nuclear weapons from the former USSR and also acceded to the NPT as non-nuclear weapon states. During the days of apartheid, the South African government developed a deep fear of both a black uprising and the threat of communism. This led to the development of a secret nuclear weapons program as an ultimate deterrent. South Africa has a large supply of uranium, which is mined in the country's gold mines. The government built a nuclear research facility at Pelindaba near Pretoria, where uranium was enriched to fuel grade for the Koeberg Nuclear Power Station as well as to weapons grade for bomb production. In 1991, after international pressure and when a change of government was imminent, South African Ambassador to the United States Harry Schwarz signed the Nuclear Non-Proliferation Treaty. In 1993, then-President Frederik Willem de Klerk openly admitted that the country had developed a limited nuclear weapon capability. These weapons were subsequently dismantled before South Africa acceded to the NPT and opened itself up to IAEA inspection. In 1994, the IAEA completed its work and declared that the country had fully dismantled its nuclear weapons program.
Libya
Libya had signed (in 1968) and ratified (in 1975) the Nuclear Non-Proliferation Treaty and was subject to IAEA nuclear safeguards inspections, but undertook a secret nuclear weapons development program in violation of its NPT obligations, using material and technology provided by the A.Q. Khan proliferation network—including actual nuclear weapons designs allegedly originating in China. Libya began secret negotiations with the United States and the United Kingdom in March 2003 over potentially eliminating its WMD programs. 
In October 2003, Libya was embarrassed by the interdiction of a shipment of Pakistani-designed centrifuge parts sent from Malaysia, also as part of A. Q. Khan's proliferation ring. In December 2003, Libya announced that it had agreed to eliminate all its WMD programs, and permitted U.S. and British teams (as well as IAEA inspectors) into the country to assist this process and verify its completion. The nuclear weapons designs, gas centrifuges for uranium enrichment, and other equipment—including prototypes for improved SCUD ballistic missiles—were removed from Libya by the United States. (Libyan chemical weapons stocks and chemical bombs were also destroyed on site with international verification, and Libya joined the Chemical Weapons Convention.) Libya's non-compliance with its IAEA safeguards was reported to the U.N. Security Council, but no action was taken, as Libya's return to compliance with safeguards and Article II of the NPT was welcomed. In 2011, the Libyan government of Muammar al-Gaddafi was overthrown in the Libyan Civil War with the assistance of a military intervention by NATO forces acting under the auspices of UN Security Council Resolution 1973. Gaddafi's downfall, eight years after he agreed to eliminate Libya's nuclear weapons program, has been repeatedly cited by North Korea, which views Gaddafi's fate as a "cautionary tale" influencing its decision to maintain and intensify its nuclear weapons program and arsenal despite pressure to denuclearize.
Syria
Syria has been a state party to the NPT since 1969 and has a limited civil nuclear program. Before the advent of the Syrian Civil War it was known to operate only one small Chinese-built research reactor, SRR-1. Despite being a proponent of a Weapons of Mass Destruction Free Zone in the Middle East, the country was accused of pursuing a military nuclear program, with a reported nuclear facility in the desert region of Deir ez-Zor. 
The reactor's components had likely been designed and manufactured in North Korea, given the reactor's striking similarity in shape and size to the reactor at North Korea's Yongbyon Nuclear Scientific Research Center. That information alarmed the Israeli military and intelligence services to such a degree that the idea of a targeted airstrike was conceived. The result was Operation Orchard, which took place on 6 September 2007 and involved as many as eight Israeli aircraft. The Israeli government is said to have raised the idea of the operation with the US Bush administration, although the latter declined to participate. The nuclear reactor was destroyed in the attack, which also killed about ten North Korean workers. The attack did not cause an international outcry or any serious Syrian retaliatory moves, as both parties tried to keep it secret: despite a half-century state of war declared by surrounding states, Israel did not want publicity regarding its breach of the ceasefire, while Syria was not willing to acknowledge its clandestine nuclear program.
Leaving the treaty
Article X allows a state to leave the treaty if "extraordinary events, related to the subject matter of this Treaty, have jeopardized the supreme interests of its country", giving three months' (ninety days') notice. The state is required to give reasons for leaving the NPT in this notice. NATO states argue that when there is a state of "general war" the treaty no longer applies, effectively allowing the states involved to leave the treaty with no notice. This is a necessary argument to support the NATO nuclear weapons sharing policy. NATO's argument is based on the phrase "the consequent need to make every effort to avert the danger of such a war" in the treaty preamble, inserted at the behest of U.S. diplomats, arguing that the treaty would at that point have failed to fulfill its function of prohibiting a general war and thus would no longer be binding. See United States–NATO nuclear weapons sharing above. 
North Korea has also caused an uproar by its use of this provision of the treaty. Article X.1 only requires a state to give three months' notice in total, and does not provide for other states to question a state's interpretation of "supreme interests of its country". In 1993, North Korea gave notice to withdraw from the NPT. However, after 89 days, North Korea reached agreement with the United States to freeze its nuclear program under the Agreed Framework and "suspended" its withdrawal notice. In October 2002, the United States accused North Korea of violating the Agreed Framework by pursuing a secret uranium enrichment program, and suspended shipments of heavy fuel oil under that agreement. In response, North Korea expelled IAEA inspectors, disabled IAEA equipment, and, on 10 January 2003, announced that it was ending the suspension of its previous NPT withdrawal notification. North Korea said that only one more day's notice was sufficient for withdrawal from the NPT, as it had given 89 days before. The IAEA Board of Governors rejected this interpretation. Most countries held that a new three-month withdrawal notice was required, and some questioned whether North Korea's notification met the "extraordinary events" and "supreme interests" requirements of the treaty. The Joint Statement of 19 September 2005 at the end of the Fourth Round of the Six-Party Talks called for North Korea to "return" to the NPT, implicitly acknowledging that it had withdrawn.
Recent and coming events
The main outcome of the 2000 Conference was the adoption by consensus of a comprehensive Final Document, which included among other things "practical steps for the systematic and progressive efforts" to implement the disarmament provisions of the NPT, commonly referred to as the Thirteen Steps. On 18 July 2005, US President George W. 
Bush met Indian Prime Minister Manmohan Singh and declared that he would work to change US law and international rules to permit trade in US civilian nuclear technology with India. At the time, British columnist George Monbiot argued that the U.S.-India nuclear deal, in combination with US attempts to deny Iran (an NPT signatory) civilian nuclear fuel-making technology, might destroy the NPT regime. In the first half of 2010, China was widely believed to have signed a civilian nuclear deal with Pakistan, claiming that the deal was "peaceful". Arms control advocates criticised the reported China-Pakistan deal, as they had the U.S.-India deal, arguing that both deals violated the NPT by facilitating nuclear programmes in states that are not parties to the NPT. Some reports asserted that the deal was a strategic move by China to balance US influence in South Asia. According to a report published by the U.S. Department of Defense in 2001, China had provided Pakistan with nuclear materials and had given critical technological assistance in the construction of Pakistan's nuclear weapons development facilities, in violation of the Nuclear Non-Proliferation Treaty, of which China was even then a signatory. At the Seventh Review Conference in May 2005, there were stark differences between the United States, which wanted the conference to focus on non-proliferation, especially on its allegations against Iran, and most other countries, which emphasized the lack of serious nuclear disarmament by the nuclear powers. The non-aligned countries reiterated their position, emphasizing the need for nuclear disarmament. The 2010 Review Conference was held in May 2010 in New York City, and adopted a final document that included a summary by the Review Conference President, Ambassador Libran Cabactulan of the Philippines, and an Action Plan that was adopted by consensus. 
The 2010 conference was generally considered a success because it reached consensus where the previous Review Conference in 2005 had ended in disarray, a fact that many attributed to U.S. President Barack Obama's commitment to nuclear nonproliferation and disarmament. Some warned that this success raised unrealistically high expectations that could lead to failure at the next Review Conference in 2015. The "Global Summit on Nuclear Security" took place on 12–13 April 2010. The summit was proposed by President Obama in Prague and was intended to strengthen the Nuclear Non-Proliferation Treaty in conjunction with the Proliferation Security Initiative and the Global Initiative to Combat Nuclear Terrorism. Forty-seven states and three international organizations took part in the summit, which issued a communiqué and a work plan. For further information see 2010 Nuclear Security Summit. In a major policy speech at the Brandenburg Gate in Berlin on 19 June 2013, United States President Barack Obama outlined plans to further reduce the number of warheads in the U.S. nuclear arsenal. According to Foreign Policy, Obama proposed a "one-third reduction in strategic nuclear warheads—on top of the cuts already required by the New START treaty—bringing the number of deployed warheads to about 1,000". Obama sought to "negotiate these reductions with Russia to continue to move beyond Cold War nuclear postures," according to briefing documents provided to Foreign Policy. In the same speech, Obama emphasized his administration's efforts to isolate any nuclear weapons capabilities emanating from Iran and North Korea. He also called for a renewed bipartisan effort in the United States Congress to ratify the Comprehensive Nuclear-Test-Ban Treaty and called on countries to negotiate a new treaty to end the production of fissile material for nuclear weapons. 
On 24 April 2014, it was announced that the Marshall Islands had brought suit in The Hague against the United States, the former Soviet Union, the United Kingdom, France, China, India, Pakistan, North Korea and Israel, seeking to have the disarmament provisions of the NPT enforced. The 2015 Review Conference of the Parties to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) was held at the United Nations in New York from 27 April to 22 May 2015 and presided over by Ambassador Taous Feroukhi of Algeria. The Treaty, particularly article VIII, paragraph 3, envisages a review of the operation of the Treaty every five years, a provision which was reaffirmed by the States parties at the 1995 NPT Review and Extension Conference and the 2000 NPT Review Conference. At the 2015 NPT Review Conference, States parties examined the implementation of the Treaty's provisions since 2010. Despite intensive consultations, the Conference was not able to reach agreement on the substantive part of the draft Final Document. Criticism and responses Over the years the NPT has come to be seen by many Third World states as "a conspiracy of the nuclear 'haves' to keep the nuclear 'have-nots' in their place". This argument has roots in Article VI of the treaty, which "obligates the nuclear weapons states to liquidate their nuclear stockpiles and pursue complete disarmament. The non-nuclear states see no signs of this happening". Some argue that the NWS have not fully complied with their disarmament obligations under Article VI of the NPT. Some countries, such as India, have criticized the NPT because it "discriminated against states not possessing nuclear weapons on 1 January 1967," while Iran and numerous Arab states have criticized Israel for not signing the NPT. There has been disappointment with the limited progress on nuclear disarmament: the five authorized nuclear weapons states still had 13,400 warheads among them as of February 2021.
As noted above, the International Court of Justice, in its advisory opinion on the Legality of the Threat or Use of Nuclear Weapons, stated that "there exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control". Some critics of the nuclear-weapon states contend that they have failed to comply with Article VI by failing to make disarmament the driving force in national planning and policy with respect to nuclear weapons, even while they ask other states to plan for their security without nuclear weapons. The United States responds to criticism of its disarmament record by pointing out that, since the end of the Cold War, it has eliminated over 13,000 nuclear weapons, cut its deployed strategic warheads by over 80% and the non-strategic warheads deployed to NATO by 90%, and in the process eliminated whole categories of warheads and delivery systems while reducing its reliance on nuclear weapons. U.S. officials have also pointed to ongoing U.S. work to dismantle nuclear warheads. By the time the accelerated dismantlement efforts ordered by President George W. Bush were completed, the U.S. arsenal was less than a quarter of its size at the end of the Cold War, and smaller than it had been at any point since the Eisenhower administration, well before the drafting of the NPT. The United States has also purchased many thousands of weapons' worth of uranium formerly in Soviet nuclear weapons for conversion into reactor fuel. As a consequence of this latter effort, it has been estimated that the equivalent of one lightbulb in every ten in the United States is powered by nuclear fuel removed from warheads previously targeted at the United States and its allies during the Cold War. The U.S.
Special Representative for Nuclear Nonproliferation agreed that nonproliferation and disarmament are linked, noting that they can be mutually reinforcing but also that growing proliferation risks create an environment that makes disarmament more difficult. The United Kingdom, France and Russia likewise defend their nuclear disarmament records, and the five NPT NWS issued a joint statement in 2008 reaffirming their Article VI disarmament commitments. According to Thomas Reed and Danny Stillman, the "NPT has one giant loophole": Article IV gives each non-nuclear weapon state the "inalienable right" to pursue nuclear energy for the generation of power. A "number of high-ranking officials, even within the United Nations, have argued that they can do little to stop states using nuclear reactors to produce nuclear weapons". A 2009 United Nations report said: The revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation as these technologies can produce fissile materials that are directly usable in nuclear weapons. According to critics, those states which possess nuclear weapons, but are not authorized to do so under the NPT, have not paid a significant price for their pursuit of weapons capabilities. The NPT has also been explicitly weakened by a number of bilateral deals made by NPT signatories, notably the United States. Based on concerns over the slow pace of nuclear disarmament and the continued reliance on nuclear weapons in military and security concepts, doctrines and policies, the Treaty on the Prohibition of Nuclear Weapons was adopted in July 2017 and was subsequently opened for signature on 20 September 2017.
Entering into force on 22 January 2021, it prohibits each state party from the development, testing, production, stockpiling, stationing, transfer, use and threat of use of nuclear weapons, as well as assistance to those activities. It reaffirms in its preamble the vital role of the full and effective implementation of the NPT. See also 13 steps (an important section in the Final Document of the 2000 Review Conference of the Treaty) Comprehensive Nuclear-Test-Ban Treaty (CTBT) Humanitarian Initiative Global Initiative to Combat Nuclear Terrorism (GICNT) List of countries with nuclear weapons List of weapons of mass destruction treaties Missile Technology Control Regime (MTCR) New Agenda Coalition (NAC) Non-Proliferation and Disarmament Initiative (NPDI) Nuclear armament Nuclear warfare Nuclear-weapon-free zone Multi-country zones African Nuclear-Weapon-Free Zone Treaty (Treaty of Pelindaba) Central Asian Nuclear Weapon Free Zone (Treaty of Semei) South Pacific Nuclear Free Zone Treaty (Treaty of Rarotonga) Southeast Asian Nuclear-Weapon-Free Zone Treaty (Treaty of Bangkok) Treaty for the Prohibition of Nuclear Weapons in Latin America and the Caribbean (Treaty of Tlatelolco) Other UN-recognized zones Mongolian Nuclear-Weapons-Free Zone Outer Space Treaty Seabed Arms Control Treaty Nuclear Terrorism Proliferation Security Initiative (PSI) Renovation of the Nuclear Weapon Arsenal of the United States Strategic Arms Limitation Talks (SALT) Strategic Offensive Reductions Treaty (SORT) Treaty on the Prohibition of Nuclear Weapons (also known as the Nuclear Weapon Ban Treaty) Weapon of Mass Destruction (WMD) Zangger Committee References External links Nuclear Non-Proliferation Treaty (PDF) – IAEA UN Office of Disarmament Affairs NPT section Procedural history, related documents and photos on the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) in the Historic Archives of the United Nations Audiovisual Library of International Law Membership/Signatories Annotated
Bibliography on the NPT from the Alsos Digital Library for Nuclear Issues Compilation of speeches and papers relevant to NPT Review Cycle, U.S. Department of State Annotated bibliography for the Nuclear Nonproliferation Treaty from the Alsos Digital Library for Nuclear Issues Arms control treaties Non-proliferation treaties Nuclear proliferation Nuclear weapons policy Treaties of the United States Treaties of the Soviet Union Cold War treaties Treaties concluded in 1968 Treaties entered into force in 1970 Treaties of the Kingdom of Afghanistan Treaties of the People's Socialist Republic of Albania Treaties of Algeria Treaties of Andorra Treaties of Angola Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Armenia Treaties of Australia Treaties of Austria Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Barbados Treaties of Belarus Treaties of Belgium Treaties of Belize Treaties of the Republic of Dahomey Treaties of Bhutan Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of Brazil Treaties of Brunei Treaties of the People's Republic of Bulgaria Treaties of Burkina Faso Treaties of Burundi Treaties of the Khmer Republic Treaties of Cameroon Treaties of Canada Treaties of Cape Verde Treaties of the Central African Republic Treaties of Chad Treaties of Chile Treaties of the People's Republic of China Treaties of Colombia Treaties of the Comoros Treaties of the Republic of the Congo Treaties of the Democratic Republic of the Congo (1964–1971) Treaties of Costa Rica Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Cyprus Treaties of Czechoslovakia Treaties of the Czech Republic Treaties of Denmark Treaties of Djibouti Treaties of Dominica Treaties of the Dominican Republic Treaties of Ecuador Treaties of Egypt Treaties of El Salvador Treaties of Equatorial Guinea Treaties of Eritrea Treaties of Estonia Treaties of the Ethiopian Empire Treaties 
of Fiji Treaties of Finland Treaties of France Treaties of Gabon Treaties of the Gambia Treaties of Georgia (country) Treaties of West Germany Treaties of East Germany Treaties of Ghana Treaties of the Kingdom of Greece Treaties of Grenada Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Guyana Treaties of Haiti Treaties of the Holy See Treaties of Honduras Treaties of the Hungarian People's Republic Treaties of Iceland Treaties of Indonesia Treaties of Pahlavi Iran Treaties of Ba'athist Iraq Treaties of Ireland Treaties of Italy Treaties of Jamaica Treaties of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of Kenya Treaties of Kiribati Treaties of Kuwait Treaties of Kyrgyzstan Treaties of the Kingdom of Laos Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of Liberia Treaties of the Libyan Arab Republic Treaties of Liechtenstein Treaties of Lithuania Treaties of Luxembourg Treaties of North Macedonia Treaties of Madagascar Treaties of Malawi Treaties of Malaysia Treaties of the Maldives Treaties of Mali Treaties of Malta Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of the Federated States of Micronesia Treaties of Moldova Treaties of Monaco Treaties of the Mongolian People's Republic Treaties of Montenegro Treaties of Morocco Treaties of the People's Republic of Mozambique Treaties of Myanmar Treaties of Namibia Treaties of Nauru Treaties of Nepal Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Niger Treaties of Nigeria Treaties of Norway Treaties of Oman Treaties of Palau Treaties of Panama Treaties of Papua New Guinea Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of the Polish People's Republic Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of the Socialist Republic of Romania Treaties of Rwanda Treaties of Saint Kitts and Nevis Treaties of Saint Lucia 
Treaties of Saint Vincent and the Grenadines Treaties of Samoa Treaties of San Marino Treaties of São Tomé and Príncipe Treaties of Saudi Arabia Treaties of Senegal Treaties of Serbia Treaties of Serbia and Montenegro Treaties of Seychelles Treaties of Sierra Leone Treaties of Singapore Treaties of Slovakia Treaties of Slovenia Treaties of the Solomon Islands Treaties of the Somali Democratic Republic Treaties of South Africa Treaties of Spain Treaties of Sri Lanka Treaties of the Democratic Republic of the Sudan Treaties of Suriname Treaties of Eswatini Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Tajikistan Treaties of Tanzania Treaties of Thailand Treaties of East Timor Treaties of Togo Treaties of Tonga Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of Turkmenistan
was expelled from the Politburo in 1929. When the Great Purge began in 1936, some of Bukharin's letters, conversations and tapped phone calls indicated disloyalty. Arrested in February 1937, Bukharin was charged with conspiring to overthrow the Soviet state. After a show trial that alienated many Western communist sympathisers, he was executed in March 1938. Before 1917 Nikolai Bukharin was born on 27 September (9 October, new style), 1888, in Moscow. He was the second son of two schoolteachers, Ivan Gavrilovich Bukharin and Liubov Ivanovna Bukharina. According to Nikolai, his father did not believe in God and, from the time Nikolai was four years old, often asked him to recite poetry for family friends. His childhood is vividly recounted in his largely autobiographical novel How It All Began. Bukharin's political life began at the age of sixteen, with his lifelong friend Ilya Ehrenburg, when they participated in student activities at Moscow University related to the Russian Revolution of 1905. He joined the Russian Social Democratic Labour Party in 1906, becoming a member of the Bolshevik faction. With Grigori Sokolnikov, Bukharin convened the 1907 national youth conference in Moscow, which was later considered the founding of Komsomol. By age twenty, he was a member of the Moscow Committee of the party. The committee was widely infiltrated by the Tsarist secret police, the Okhrana, and as one of its leaders Bukharin quickly became a person of interest to them. During this time, he became closely associated with Valerian Obolensky and Vladimir Smirnov. He also met his future first wife, Nadezhda Mikhailovna Lukina, his cousin and the sister of Nikolai Lukin, who was also a member of the party. They married in 1911, soon after returning from internal exile. In 1911, after a brief imprisonment, Bukharin was exiled to Onega in Arkhangelsk, but he soon escaped to Hanover.
He stayed in Germany for a year before visiting Kraków (now in Poland) in 1912 to meet Vladimir Lenin for the first time. During his exile, he continued his education and wrote several books that established him, while still in his 20s, as a major Bolshevik theorist. His work Imperialism and World Economy influenced Lenin, who freely borrowed from it in his larger and better-known work, Imperialism, the Highest Stage of Capitalism. He and Lenin often had heated disputes on theoretical issues, as well as over Bukharin's closeness to the European Left and his anti-statist tendencies. Bukharin developed an interest in the works of Austrian Marxists and non-Marxist economic theorists, such as Aleksandr Bogdanov, who deviated from Leninist positions. While in Vienna in 1913, he also helped the Georgian Bolshevik Joseph Stalin write an article, "Marxism and the National Question," at Lenin's request. In October 1916, while based in New York City, Bukharin edited the newspaper Novy Mir (New World) with Leon Trotsky and Alexandra Kollontai. When Trotsky arrived in New York in January 1917, Bukharin was the first of the émigrés to greet him. Trotsky's wife recalled that Bukharin greeted them "with a bear hug and immediately began to tell them about a public library which stayed open late at night and which he proposed to show us at once", dragging the tired Trotskys across town "to admire his great discovery". From 1917 to 1923 At the news of the Russian Revolution of February 1917, exiled revolutionaries from around the world began to flock back to the homeland. Trotsky left New York on 27 March 1917, sailing for St. Petersburg. Bukharin left New York in early April and returned to Russia by way of Japan (where he was temporarily detained by local police), arriving in Moscow in early May 1917. Politically, the Bolsheviks in Moscow were a minority in relation to the Mensheviks and the Socialist Revolutionaries.
As more people were attracted by Lenin's promise to bring peace by withdrawing from the Great War, membership in the Bolshevik faction increased dramatically, from 24,000 members in February 1917 to 200,000 in October 1917. Upon his return to Moscow, Bukharin resumed his seat on the Moscow City Committee and also became a member of the Moscow Regional Bureau of the party. To complicate matters further, the Bolsheviks themselves were divided into a right wing and a left wing. The right wing of the Bolsheviks, including Aleksei Rykov and Viktor Nogin, controlled the Moscow Committee, while the younger left-wing Bolsheviks, including Vladimir Smirnov, Valerian Osinsky, Georgii Lomov, Nikolay Yakovlev, Ivan Kizelshtein and Ivan Stukov, were members of the Moscow Regional Bureau. On 10 October 1917, Bukharin was elected to the Central Committee, along with two other Moscow Bolsheviks, Andrei Bubnov and Grigori Sokolnikov. This strong representation on the Central Committee was a direct recognition of the Moscow Bureau's increased importance. Whereas the Bolsheviks had previously trailed the Mensheviks and the Socialist Revolutionaries in Moscow, by September 1917 they were in the majority there. Furthermore, the Moscow Regional Bureau was formally responsible for the party organizations in each of the thirteen central provinces around Moscow, which together accounted for 37% of the whole population of Russia and 20% of the Bolshevik membership. While no one dominated revolutionary politics in Moscow during the October Revolution as Trotsky did in St. Petersburg, Bukharin was certainly the most prominent leader in Moscow. During the October Revolution, Bukharin drafted, introduced, and defended the revolutionary decrees of the Moscow Soviet. Bukharin then represented the Moscow Soviet in its report to the revolutionary government in Petrograd.
Following the October Revolution, Bukharin became the editor of the party's newspaper, Pravda. Bukharin believed passionately in the promise of world revolution. In the Russian turmoil near the end of World War I, when a negotiated peace with the Central Powers was looming, he demanded a continuation of the war, fully expecting to incite all the foreign proletarian classes to arms. Even as he was uncompromising toward Russia's battlefield enemies, he also rejected any fraternization with the capitalist Allied powers: he reportedly wept when he learned of official negotiations for assistance. Bukharin emerged as the leader of the Left Communists in bitter opposition to Lenin's decision to sign the Treaty of Brest-Litovsk. During this wartime power struggle, the Left Communists and the Left Socialist Revolutionaries seriously discussed arresting Lenin in 1918. Bukharin revealed this in a Pravda article in 1924, stating that it had been "a period when the party stood a hair from a split, and the whole country a hair from ruin." After the ratification of the treaty, Bukharin resumed his responsibilities within the party. In March 1919, he became a member of the Comintern's executive committee and a candidate member of the Politburo. During the Civil War period, he published several theoretical economic works, including the popular primer The ABC of Communism (with Yevgeni Preobrazhensky, 1919), and the more academic Economics of the Transitional Period (1920) and Historical Materialism (1921). By 1921, he had changed his position and accepted Lenin's emphasis on the survival and strengthening of the Soviet state as the bastion of the future world revolution. He became the foremost supporter of the New Economic Policy (NEP), to which he was to tie his political fortunes.
Considered by the left communists a retreat from socialist policies, the NEP reintroduced money and allowed private ownership and capitalist practices in agriculture, retail trade, and light industry while the state retained control of heavy industry. Power struggle After Lenin's death in 1924, Bukharin became a full member of the Politburo. In the subsequent power struggle among Leon Trotsky, Grigory Zinoviev, Lev Kamenev and Stalin, Bukharin allied himself with Stalin, who positioned himself as a centrist within the Party and supported the NEP against the Left Opposition, which wanted more rapid industrialization, escalation of class struggle against the kulaks (wealthier peasants), and agitation for world revolution. It was Bukharin who formulated the thesis of "Socialism in One Country" put forth by Stalin in 1924, which argued that socialism (in Marxist theory, the period of transition to communism) could be developed in a single country, even one as underdeveloped as Russia. This new theory stated that socialist gains could be consolidated in a single country, without that country relying on simultaneous successful revolutions across the world. The thesis would become a hallmark of Stalinism. Trotsky, the prime force behind the Left Opposition, was defeated by a triumvirate formed by Stalin, Zinoviev, and Kamenev, with the support of Bukharin. At the Fourteenth Party Congress in December 1925, Stalin openly attacked Kamenev and Zinoviev, revealing that they had asked for his aid in expelling Trotsky from the Party. By 1926, the Stalin-Bukharin alliance had ousted Zinoviev and Kamenev from the Party leadership, and Bukharin enjoyed the highest degree of power during the 1926–1928 period.
He emerged as the leader of the Party's right wing, which included two other Politburo members (Alexei Rykov, Lenin's successor as Chairman of the Council of People's Commissars, and Mikhail Tomsky, head of the trade unions), and he became General Secretary of the Comintern's executive committee in 1926. However, prompted by a grain shortage in 1928, Stalin reversed himself and proposed a program of rapid industrialization and forced collectivization, because he believed that the NEP was not working fast enough. Stalin felt that in the new situation the policies of his former foes (Trotsky, Zinoviev, and Kamenev) were the right ones. Bukharin was worried by the prospect of Stalin's plan, which he feared would lead to "military-feudal exploitation" of the peasantry. Bukharin wanted the Soviet Union to achieve industrialization, but he preferred the more moderate approach of offering the peasants the opportunity to become prosperous, which would lead to greater grain production for sale abroad. Bukharin pressed his views throughout 1928 in meetings of the Politburo and at the Communist Party Congress, insisting that enforced grain requisition would be counterproductive, as War Communism had been a decade earlier. Fall from power Bukharin's support for the continuation of the NEP was not popular with higher Party cadres, and his slogan to the peasants, "Enrich yourselves!", and his proposal to achieve socialism "at a snail's pace" left him vulnerable to attacks, first by Zinoviev and later by Stalin. Stalin attacked Bukharin's views, portraying them as capitalist deviations and declaring that the revolution would be at risk without a strong policy that encouraged rapid industrialization. Having helped Stalin achieve unchecked power against the Left Opposition, Bukharin found himself easily outmaneuvered by Stalin. Yet Bukharin played to Stalin's strength by maintaining the appearance of unity within the Party leadership.
Meanwhile, Stalin used his control of the Party machine to replace Bukharin's supporters in the Rightist power base in Moscow, the trade unions, and the Comintern. Bukharin attempted to gain support from earlier foes, including Kamenev and Zinoviev, who had fallen from power and held mid-level positions within the Communist Party. The details of his meeting with Kamenev, to whom he confided that Stalin was "Genghis Khan" and changed policies to get rid of rivals, were leaked by the Trotskyist press and exposed him to accusations of factionalism. Jules Humbert-Droz, a former ally and friend of Bukharin, wrote that in spring 1929, Bukharin told him that he had formed an alliance with Zinoviev and Kamenev, and that they were planning to use individual terror (assassination) to get rid of Stalin. Eventually, Bukharin lost his position in the Comintern and the editorship of Pravda in April 1929, and he was expelled from the Politburo on 17 November of that year. Bukharin was forced to renounce his views under pressure. He wrote letters to Stalin pleading for forgiveness and rehabilitation, but through wiretaps of Bukharin's private conversations with Stalin's enemies, Stalin knew Bukharin's repentance was insincere. International supporters of Bukharin, Jay Lovestone of the Communist Party USA among them, were also expelled from the Comintern. They formed an international alliance to promote their views, calling it the International Communist Opposition, though it became better known as the Right Opposition, after a term used by the Trotskyist Left Opposition in the Soviet Union to refer to Bukharin and his supporters there. Even after his fall, Bukharin still did some important work for the Party. For example, he helped write the 1936 Soviet constitution. Bukharin believed the constitution would guarantee real democratization. There is some evidence that Bukharin was thinking of evolution toward some kind of two-party system, or at least two-slate elections.
Boris Nikolaevsky reported that Bukharin said: "A second party is necessary. If there is only one electoral list, without opposition, that's equivalent to Nazism." Grigory Tokaev, a Soviet defector and admirer of Bukharin, reported that: "Stalin aimed at one party dictatorship and complete centralisation. Bukharin envisaged several parties and even nationalist parties, and stood for the maximum of decentralisation." Friendship with Osip Mandelstam and Boris Pasternak In the brief period of thaw in 1934–1936, Bukharin was politically rehabilitated and was made editor of Izvestia in 1934. There, he consistently highlighted the dangers of fascist regimes in Europe and the need for "proletarian humanism". One of his first decisions as editor was to invite Boris Pasternak to contribute to the newspaper and sit in on editorial meetings. Pasternak described Bukharin as "a wonderful, historically extraordinary man, but fate has not been kind to him." They first met during the lying-in-state of the Soviet police chief Vyacheslav Menzhinsky in May 1934, when Pasternak was seeking help for his fellow poet Osip Mandelstam, who had been arrested, though at that time neither Pasternak nor Bukharin knew why. Bukharin had acted as Mandelstam's political protector since 1922. According to Mandelstam's wife, Nadezhda, "M. owed him all the pleasant things in his life. His 1928 volume of poetry would never have come out without the active intervention of Bukharin. The journey to Armenia, our apartment and ration cards, contracts for future volumes – all this was arranged by Bukharin." Bukharin wrote to Stalin, pleading clemency for Mandelstam, and appealed personally to the head of the NKVD, Genrikh Yagoda. It was Yagoda who told him about Mandelstam's Stalin Epigram, after which Bukharin refused to have any further contact with Nadezhda Mandelstam, who had lied to him by denying that her husband had written "anything rash", though he continued to befriend Pasternak. Soon after
began in 1936, some of Bukharin's letters, conversations and tapped phone-calls indicated disloyalty. Arrested in February 1937, Bukharin was charged with conspiring to overthrow the Soviet state. After a show trial that alienated many Western communist sympathisers, he was executed in March 1938. Before 1917 Nikolai Bukharin was born on 27 September (9 October, new style), 1888, in Moscow. He was the second son of two schoolteachers, Ivan Gavrilovich Bukharin and Liubov Ivanovna Bukharina. According to Nikolai his father did not believe in God and often asked him to recite poetry for family friends as young as four years old. His childhood is vividly recounted in his mostly autobiographic novel How It All Began. Bukharin's political life began at the age of sixteen, with his lifelong friend Ilya Ehrenburg, when they participated in student activities at Moscow University related to the Russian Revolution of 1905. He joined the Russian Social Democratic Labour Party in 1906, becoming a member of the Bolshevik faction. With Grigori Sokolnikov, Bukharin convened the 1907 national youth conference in Moscow, which was later considered the founding of Komsomol. By age twenty, he was a member of the Moscow Committee of the party. The committee was widely infiltrated by the Tsarist secret police, the Okhrana. As one of its leaders, Bukharin quickly became a person of interest to them. During this time, he became closely associated with Valerian Obolensky and Vladimir Smirnov. He also met his future first wife, Nadezhda Mikhailovna Lukina, his cousin and the sister of Nikolai Lukin, who was also a member of the party. They married in 1911, soon after returning from internal exile. In 1911, after a brief imprisonment, Bukharin was exiled to Onega in Arkhangelsk, but he soon escaped to Hanover. He stayed in Germany for a year before visiting Kraków (now in Poland) in 1912 to meet Vladimir Lenin for the first time. 
During the exile, he continued his education and wrote several books that established him in his 20s as a major Bolshevik theorist. His work, Imperialism and World Economy influenced Lenin, who freely borrowed from it in his larger and better-known work, Imperialism, the Highest Stage of Capitalism. He and Lenin also often had hot disputes on theoretical issues, as well as Bukharin's closeness with the European Left and his anti-statist tendencies. Bukharin developed an interest in the works of Austrian Marxists and non-Marxist economic theorists, such as Aleksandr Bogdanov, who deviated from Leninist positions. Also, while in Vienna in 1913, he helped the Georgian Bolshevik Joseph Stalin write an article, "Marxism and the National Question," at Lenin's request. In October 1916, while based in New York City, Bukharin edited the newspaper Novy Mir (New World) with Leon Trotsky and Alexandra Kollontai. When Trotsky arrived in New York in January 1917, Bukharin was the first of the émigrés to greet him. (Trotsky's wife recalled, "with a bear hug and immediately began to tell them about a public library which stayed open late at night and which he proposed to show us at once" dragging the tired Trotskys across town "to admire his great discovery"). From 1917 to 1923 At the news of the Russian Revolution of February 1917, exiled revolutionaries from around the world began to flock back to the homeland. Trotsky left New York on 27 March 1917, sailing for St. Petersburg. Bukharin left New York in early April and returned to Russia by way of Japan (where he was temporarily detained by local police), arriving in Moscow in early May 1917. Politically, the Bolsheviks in Moscow were a minority in relation to the Mensheviks and Social Democrats. 
As more people began to be attracted to Lenin's promise to bring peace by withdrawing from the Great War, membership in the Bolshevik faction began to increase dramatically — from 24,000 members in February 1917 to 200,000 members in October 1917. Upon his return to Moscow, Bukharin resumed his seat on the Moscow City Committee and also became a member of the Moscow Regional Bureau of the party. To complicate matters further, the Bolsheviks themselves were divided into a right wing and a left wing. The right-wing of the Bolsheviks, including Aleksei Rykov and Viktor Nogin, controlled the Moscow Committee, while the younger left-wing Bolsheviks, including Vladimir Smirnov, Valerian Osinsky, Georgii Lomov, Nikolay Yakovlev, Ivan Kizelshtein and Ivan Stukov, were members of the Moscow Regional Bureau. On 10 October 1917, Bukharin was elected to the Central Committee, along with two other Moscow Bolsheviks: Andrei Bubnov and Grigori Sokolnikov. This strong representation on the Central Committee was a direct recognition of the Moscow Bureau's increased importance. Whereas the Bolsheviks had previously been a minority in Moscow behind the Mensheviks and the Socialist Revolutionaries, by September 1917 the Bolsheviks were in the majority in Moscow. Furthermore, the Moscow Regional Bureau was formally responsible for the party organizations in each of the thirteen central provinces around Moscow — which accounted for 37% of the whole population of Russia and 20% of the Bolshevik membership.While no one dominated revolutionary politics in Moscow during the October Revolution as Trotsky did in St. Petersburg, Bukharin certainly was the most prominent leader in Moscow. During the October Revolution, Bukharin drafted, introduced, and defended the revolutionary decrees of the Moscow Soviet. Bukharin then represented the Moscow Soviet in their report to the revolutionary government in Petrograd. 
Following the October Revolution, Bukharin became the editor of the party's newspaper, Pravda. Bukharin believed passionately in the promise of world revolution. In the Russian turmoil near the end of World War I, when a negotiated peace with the Central Powers was looming, he demanded a continuation of the war, fully expecting to incite all the foreign proletarian classes to arms. Even as he was uncompromising toward Russia's battlefield enemies, he also rejected any fraternization with the capitalist Allied powers: he reportedly wept when he learned of official negotiations for assistance. Bukharin emerged as the leader of the Left Communists in bitter opposition to Lenin's decision to sign the Treaty of Brest-Litovsk. During this wartime power struggle, the Left Communists and the Left Socialist Revolutionaries seriously discussed arresting Lenin in 1918. Bukharin revealed this in a Pravda article in 1924, stating that it had been "a period when the party stood a hair from a split, and the whole country a hair from ruin." After the ratification of the treaty, Bukharin resumed his responsibilities within the party. In March 1919, he became a member of the Comintern's executive committee and a candidate member of the Politburo. During the Civil War period, he published several theoretical economic works, including the popular primer The ABC of Communism (with Yevgeni Preobrazhensky, 1919), the more academic Economics of the Transitional Period (1920), and Historical Materialism (1921). By 1921, he changed his position and accepted Lenin's emphasis on the survival and strengthening of the Soviet state as the bastion of the future world revolution. He became the foremost supporter of the New Economic Policy (NEP), to which he was to tie his political fortunes.
Considered by the Left Communists a retreat from socialist policies, the NEP reintroduced money and allowed private ownership and capitalistic practices in agriculture, retail trade, and light industry, while the state retained control of heavy industry.

Power struggle

After Lenin's death in 1924, Bukharin became a full member of the Politburo. In the subsequent power struggle among Leon Trotsky, Grigory Zinoviev, Lev Kamenev and Stalin, Bukharin allied himself with Stalin, who positioned himself as a centrist of the Party and supported the NEP against the Left Opposition, which wanted more rapid industrialization, escalation of the class struggle against the kulaks (wealthier peasants), and agitation for world revolution. It was Bukharin who formulated the thesis of "Socialism in One Country," put forth by Stalin in 1924, which argued that socialism (in Marxist theory, the period of transition to communism) could be developed in a single country, even one as underdeveloped as Russia. This new theory stated that socialist gains could be consolidated in a single country, without that country relying on simultaneous successful revolutions across the world. The thesis would become a hallmark of Stalinism. Trotsky, the prime force behind the Left Opposition, was defeated by a triumvirate formed by Stalin, Zinoviev, and Kamenev, with the support of Bukharin. At the Fourteenth Party Congress in December 1925, Stalin openly attacked Kamenev and Zinoviev, revealing that they had asked for his aid in expelling Trotsky from the Party. By 1926, the Stalin–Bukharin alliance had ousted Zinoviev and Kamenev from the Party leadership, and Bukharin enjoyed the highest degree of power during the 1926–1928 period.
He emerged as the leader of the Party's right wing, which included two other Politburo members (Alexei Rykov, Lenin's successor as Chairman of the Council of People's Commissars, and Mikhail Tomsky, head of the trade unions), and he became General Secretary of the Comintern's executive committee in 1926. However, prompted by a grain shortage in 1928, Stalin reversed himself and proposed a program of rapid industrialization and forced collectivization, because he believed that the NEP was not working fast enough. Stalin felt that in the new situation the policies of his former foes (Trotsky, Zinoviev, and Kamenev) were the right ones. Bukharin was worried by the prospect of Stalin's plan, which he feared would lead to "military-feudal exploitation" of the peasantry. Bukharin wanted the Soviet Union to achieve industrialization, but he preferred the more moderate approach of offering the peasants the opportunity to become prosperous, which would lead to greater grain production for sale abroad. Bukharin pressed his views throughout 1928 in meetings of the Politburo and at the Communist Party Congress, insisting that enforced grain requisition would be counterproductive, as War Communism had been a decade earlier.

Fall from power

Bukharin's support for the continuation of the NEP was not popular with the higher Party cadres, and his slogan to the peasants, "Enrich yourselves!", and his proposal to achieve socialism "at a snail's pace" left him vulnerable to attacks, first by Zinoviev and later by Stalin. Stalin attacked Bukharin's views, portraying them as capitalist deviations and declaring that the revolution would be at risk without a strong policy that encouraged rapid industrialization. Having helped Stalin achieve unchecked power against the Left Opposition, Bukharin found himself easily outmaneuvered by Stalin. Yet Bukharin played to Stalin's strength by maintaining the appearance of unity within the Party leadership.
Meanwhile, Stalin used his control of the Party machine to replace Bukharin's supporters in the Rightist power base in Moscow, the trade unions, and the Comintern.

Bukharin attempted to gain support from earlier foes, including Kamenev and Zinoviev, who had fallen from power and held mid-level positions within the Communist Party. The details of his meeting with Kamenev, to whom he confided that Stalin was "Genghis Khan" and changed policies to get rid of rivals, were leaked by the Trotskyist press and subjected him to accusations of factionalism. Jules Humbert-Droz, a former ally and friend of Bukharin, wrote that in spring 1929, Bukharin told him that he had formed an alliance with Zinoviev and Kamenev, and that they were planning to use individual terror (assassination) to get rid of Stalin. Eventually, Bukharin lost his position in the Comintern and the editorship of Pravda in April 1929, and he was expelled from the Politburo on 17 November of that year. Bukharin was forced to renounce his views under pressure. He wrote letters to Stalin pleading for forgiveness and rehabilitation, but through wiretaps of Bukharin's private conversations with Stalin's enemies, Stalin knew Bukharin's repentance was insincere. International supporters of Bukharin, Jay Lovestone of the Communist Party USA among them, were also expelled from the Comintern. They formed an international alliance to promote their views, calling it the International Communist Opposition, though it became better known as the Right Opposition, after a term used by the Trotskyist Left Opposition in the Soviet Union to refer to Bukharin and his supporters there. Even after his fall, Bukharin still did some important work for the Party. For example, he helped write the 1936 Soviet constitution, which he believed would guarantee real democratization. There is some evidence that Bukharin was thinking of evolution toward some kind of two-party system, or at least two-slate elections.
Boris Nikolaevsky reported that Bukharin said: "A second party is necessary. If there is only one electoral list, without opposition, that's equivalent to Nazism." Grigory Tokaev, a Soviet defector and admirer of Bukharin, reported: "Stalin aimed at one party dictatorship and complete centralisation. Bukharin envisaged several parties and even nationalist parties, and stood for the maximum of decentralisation."

Friendship with Osip Mandelstam and Boris Pasternak

In the brief period of thaw in 1934–1936, Bukharin was politically rehabilitated and made editor of Izvestia in 1934. There, he consistently highlighted the dangers of fascist regimes in Europe and the need for "proletarian humanism". One of his first decisions as editor was to invite Boris Pasternak to contribute to the newspaper and sit in on editorial meetings. Pasternak described Bukharin as "a wonderful, historically extraordinary man, but fate has not been kind to him." They first met during the lying-in-state of the Soviet police chief Vyacheslav Menzhinsky in May 1934,
German, Dutch, English and Swedish, as well as varieties of Chinese such as Mandarin and Cantonese, have , and . Tamil has a six-fold distinction between , , , , and . The Nuosu language also contrasts six categories of nasals, , , , , and . They are represented in romanisation by m, n, hm, hn, ny, and ng. Nuosu also contrasts nasalised stops and affricates with their voiced, unvoiced, and aspirated versions. Catalan, Occitan, Spanish, and Italian have , , as phonemes, and and as allophones. Nevertheless, among many younger speakers of Rioplatense Spanish, there is no palatal nasal but only a palatalized nasal, , as in English canyon. In Brazilian Portuguese and Angolan Portuguese , written , is typically pronounced as , a nasal palatal approximant, a nasal glide (in Polish, this feature is also possible as an allophone). Semivowels in Portuguese often nasalize before and always after nasal vowels, resulting in and . What would be coda nasal occlusives in other West Iberian languages is only slightly pronounced before dental consonants. Outside this environment the nasality is spread over the vowel or becomes a nasal diphthong (mambembe , outside the final, only in Brazil, and mantém in all Portuguese dialects). The Japanese syllabary kana ん, typically romanized as n and occasionally m, can manifest as one of several different nasal consonants depending on what consonant follows it; this allophone, colloquially written in IPA as , is known as the moraic nasal, per the language's moraic structure. Welsh has a set of voiceless nasals, [m̥], [n̥] and [ŋ̊], which occur predominantly as a result of nasal mutation of their voiced counterparts ([m], [n] and [ŋ]). The Mapos Buang language of New Guinea has a phonemic uvular nasal, [ɴ], which contrasts with a velar nasal. It is extremely rare for a language to have [ɴ] as a phoneme. Yanyuwa is highly unusual in that it has a seven-way distinction between [m], [n̪], [n], [ɳ], [ṉ] (palato-alveolar), [ŋ̟] (front velar), and [ŋ̠] (back velar). This may be the only language in existence that contrasts nasals at seven distinct points of articulation. The term 'nasal occlusive' (or 'nasal stop') is generally abbreviated to nasal. However, there are also nasalized fricatives, nasalized flaps, nasal glides, and nasal vowels, as in French, Portuguese, and Polish. In the IPA, nasal vowels and nasalized consonants are indicated by placing a tilde (~) over the vowel or consonant in question: French sang , Portuguese bom .

Voiceless nasals

A few languages have phonemic voiceless nasal occlusives. Among them are Icelandic, Faroese, Burmese, Jalapa Mazatec, Kildin Sami, Welsh, and Central Alaskan Yup'ik. Iaai of New Caledonia has an unusually large number of them, with , along with a number of voiceless approximants.

Other kinds of nasal consonant

Ladefoged and Maddieson (1996) distinguish purely nasal consonants, the nasal occlusives such as m n ng in which the airflow is purely nasal, from partial nasal consonants such as prenasalized consonants and nasal pre-stopped consonants, which are nasal for only part of their duration, as well as from nasalized consonants, which have simultaneous oral and nasal airflow. In some languages, such as Portuguese, a nasal consonant may have occlusive and non-occlusive allophones. In general, therefore, a nasal consonant may be:
a nasal occlusive, such as English m, n, ng
a nasal approximant, as in nh in some Portuguese dialects
a nasal flap, such as the nasal retroflex lateral flap in Pashto
prenasalized consonants, pre-stopped nasals and post-stopped nasals
nasal clicks such as Zulu nq, nx, nc
other nasalized consonants, such as nasalized fricatives

Languages without nasals

A few languages, perhaps 2%, contain no phonemically distinctive nasals. This led Ferguson (1963) to assume that all languages have at least one primary nasal occlusive. However, there are exceptions.

Lack of phonemic nasals

When a language is claimed to lack nasals altogether, as with several Niger–Congo languages or the Pirahã language of the Amazon, nasal and non-nasal or prenasalized consonants usually alternate allophonically, and it is a theoretical claim on the part of the individual linguist that the nasal is not the basic form of the consonant. In the case of some Niger–Congo languages, for
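The IPA's tilde notation for nasalization maps directly onto the Unicode combining tilde (U+0303), which normalization can fuse with its base letter into a single precomposed code point where one exists. A minimal Python sketch (the nasalize helper is illustrative, not a standard function):

```python
import unicodedata

# IPA marks nasalization with a combining tilde (U+0303) placed over the base symbol.
COMBINING_TILDE = "\u0303"

def nasalize(symbol: str) -> str:
    """Attach a combining tilde to each base character, then NFC-normalize
    so that precomposed forms (e.g. U+00E3 'ã') are used where available."""
    marked = "".join(ch + COMBINING_TILDE for ch in symbol)
    return unicodedata.normalize("NFC", marked)

print(nasalize("a"))  # ã — a single precomposed code point after NFC
print(nasalize("o"))  # õ
```

Not every IPA base symbol has a precomposed nasalized form; for those, NFC leaves the base character plus the combining tilde as two code points, which renders identically in a font with proper diacritic support.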
recorder whose entire electronics section was based on nuvistors, as well as studio-grade microphones from that era, such as the AKG/Norelco C12a, which employed the 7586. It was also later found that, with minor circuit modification, the nuvistor made a sufficient replacement for the obsolete Telefunken VF14 tube used in the Neumann U 47 studio microphone. Tektronix also used nuvistors in several of its high-end oscilloscopes of the 1960s, before replacing them later with JFET transistors. Nuvistors were used in the Ranger program and in the MiG-25 fighter jet, presumably to harden the fighter's avionics against radiation (see radiation hardening). This was discovered following the defection of Viktor Belenko.

Types

7586 - first one released, medium-mu triode
7587 - sharp-cutoff tetrode
8056 - triode for low plate voltages
8058 - triode, with plate cap & grid on shell, for UHF performance
7895 - 7586 with higher mu
2CW4 - same as type 6CW4, but with a 2.1 volt / 450 milliampere heater; used in television receivers with series heater strings
6CW4 - high-mu triode, the most common one in consumer electronics
6DS4 - remote-cutoff 6CW4
6DV4
most influential books about the alter-globalization movement and an international bestseller.

Focus

The book focuses on branding and often makes connections with the anti-globalization movement. Throughout the four parts ("No Space", "No Choice", "No Jobs", and "No Logo"), Klein writes about issues such as sweatshops in the Americas and Asia, culture jamming, corporate censorship, and Reclaim the Streets. She pays special attention to the deeds and misdeeds of Nike, The Gap, McDonald's, Shell, and Microsoft – and of their lawyers, contractors, and advertising agencies. Many of the ideas in Klein's book derive from the influence of the Situationists, an art/political group founded in the late 1950s. However, while globalization appears frequently as a recurring theme, Klein rarely addresses the topic of globalization itself, and when she does, it is usually only indirectly. She goes on to discuss globalization in much greater detail in her book Fences and Windows (2002).

Summary

The book comprises four sections: "No Space", "No Choice", "No Jobs", and "No Logo". The first three deal with the negative effects of brand-oriented corporate activity, while the fourth discusses various methods people have taken in order to fight back.

"No Space"

The book begins by tracing the history of brands. Klein argues that there has been a shift in the usage of branding and gives examples of this shift to "anti-brand" branding. Early examples of brands were often used to put a recognizable face on factory-produced products. These slowly gave way to the idea of selling lifestyles. According to Klein, in response to an economic crash in the late 1980s (due to the Latin American debt crisis, Black Monday (1987), the savings and loan crisis, and the Japanese asset price bubble), corporations began to seriously rethink their approach to marketing and to target the youth demographic, as opposed to the baby boomers, who had previously been considered a much more valuable segment.
The book discusses how brand names such as Nike or Pepsi expanded beyond the mere products which bore their names, and how these names and logos began to appear everywhere. As this happened, the brands' obsession with the youth market drove them to further associate themselves with whatever the youth considered "cool". Along the way, the brands attempted to associate their names with everything from movie stars and athletes to grassroots social movements. Klein argues that large multinational corporations consider the marketing of a brand name to be more important than the actual manufacture of products; this theme recurs in the book, and Klein suggests that it helps explain the shift to production in Third World countries in such industries as clothing, footwear, and computer hardware. This section also looks at ways in which brands have "muscled" their presence into the school system, and how in doing so, they have pipelined advertisements into the schools and used their position to gather information about the students. Klein argues that this is part of a trend toward targeting younger and younger consumers.

"No Choice"

In the second section, Klein discusses how brands use their size and clout to limit the number of choices available to the public – whether through market dominance (e.g., Wal-Mart) or through aggressive invasion of a region (e.g., Starbucks). Klein argues that each company's goal is to become the dominant force in its respective field. Meanwhile, other corporations, such as Sony or Disney, simply open their own chains of stores, preventing the competition from even putting their products on the shelves. This section also discusses the way that corporations merge with one another in order to add to their ubiquity and provide greater control over their image. ABC News, for instance, is allegedly under pressure not to air any stories that are overly critical of Disney, its parent company.
Other chains, such as Wal-Mart, often threaten to pull various products off their shelves, forcing manufacturers and publishers to comply with their demands. This might mean driving down manufacturing costs or changing the artwork or content of products like magazines or albums so they better fit with Wal-Mart's image of family friendliness. Also discussed is the way that corporations abuse copyright laws in order to silence anyone who might attempt to criticize their brand.
"No Jobs"

In this section, the book takes a darker tone and looks at the way in which manufacturing jobs move from local factories to foreign countries, and particularly to places known as export processing zones. Such zones often have no labor laws, leading to dire working conditions. The book then shifts back to North America, where the lack of manufacturing jobs has led to an influx of work in the service sector, where most of the jobs are for minimum wage and offer no benefits. The term "McJob" is introduced, defined as a job with poor compensation that does not keep pace with inflation, inflexible or undesirable hours, little chance of advancement, and high levels of stress. Meanwhile, the public is being sold the perception that these jobs are temporary employment for students and recent graduates, and therefore need not offer living wages or benefits. All of this is set against a backdrop of massive profits and wealth being produced within the corporate sector. The result is a new generation of employees who have come to resent the success of the companies they work for. This resentment, along with rising unemployment, labour abuses abroad, disregard for the environment, and the ever-increasing presence of advertising, breeds a new disdain for corporations.

"No Logo"

The final section of the book discusses various movements that have sprung up during the 1990s. These include Adbusters magazine and the culture-jamming movement, as well as Reclaim the Streets and the McLibel trial. Less radical protests are also discussed, such as the various movements aimed at putting an end to sweatshop labour. Klein concludes by contrasting consumerism and citizenship, opting for the latter. "When I started this book," she writes, "I honestly didn't know whether I was covering marginal atomized scenes of resistance or the birth of a potentially broad-based movement. But as time went on, what I clearly saw was a movement forming before my eyes."
Criticism

After the book's release, Klein was heavily criticized by The Economist, leading to a broadcast debate between Klein and the newspaper's writers, dubbed "No Logo vs. Pro Logo". The 2004 book The Rebel Sell (published as Nation of Rebels in the United States) specifically criticized No Logo, stating that turning the improving quality of life of the working class into a fundamentally anti-market ideology is shallow. Nike published a point-by-point response to the book.

Awards

In 2000, No Logo was short-listed for the Guardian First Book Award. In 2001, the book won the following awards:
The 2001 National Business Book Award
The 2001 French Prix Médiations

Editions

Several imprints of No Logo exist, including a hardcover first edition, a subsequent hardcover edition, and a paperback. A 10th anniversary edition was published
James L. Jones, former National Security Advisor and NATO Supreme Allied Commander Europe --K-- John F. Kelly, former White House chief of staff Kristie Kenney, U.S. ambassador to Thailand and to the Philippines Donald Keyser, State Department China expert accused of espionage Mark Kimmitt, assistant secretary of state for politico-military affairs, the State Department Charles C. Krulak, thirty-first commandant of the United States Marine Corps --L-- Bruce Laingen, U.S. ambassador to Malta, American hostage in Iranian Hostage Crisis Jeannie Leavitt, first U.S.A.F. fighter pilot, general Homer Litzenberg, USMC general --M-- James Mattis, former Secretary of Defense John McCain, former U.S. Senator Robert Macfarlane, National Security Advisor under president Ronald Reagan Thomas McInerney, U.S.A.F lieutenant general Merrill A. McPeak, former U.S.A.F Chief of Staff Godfrey McHugh, former military aide to President John F. Kennedy --N-- Lucien Nedzi, U.S. congressman Richard Norland U.S. ambassador to Libya --O-- Robin Olds, brigadier general, "triple ace" in World War II and Vietnam --P-- Peter Pace, former Chairmen of the Joint Chiefs of Staff Donald Parsons former US Military Attache to Canada Andika Perkasa, chief of staff, Indonesian army Czesław Piątas, chief of general staff, Polish army Colin Powell, former U.S. Secretary of State and Chairman of the Joint Chiefs of Staff Edward Pietrzyk, commander in chief, Polish land forces, two-time Polish ambassador --R-- John M. Richardson, admiral, 31st chief of naval operations Robert C. Richardson III, brigadier general, principal in the laconia incident --S-- Beth Sanner, deputy director of national intelligence Norton A. Schwartz, former U.S. Air Force Chief of Staff Dorothy Shea U.S. ambassador to Lebanon Robert Lee Scott Jr., USAF brigadier general and fighter ace Hugh Shelton, former Chairman of the Joint Chiefs of Staff Abraham Sinkov, U.S. 
cryptanalyst and NSA official Eric Shinseki, former U.S Army Chief of Staff and Secretary of Veterans Affairs Jay B. Silveria, superintendent, United States Air Force Academy James G. Stavridis former Supreme Allied Commander Europe, admiral, U.S. Navy J. Christopher Stevens, the late U.S. Ambassador to Libya Stephanie S. Sullivan, U.S. ambassador to Ghana James C. Swan, United Nations secretary general's special representative for Somalia --W-- Mark Welsh, USAF general Cedric T. Wins, U.S. army general --Y-- Donald Yamamoto, U.S. ambassador to Somalia Stefan Yanev, prime minister of Bulgaria Marie Yovanovitch U.S. ambassador to Ukraine --Z-- Anthony Zinni, former commander, United States Central Command Elmo Zumwalt, former U.S. Chief of Naval Operations James P. Zumwalt, U.S. ambassador to Senegal Roosevelt Hall Roosevelt Hall (built 1903–1907) is a Beaux Arts–style building housing the NWC since its inception in 1946. Designed by the New York architectural firm McKim, Mead & White, it is now designated a National Historic Landmark. It is listed on the National Register of Historic Places. See also Air War College Dwight D. Eisenhower School for National Security and Resource Strategy List of National Historic Landmarks in the District of Columbia Marine Corps War College National Register of Historic Places listings in the District of Columbia Naval War College United States Army War College References External links National War College homepage Military academies of
year, the curriculum was based upon a core standard throughout National Defense University. Because of the NWC's privileged location close to the White House, the Supreme Court, and Capitol Hill, it has been able throughout its history to call upon an extraordinarily well-connected array of speakers to animate its discussions. All lectures at the National War College are conducted under a strict "no quotation nor attribution" policy, which has facilitated discussion on some of the most challenging issues of the day. Commandants Vice Admiral Harry W. Hill (June 1946–1949) Lieutenant General Harold R. Bull (1949–1952) Lieutenant General Harold A. Craig (1952–1955) Vice Admiral Edmund T. Wooldridge (1955–1958) Lieutenant General Thomas L. Harrold (1958–1961) Lieutenant General Francis H. Griswold (1961–1964) Vice Admiral Fitzhugh Lee III (1964–1967) Lieutenant General Andrew Goodpaster (1967–1968) Lieutenant General John E. Kelly (1968–1970) Lieutenant General John B. McPherson (1970–1973) Vice Admiral Marmaduke G. Bayne (1973–1975) Major General James S. Murphy (1975–1976) Major General Harrison Lobdell Jr. (1976–1978) Rear Admiral John C. Barrow (1978–1980) Major General Lee E. Surut (1980–1983) Major General Perry M. Smith (1983–1986) Rear Admiral John F. Addams (1986–1989) Major General Gerald P. Stadler (1989–1992) Major General John C. Fryer Jr. (1992–1995) Rear Admiral Michael McDevitt (1995–1997) Rear Admiral Thomas Marfiak (1997–1999) Rear Admiral Daniel R. Bowler (1999–2000) Major General Reginal G. Clemmons (2000–2003) Rear Admiral Richard D. Jaskot (2003–2006) Major General Teresa Marné Peterson (2006–2007) Major General Robert P. Steel (2007–2010) Rear Admiral Douglas J. McAneny (2011-2013) Brigadier General Guy "Tom" Cosentino (2013-2015) Brigadier General Darren E. Hartford (2015-2017) Brigadier General Chad T. Manske (2017-2019) Rear Admiral Cedric E. Pringle (2019–2021) Brigadier General Jeff H. 
Hurlbert (2021-present) Source for commandants up to 2010. Alumni and influence American graduates of the National War College include a secretary of state and a secretary of defense, national security advisors, a senator and a congressman, and a White House chief of staff, in addition to chairmen of the joint chiefs of staff and numerous other current and former flag officers, general officers, and U.S. ambassadors. No other graduate institution of national security policy in the world has had more impact on the development of the United States' senior cadre of national security leaders. Graduates from other countries include prime ministers of nations as diverse as Iran and Bulgaria, as well as many national military leaders from every continent except Antarctica. Notable graduates include: --A-- John R. Allen, president of the Brookings Institution David W. Allvin, general and vice chief of staff of the United States Air Force Gholam Reza Azhari, prime minister of Iran --B-- Robert H. Barrow, former Commandant of the Marine Corps Edward L. Beach Jr., World War II submarine officer and best-selling novelist William B. Black Jr., deputy director, National Security Agency John Beyrle, U.S. Ambassador to Russia Arnold W. Braswell, retired Air Force general Bernard Brodie, one of the initial nuclear theorists William Brownfield, U.S. Ambassador to Venezuela, Chile, and Colombia John Ray Budner, the late brigadier general, formerly in command of the North American Air Defense Command Combat Operations Center --C-- Richard D. Clarke, U.S. Army general and commander, Special Operations Command Wesley Clark, former NATO Supreme Allied Commander Europe Bernard A. Clarey, U.S. admiral --D-- Raymond G. Davis, assistant commandant of the Marine Corps Eugene Peyton Deatrick, USAF general Martin Dempsey, former Chairman of the Joint Chiefs of Staff R. Scott Dingle, U.S. Army general and 45th surgeon general of the United States Army --F-- John D. Feeley, U.S.
ambassador --G-- Charles A. Gillespie Jr., U.S. ambassador to Colombia Alan L. Gropman, military officer, author, and academic --H-- Eric T. Hill, USAF major general --J-- James
is a small community in the Canadian province of Manitoba. It is located on Manitoba Provincial Highway 5 in the Rural
"Danish") as being in a state of decline and generally indicate that the language remained stronger in Shetland than in Orkney. A source from 1670 states that there are "only three or four parishes" in Orkney where people speak "Noords or rude Danish" and that they do so "chiefly when they are at their own houses". Another from 1701 indicates that there were still a few monoglot "Norse" speakers who were capable of speaking "no other thing," and notes that there were more speakers of the language in Shetland than in Orkney. It was said in 1703 that the people of Shetland generally spoke a Lowland Scots dialect brought to Shetland from the end of the fifteenth century by settlers from Fife and Lothian, but that "many among them retain the ancient Danish Language"; while in 1750 Orkney-born James Mackenzie wrote that Norn was not yet entirely extinct, being "retained by old people," who still spoke it among each other. The last reports of Norn speakers are claimed to be from the 19th century, with some claims of a very limited use up until the early 20th century, but it is more likely that the language was dying out in the late 18th century. The isolated islands of Foula and Unst are variously claimed as the last refuges of the language in Shetland, where there were people "who could repeat sentences in Norn", probably passages from folk songs or poems, as late as 1893. Walter Sutherland from Skaw in Unst, who died about 1850, has been cited as the last native speaker of the Norn language. However, fragments of vocabulary survived the death of the main language and remain to this day, mainly in place-names and terms referring to plants, animals, weather, mood, and fishing vocabulary. Norn had also been a spoken language in Caithness but had probably become extinct there by the 15th century, replaced by Scots. Hence, some scholars also speak about "Caithness Norn", but others avoid this. Even less is known about "Caithness Norn" than about Orkney and Shetland Norn. 
Almost no written Norn has survived, but what little remains includes a version of the Lord's Prayer and a ballad, "Hildina". Michael P. Barnes, professor of Scandinavian Studies at University College London, has published a study, The Norn Language of Orkney and Shetland. Classification Norn is an Indo-European language belonging to the North Germanic branch of the Germanic languages. Together with Faroese, Icelandic and Norwegian, it belongs to the West Scandinavian group, separating it from the East Scandinavian group, consisting of Swedish, Danish and Gutnish. While this classification is based on the differences between the North Germanic languages at the time they split, their present-day characteristics justify another classification, dividing them into Insular Scandinavian and Mainland Scandinavian language groups based on mutual intelligibility. Under this system, Norwegian is grouped together with Danish and Swedish, because the last millennium has seen all three undergo important changes, especially in grammar and lexis, which have set them apart from Faroese and Icelandic. Norn is generally considered to have been fairly similar to Faroese, sharing many phonological and grammatical traits, and might even have been mutually intelligible with it; thus, it can be considered an Insular Scandinavian language. Few written texts remain. It is distinct from the present-day Shetland dialect, which evolved from Middle English. Phonology The phonology of Norn cannot be determined with much precision because of the lack of source material, but the general aspects can be extrapolated from the few written sources that exist. Norn shared many traits with the dialects of southwest Norway. That includes a voicing of to after vowels and (in the Shetland dialect but only partially in the Orkney dialect) a conversion of and ("thing" and "that" respectively) to and respectively.
Morphology Norn grammar had features very similar to the other Scandinavian languages. There were two numbers, three genders and four cases (nominative, accusative, genitive and dative). The two main conjugations of verbs in present and past tense were also present. Like all other North Germanic languages, it used a suffix instead of a prepositioned article to indicate definiteness, as in modern Scandinavian: ("man"); ("the man"). Though it is difficult to be certain of many aspects of Norn grammar, documents indicate that it may have featured subjectless clauses, which were common in the West Scandinavian languages. Sample text The following are Norn, Old Norse and contemporary Scandinavian versions of the Lord's Prayer: Orkney Norn: Shetland Norn: Old West Norse: Faroese: Icelandic: Norwegian (Landsmål 1920, present-day Nynorsk): Swedish: A Shetland "guddick" (riddle) in Norn, which Jakob Jakobsen heard told on Unst, the northernmost island in Shetland, in the 1890s. The same riddle is also known from the Faroe Islands, Norway, and Iceland, and a variation also occurs in England. The answer is a cow: four teats hang, four legs walk, two horns and two ears stand skyward, two eyes show the way to the field and one tail comes shaking (dangling) behind. Modern use Most of the use of Norn/Norse in modern-day Shetland and Orkney is purely ceremonial, and mostly in Old Norse, for example the Shetland motto, ("with law shall land be built"), which is the same motto used by the Icelandic police force and inspired by the old Norwegian Frostathing Law.
Another example of the use of Norse/Norn in the Northern Isles can be found in the names of ferries: NorthLink Ferries has ships named MV Hamnavoe (after the old name for Stromness), and MV Hjaltland (Shetland) and MV Hrossey ("Horse Island", an old name for Mainland, Orkney). The Yell Sound Ferry sails from Ulsta on the island to Toft on the Shetland Mainland. The service is operated by two ferries—Daggri (Norse for "dawn"), launched in 2003 and Dagalien (Norse for "dusk"), launched in 2004. Norn words are still used to describe many of the colour and pattern variations in the native sheep of Shetland and Orkney, which survive as the Shetland and North Ronaldsay breeds. Icelandic uses similar words for many of the same colour variations in Icelandic sheep. There are some enthusiasts who are engaged in developing and disseminating a modern form called Nynorn ("New Norn"), based upon linguistic analysis of the known records and Norse linguistics
co-existed with several pagan cultures (those of the Gens Barbaricina, i.e. "Barbarian People") mainly located in the island's interior. As Byzantine control waned, the Judicates appeared. A small village known as Nugor appears in some medieval documents of the 11th–13th centuries. In the two following centuries it grew to more than 1,000 inhabitants. Nuoro remained a town of average importance under the Aragonese and Spanish domination of Sardinia, until famine and plague struck it in the late 17th century. After the annexation to the Kingdom of Sardinia, the town became the administrative center of the area, obtaining the title of city in 1836. Culture ISRE The Istituto superiore regionale etnografico (ISRE), active in Nuoro since 1972, promotes the study and documentation of the social and cultural life of Sardinia, in its traditional manifestations and its transformations. In addition to managing museums and libraries, it organizes national and international events, including the Sardinia International Ethnographic Film Festival (SIEFF) and the Festival Biennale Italiano dell'Etnografia (ETNU) (Italian Biennial Festival of Ethnography). Museums Sardinian Ethnographic Museum (Museo Etnografico Sardo). Grazia Deledda's Museum (Museo Deleddiano). M.A.N., Museo d'Arte Provincia di Nuoro (Modern Art Museum of the Nuoro Province). National Archeological Museum of Nuoro (Museo Archeologico Nazionale di Nuoro). Museo Ciusa, museum dedicated to Francesco Ciusa and other artists Spazio Ilisso Monuments and historical sites Cattedrale della Madonna della Neve Piazza Sebastiano Satta Chiesa di Nostra Signora delle Grazie Chiesa della Solitudine The Redeemer's statue on Monte Ortobene, a 7-metre-tall bronze by Vincenzo Gerace, installed on 29 August 1901. Nuraghe Ugolio Chiesa di San Carlo, a church built in the 17th century containing a copy of Francesco Ciusa's masterpiece La madre dell'ucciso.
Sas Birghines, Domus de Janas located on Monte Ortobene Sanctuary of the Madonna of Montenero, Monte Ortobene Language Along with Italian, the traditional language spoken in Nuoro is Sardinian, in its Logudorese-Nuorese variety. Food Nuoro is home to the world's rarest pasta, su filindeu. The name means "the threads (or wool) of God" in Sardinian, and the pasta is made exclusively by the women of a single family in the town, the recipe being passed down through generations. Cultural international events Sardinia International Ethnographic Film Festival Government Transport Road Nuoro is served by the SS 131 DCN (Olbia-Abbasanta), the SS 129 (Orosei-Macomer), and the SS 389 (Monti-Lanusei). Bus ARST (Azienda Regionale Sarda Trasporti) provides regular connections to Cagliari, Sassari, Olbia, and several minor centres in the province and the region. Other private operators (including Deplano Autolinee, Turmotravel, Redentours) connect Nuoro to various cities and airports on the island. Rail Nuoro is connected by train to Macomer via the Ferrovie della Sardegna.
Local transportation ATP Nuoro's bus system provides service
Mercedes-Benz Compressor. In addition, the track was opened to the public in the evenings and on weekends, as a one-way toll road. The whole track consisted of 174 bends (prior to 1971 changes), and averaged in width. The fastest time ever around the full Gesamtstrecke was by Louis Chiron, at an average speed of in his Bugatti. In 1929 the full Nürburgring was used for the last time in major racing events, as future Grands Prix would be held only on the Nordschleife. Motorcycles and minor races primarily used the shorter and safer Südschleife. Memorable pre-war races at the circuit featured the talents of early Ringmeister (Ringmasters) such as Rudolf Caracciola, Tazio Nuvolari and Bernd Rosemeyer. 1947–1970: "The Green Hell" After World War II, racing resumed in 1947 and in 1951, the Nordschleife of the Nürburgring again became the main venue for the German Grand Prix as part of the Formula One World Championship (with the exception of 1959, when it was held on the AVUS in Berlin). A new group of Ringmeister arose to dominate the race – Alberto Ascari, Juan Manuel Fangio, Stirling Moss, Jim Clark, John Surtees, Jackie Stewart and Jacky Ickx. On 5 August 1961, during practice for the 1961 German Grand Prix, Phil Hill became the first person to complete a lap of the Nordschleife in under 9 minutes, with a lap of 8 minutes 55.2 seconds (153.4 km/h or 95.3 mph) in the Ferrari 156 "Sharknose" Formula One car. Over half a century later, even the highest performing road cars still have difficulty breaking 8 minutes without a professional race driver or one very familiar with the track. Also, several rounds of the German motorcycle Grand Prix were held, mostly on the Südschleife, but the Hockenheimring and the Solitudering were the main sites for Grand Prix motorcycle racing. In 1953, the ADAC 1000 km Nürburgring race was introduced, an Endurance race and Sports car racing event that counted towards the World Sportscar Championship for decades. 
The 24 Hours Nürburgring for touring car racing was added in 1970. By the late 1960s, the Nordschleife and many other tracks were becoming increasingly dangerous for the latest generation of F1 cars. In 1967, a chicane was added before the start/finish straight, called Hohenrain, in order to reduce speeds at the pit lane entry. This made the track longer. Even this change, however, was not enough to keep Stewart from nicknaming it "The Green Hell" () following his victory in the 1968 German Grand Prix amid a driving rainstorm and thick fog. In 1970, after the fatal crash of Piers Courage at Zandvoort, the F1 drivers decided at the French Grand Prix to boycott the Nürburgring unless major changes were made, as they did at Spa the year before. The changes were not possible on short notice, and the German GP was moved to the Hockenheimring, which had already been modified. 1971–1983: Changes In accordance with the demands of the F1 drivers, the Nordschleife was reconstructed by taking out some bumps, smoothing out some sudden jumps (particularly at Brünnchen), and installing Armco safety barriers. The track was made straighter, following the race line, which reduced the number of corners. The German GP could be hosted at the Nürburgring again, and was for another six years from 1971 to 1976. In 1973 the entrance into the dangerous and bumpy Kallenhard corner was made slower by adding another left-hand corner after the fast Metzgesfeld sweeping corner. Safety was improved again later on by removing the jumps on the long main straight and widening it, and taking away the bushes right next to the track at the main straight, which had made that section of the Nürburgring dangerously narrow. A second series of three more F1 races was held until 1976. 
However, primarily due to its length of over , and the lack of space due to its situation on the sides of the mountains, increasing demands by the F1 drivers and the FIA's CSI commission were too expensive or impossible to meet. For instance, by the 1970s the German Grand Prix required five times the marshals and medical staff of a typical F1 race, something the German organizers were unwilling to provide. Additionally, even with the 1971 modifications it was still possible for cars to become airborne off the track. The Nürburgring was also unsuitable for the burgeoning television market; its vast expanse made it almost impossible to cover a race there effectively. As a result, early in the season it was decided that the 1976 race would be the last to be held on the old circuit. Niki Lauda, the reigning world champion and the only person ever to lap the full Nordschleife in under seven minutes (6:58.6, 1975), proposed to the other drivers that they boycott the circuit in 1976. Lauda was concerned not only about the safety arrangements and the lack of marshals around the circuit; he also did not like the prospect of running the race in another rainstorm, in which some parts of the circuit would typically be wet while others were dry, as indeed they were for that race. The other drivers voted against the idea and the race went ahead. Lauda crashed in his Ferrari coming out of the left-hand kink before Bergwerk after a new magnesium component on his car's rear suspension failed. He was badly burned, as his car was still heavily loaded with fuel on lap 2. Lauda was saved by the combined actions of fellow drivers Arturo Merzario, Guy Edwards, Brett Lunger, and Harald Ertl. The crash also showed that the track's distances were too long for regular fire engines and ambulances, even though the "ONS-Staffel" was equipped with a Porsche 911 rescue car, marked (R).
The old Nürburgring never hosted another F1 race, as the German Grand Prix moved to the Hockenheimring for 1977. The German motorcycle Grand Prix was held for the last time on the old Nürburgring in 1980, also permanently moving to Hockenheim. By its very nature, the Nordschleife was impossible to make safe in its old configuration. It soon became apparent that it would have to be completely overhauled if there was any prospect of Formula One returning there: the Nürburgring's administration and race organizers were not willing to bear the enormous expense of providing the number of marshals needed for a Grand Prix, up to six times the number that most other circuits needed. With this in mind, in 1981 work began on a -long new circuit, which was built on and around the old pit area. At the same time, a bypass shortened the Nordschleife to , and with an additional small pit lane, this version was used for races in 1983, e.g. the 1000km Nürburgring endurance race, while construction work was going on nearby. During qualifying for that race, Stefan Bellof set a lap of 6:11.13 for the Nordschleife in his Porsche 956, or on average. This lap held the all-time record for 35 years (partially because no major racing has taken place there since 1984) until it was surpassed by Timo Bernhard in the Porsche 919 Hybrid Evo, which ran the slightly longer version of the circuit in 5:19.546, averaging , on 29 June 2018. Meanwhile, more run-off areas were added at corners like Aremberg and Brünnchen, where originally there were just embankments protected by Armco barriers. The track surface was made safer in some spots where there had been nasty bumps and jumps. Racing line markers were added to the corners all around the track, and bushes and hedges at the edges of corners were taken out and replaced with Armco and grass.
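The average-speed figures quoted for record laps such as Bellof's follow directly from lap length and lap time. A minimal sketch, assuming a 20.832 km lap length for illustration (the excerpt's own length figures are elided, so this value is an assumption, not taken from the text):

```python
# Average lap speed from lap length and lap time.
# The 20.832 km lap length below is an assumed illustrative value;
# the article's exact figures for each track configuration are not
# given in this excerpt.

def avg_speed_kmh(lap_length_km: float, minutes: int, seconds: float) -> float:
    """Return the average speed in km/h for one lap."""
    total_seconds = minutes * 60 + seconds
    return lap_length_km / total_seconds * 3600

# Bellof's 1983 qualifying lap of 6:11.13, assuming a 20.832 km lap:
print(round(avg_speed_kmh(20.832, 6, 11.13), 1))  # → 202.1
```

The same arithmetic applied to the quoted 8:55.2 lap from 1961 reproduces an average in the region of the 153.4 km/h figure given earlier, which is a useful sanity check on the numbers.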
The former Südschleife had not been modified in 1970–1971 and was abandoned a few years later in favour of the improved Nordschleife. It is now mostly gone (in part due to the construction of the new circuit) or converted to a normal public road, but since 2005 a vintage car event has been hosted on the old track layout, including part of the parking area. 1984: New Grand Prix track The new track was completed in 1984 and named GP-Strecke (: literally, "Grand Prix Course"). It was built to meet the highest safety standards, but in character it was considered a mere shadow of its older sibling. Some fans, who had to sit much farther away from the track, called it Eifelring, Ersatzring, Grünering or similar nicknames, believing it did not deserve to be called Nürburgring. Like many circuits of the time, it offered few overtaking opportunities. Prior to the 2013 German Grand Prix, however, both Mark Webber and Lewis Hamilton said they liked the track. Webber described the layout as "an old school track" before adding, "It’s a beautiful little circuit for us to still drive on so I think all the guys enjoy driving here," while Hamilton said, "It’s a fantastic circuit, one of the classics and it hasn’t lost that feel of an old classic circuit." To celebrate its opening, an exhibition race was held on 12 May. The 1984 Nürburgring Race of Champions featured an array of notable drivers driving identical Mercedes 190E 2.3-16s: the line-up was Elio de Angelis, Jack Brabham (Formula 1 World Champion 1959, 1960, 1966), Phil Hill (1961), Denis Hulme (1967), James Hunt (1976), Alan Jones (1980), Jacques Laffite, Niki Lauda (1975, 1977)*, Stirling Moss, Alain Prost*, Carlos Reutemann, Keke Rosberg (1982), Jody Scheckter (1979), Ayrton Senna*, John Surtees (1964) and John Watson. (Drivers marked with * won the Formula 1 World Championship subsequent to the race.)
Senna won ahead of Lauda, Reutemann, Rosberg, Watson, Hulme and Scheckter; Lauda, who had missed qualifying and had to start from the last row, overtook everyone except Senna. There were nine former and two future Formula 1 World Champions competing, in a field of 20 cars with 16 Formula 1 drivers; the other four were local drivers: Klaus Ludwig, Manfred Schurti, Udo Schütz and Hans Herrmann. Besides other major international events, the Nürburgring saw the brief return of Formula One racing, as the 1984 European Grand Prix was held at the track, followed by the 1985 German Grand Prix. As F1 did not stay, other events became the highlights at the new Nürburgring, including the 1000km Nürburgring, DTM, motorcycle racing, and newer types of events, like truck racing, vintage car racing at the AvD "Oldtimer Grand Prix", and even the "Rock am Ring" concerts. Following the success and first world championship of Michael Schumacher, a second German F1 race was held at the Nürburgring between 1995 and 2006, called the European Grand Prix, or, in 1997 and 1998, the Luxembourg Grand Prix. For 2002, the track was changed by replacing the former "Castrol-chicane" at the end of the start/finish straight with a sharp right-hander (nicknamed "Haug-Hook") in order to create an overtaking opportunity, and a slow Omega-shaped section was inserted on the site of the former kart track. This extended the GP track from , while at the same time the Hockenheimring was shortened from . Both the Nürburgring and the Hockenheimring events had been losing money due to high and rising Formula One license fees charged by Bernie Ecclestone and low attendance due to high ticket prices; starting with the 2007 Formula One season, Hockenheim and the Nürburgring alternated in hosting the German GP.
In Formula One, Ralf Schumacher collided with his teammate Giancarlo Fisichella and his brother at the start of the 1997 race, which was won by Jacques Villeneuve. In 1999, in changing conditions, Johnny Herbert managed to score the only win for the team of former Ringmeister Jackie Stewart. One of the highlights of the 2005 season was Kimi Räikkönen's spectacular exit on the last lap of the race, when his suspension gave way after being rattled lap after lap by a flat-spotted tyre that had not been changed due to the short-lived 'one set of tyres' rule. Prior to the 2007 European Grand Prix, the Audi S (turns 8 and 9) was renamed the Michael Schumacher S after Michael Schumacher. Schumacher had retired from Formula One the year before, but returned in 2010, and in 2011 became the second Formula One driver to drive through a turn named after himself (after Ayrton Senna driving his "S for Senna" at Autódromo José Carlos Pace). Alternation with Hockenheim In 2007, the FIA announced that the Hockenheimring and the Nürburgring would alternate hosting the German Grand Prix, with the Nürburgring hosting in 2007. Due to name-licensing problems, it was held as the European Grand Prix that year. In 2014, the new owners of the Nürburgring were unable to secure a deal to continue hosting the German Grand Prix in odd-numbered years, so the 2015 and 2017 German Grands Prix were cancelled. Return of Formula One In July 2020, it was announced that after seven years the race track would again host an official Formula One Grand Prix, with the event taking place from 9 to 11 October 2020. This race was called the Eifel Grand Prix in honour of the nearby mountain range, meaning the venue has held a Grand Prix under a fourth name, having previously hosted races under the German, European and Luxembourg Grand Prix titles. That race was won by Lewis Hamilton, who equalled Michael Schumacher's record of wins.
Fatal accidents While it is unusual for deaths to occur during sanctioned races, there are many accidents and several deaths each year during public sessions. It is common for the track to be closed several times a day for cleanup, repair, and medical intervention. While track management does not publish any official figures, several regular visitors to the track have used police reports to estimate the number of fatalities as between 3 and 12 in a full year. Jeremy Clarkson noted in Top Gear in 2004 that "over the years this track has claimed over 200 lives". Nordschleife racing today Several touring car series still compete on the Nordschleife, using either only the simple version with its separate small pit lane, or a combined -long track that uses a part of the original modern F1 track (without the Mercedes Arena section, which is often used for support pits) plus its huge pit facilities. Entry-level competition requires a regularity test (GLP) for street-legal cars. Two racing series (RCN/CHC and VLN) compete on 15 Saturdays each year, for several hours. The annual highlight is the 24 Hours Nürburgring weekend, held usually in mid-May, featuring 220 cars – from small cars to Turbo Porsches or factory race cars built by BMW, Opel, Audi, and Mercedes-Benz, over 700 drivers (amateurs and professionals), and up to 290,000 spectators. As of 2015 the World Touring Car Championship holds the FIA WTCC Race of Germany at the Nordschleife as a support category to the 24 Hours. Automotive media outlets and manufacturers use the Nordschleife as a standard to publish their lap times achieved with their production vehicles. BMW Sauber’s Nick Heidfeld made history on 28 April 2007 as the first driver in over thirty years to tackle the Nürburgring Nordschleife track in a contemporary Formula One car. Heidfeld's three laps in an F1.06 were part of festivities celebrating BMW's contribution to motorsport. 
About 45,000 spectators showed up for the main event, the third four-hour VLN race of the season. Conceived largely as a photo opportunity, the laps were not as fast as the car was capable of; BMW instead chose to run the chassis at a particularly high ride height to allow for the Nordschleife's abrupt gradient changes and to limit maximum speeds accordingly. Former F1 driver Hans-Joachim Stuck was injured during the race when he crashed his BMW Z4. As part of the festivities before the 2013 24 Hours Nürburgring race, Michael Schumacher and other Mercedes-Benz drivers took part in a promotional event which saw Schumacher complete a demonstration lap of the Nordschleife at the wheel of a 2011 Mercedes W02. As with Heidfeld's lap, and also partly due to Formula One's strict in-season testing bans, the lap left many motorsport fans underwhelmed. Nordschleife public access Since its opening in 1927, the track has been used by the public for so-called Touristenfahrten: anyone with a road-legal car or motorcycle, as well as tour buses, motor homes, or cars with trailers, can access the Nordschleife. It is open every day of the week, except when races take place, although the track may be closed for weeks during the winter months, depending on weather conditions and maintenance work. Passing on the right is prohibited, and some sections have speed limits; the normal German traffic rules (StVO) apply here as well. The Nürburgring is a popular attraction for many driving enthusiasts and riders from all over the world,
a further attraction. Normal ticket buyers on tourist days cannot quite complete a full lap of the Nordschleife, which bypasses the modern GP-Strecke, as they are required to slow down and pass through a "pit lane" section where toll gates are installed. On busier days, a mobile ticket barrier is installed on the main straight in order to reduce the length of queues at the fixed barriers. This is open to all ticket holders. On rare occasions, it is possible to drive both the Nordschleife and the Grand Prix circuit combined. Drivers interested in lap times often time themselves from the first bridge after the barriers to the last gantry (aka Bridge-to-Gantry or BTG time) before the exit. However, the track's general conditions state that any form of racing, including speed record attempts, is forbidden. The driver's insurance coverage may consequently be voided, leaving the driver fully liable for damage. Normal, non-racing, non-timed driving accidents might be covered by driver's insurance, but it is increasingly common for insurers to insert exclusion clauses that mean drivers and riders on the Nürburgring only have third-party coverage or are not covered at all. Drivers who have crashed into the barriers, suffered mechanical failure or been otherwise required to be towed off track during Touristenfahrten sessions are referred to as having joined the "Bongard Club". This nickname is derived from the name of the company which operates the large yellow recovery flatbed trucks which ferry those unfortunate drivers and their vehicles to the nearest exit. Due to the high volume of traffic, there is an emphasis on quickly clearing and repairing any compromised safety measures so the track can be immediately re-opened for use. 
Additionally, those found responsible for damage to the track or safety barriers are required to pay for repairs, along with the time and cost associated with the personnel and equipment needed to address those damages, making any accident or breakdown a potentially expensive incident. Because it is technically operated as a public toll road, failing to report an accident or instance where track surfaces are affected is considered unlawfully leaving the scene of an accident. This is all part of the rules and regulations which aim to ensure a safe experience for all visitors to the track. Südschleife Public access The entire Nürburgring Gesamtstrecke was open to the public from its initial opening. At several points around the circuit there were access roads and toll points from which drivers and riders could begin or end a drive. The Südschleife had one of these at the bottom of the uphill stretch near Müllenbach. Commercial aspects One of the original purposes of the Nordschleife was as a test track for auto manufacturers, and its demanding layout has traditionally been used as a proving ground. Weekdays are often booked for so-called Industriefahrten for auto makers and the media. With the advent of the Internet, awareness of the Nordschleife has risen in Germany and abroad, in addition to publicity in print media. In 1999, Porsche reported that their new 996 GT3 had lapped the Nürburgring in under eight minutes, and in subsequent years, manufacturers from overseas also showed up to test cars. Some high-performance models are promoted with videotaped laps published on the web, and the claimed lap times generate discussion. Few of these supercars are actually entered in racing where the claims could be backed up. Industry pool For sixteen weeks per year, the industry pool rents exclusive daytime use of the track for automotive development and endurance testing. The industry pool consisted of approximately 30 car manufacturers, associations, and component suppliers.
By 2019, the track was being rented by the industry pool for 18 weeks per year. Television and games The TV series Top Gear has also used the Nordschleife for its challenges, often involving Sabine Schmitz. In addition, during series 17 (summer 2011) of Top Gear, James May was very critical of the ride quality of cars whose development processes included testing on the Nordschleife, saying that cars tested there were ruined by it. Multiple layouts of the Nürburgring have been featured in video games, such as the Gran Turismo series, the Forza Motorsport series, the Need for Speed: Shift series, iRacing and Assetto Corsa. Grand Prix Legends, a historic racing simulator, also included the Nürburgring on its roster of default Grand Prix tracks. Leisure development Other pastimes are hosted at the Nürburgring, such as Rock am Ring, Germany's biggest rock music festival, attracting close to 100,000 rock fans each year since 1985. Since 1978, the Nordschleife has also been the venue of a major running event (Nürburgring-Lauf/Run am Ring). In 2003, a major bicycling event (Rad am Ring) was added and it became the multi-sports event Rad & Run am Ring. In 2009, new commercial areas opened, including a hotel and shopping mall. In the summer of 2009, ETF Ride Systems opened a new interactive dark ride called "Motor Mania" at the racetrack, in collaboration with Lagotronics B.V. The roller coaster "ring°racer" was scheduled to open in 2011 but was significantly delayed by technical issues. It eventually opened on 31 October 2013, only to close on 3 November after just four days of operation. Ownership In 2012, the track was preparing to file for bankruptcy as a result of nearly $500 million in debts and the inability to secure financing. On 1 August 2012, the government of Rheinland-Pfalz guaranteed $312 million to allow the track to meet its debt obligations. In 2013, the Nürburgring was put up for sale for US$165 million (€127.3 million).
The sale process was by sealed-bid auction with an expected completion date of "Late Summer". This meant there was to be a new owner in 2013, unencumbered by the debts of the previous operation, with the circuit expected to return to profitability. On 11 March 2014 it was reported that the Nürburgring had been sold for 77 million euros ($106.8 million). Düsseldorf-based Capricorn Development was the buyer. The company was to take full ownership of the Nürburgring on 1 January 2015. But in October 2014, Viktor Kharitonin, the Russian billionaire chairman of Moscow-based Pharmstandard, bought a majority stake in the Nürburgring. In May 2015, the Nürburgring was set to hold the first Grüne Hölle Rock festival as a replacement for the Rock am Ring festival, but the project did not take place. Grüne Hölle Rock changed its name to Rock im Revier and the event was held in the Schalke area. Nordschleife layout The Nordschleife operates in a clockwise direction, and was formerly known for its abundance of sharp crests, causing fast-moving, firmly sprung racing cars to jump clear off the track surface at many locations. Flugplatz ("airfield") Although by no means the most fearsome, Flugplatz is perhaps the most aptly (although coincidentally) named and widely remembered. The name of this part of the track comes from a small airfield which in the early years was located close to the track in this area. The track features a very short straight that climbs sharply uphill for a short time, then suddenly drops slightly downhill, and this is immediately followed by two very fast right-hand kinks. Chris Irwin's career ended after a massive accident at Flugplatz in a Ford 3L GT sports car in 1968. Manfred Winkelhock flipped his March Formula Two car at the same corner in 1980. This section of the track was renovated in 2016 after an accident in which a Nissan GT-R flew over the fence and killed a spectator.
The Flugplatz is one of the most important parts of the Nürburgring, because after the two very fast right-handers comes what is possibly the fastest part of the track: a downhill straight called Kottenborn, into a very fast curve called Schwedenkreuz (Swedish Cross). Drivers are flat out for some time here. Right before Flugplatz is Quiddelbacher-Höhe (peak, as in "mountain summit"), where the track crosses a bridge over the Bundesstraße 257. Fuchsröhre ("Fox Hole") The Fuchsröhre comes soon after the very fast downhill section that follows the Flugplatz. After negotiating a long right-hand corner called Aremberg (which comes after Schwedenkreuz), the road goes slightly uphill and under a bridge, then plunges downhill; the road switches back left and right, and finding a point of reference for the racing line is difficult. This whole sequence is taken flat out, and then the road climbs sharply uphill. The road then turns left and levels out at the same time; this is one of the many jumps of the Nürburgring where the car goes airborne. This leads to the Adenauer Forst (forest) turns. The Fuchsröhre is one of the fastest and most dangerous parts of the Nürburgring because of the extremely high speeds in such a tight and confined place; this part of the Nürburgring goes right through a forest, and there is only about 7–8 feet of grass separating the track from the Armco barrier, beyond which is a wall of trees. Bergwerk ("Mine") Perhaps the most notorious corner on the long circuit, Bergwerk has been responsible for some serious and sometimes fatal accidents. The tight right-hand corner, coming just after a long, fast section and a left-hand kink on a small crest, was where Carel Godin de Beaufort fatally crashed. The fast kink was also the scene of Niki Lauda's infamous fiery accident during the 1976 German Grand Prix. This left kink is often referred to as the Lauda-Links (Lauda left).
The Bergwerk, along with the Breidscheid/Adenauer Bridge corners before it, is part of a series of corners that can make or break one's lap time around the Nürburgring, because of the fast, lengthy uphill section called Kesselchen (Little Valley) that follows the Bergwerk. Caracciola Karussell ("Carousel") Although it is one of the slower corners on the Nordschleife, the Karussell is perhaps its most famous and one of its most iconic: it is one of two berm-style, banked corners on the track. Soon after the driver has negotiated the long uphill section after Bergwerk and gone through a section called Klostertal (Monastery Valley), the driver turns right through a long hairpin, past an abandoned section called Steilstrecke (Steep Route), and then goes up another hill towards the Karussell. The entrance to the corner is blind, although Juan Manuel Fangio is reputed to have advised a young driver to "aim for the tallest tree," a feature that was also built into the rendering of the circuit in the Gran Turismo 4 and Grand Prix Legends video games. Once the driver has reached the top of the hill, the road becomes sharply banked on one side and level on the other; this banking drops off, rather than climbing up like most bankings on circuits. The sharply banked side has a concrete surface, and there is a foot-wide tarmac strip at the bottom of the banking, where cars can get extra grip through the very rough concrete banking. Cars drop into the concrete banking and stay in the corner (which turns through 210 degrees, much like a hairpin bend) until the road levels out and the concrete surface becomes tarmac again. This corner is very hard on the driver's wrists and hands because of the prolonged bumpy cornering the driver must do while in the Karussell. Usually, cars come out of the top of the end of the banking to hit the apex that comes right after the end of the Karussell.
The combination of a recognisable corner, slow-moving cars, and the variation in viewing angle as cars rotate around the banking means that this is one of the circuit's most popular locations for photographers. It is named after German pre-WWII racing driver Rudolf Caracciola, who reportedly made the corner his own by hooking the inside tires into a drainage ditch to help his car "hug" the curve. As more concrete was uncovered and more competitors copied him, the trend took hold. At a later reconstruction, the corner was remade with real concrete banking, as it remains to this day. Shortly after the Karussell is a steep section, with gradients in excess of 16%, leading to a right-hander called Hohe Acht, which is some 300 m higher in altitude than Breidscheid. Brünnchen ("Small Well") A favourite spectator vantage point, the Brünnchen section is composed of two right-hand corners and a very short straight. The first corner goes sharply downhill and the next, after the very short downhill straight, goes slightly uphill. On public days this is a section of the track where accidents happen frequently, particularly at the blind uphill right-hand corner. Like almost every corner at the Nürburgring, both right-handers are blind. The short straight used to have a steep and sudden drop-off that caused cars to take off, and a bridge that went over a pathway; these were taken out and smoothed over when the circuit was rebuilt in 1970 and 1971. Pflanzgarten ("Planting Garden") and Stefan Bellof S ("Stefan Bellof Esses") The Pflanzgarten, which comes soon after the Brünnchen, is one of the fastest, trickiest and most difficult sections of the Nürburgring. It is full of jumps, including two huge ones, one of which is called Sprunghügel (Jump Hill). This complex section is made up of two distinct parts, and getting the entire Pflanzgarten right is crucial to a good lap time around the Nürburgring.
This section was the scene of Briton Peter Collins's fatal accident during the 1958 German Grand Prix, and of a number of career-ending accidents in Formula One in the 1970s: Britons Mike Hailwood and Ian Ashley were two victims of the Pflanzgarten. Pflanzgarten 1 is made up of a slightly banked, downhill left-hander which then suddenly switches back left, then right. Immediately afterwards, giving the driver almost no time to react (knowledge of this section is key), the road drops away twice: the first jump is only slight, but right after it, somewhat like a staircase, the road drops away very sharply, and this second drop is so sudden that it causes almost all cars to go airborne. Very shortly after the road levels out and the car touches the ground again, the road suddenly goes right very quickly and then right again; this makes up the end of the first Pflanzgarten, a very fast multiple-apex sequence of right-hand corners. The road then goes slightly uphill and through another jump; the road suddenly drops away and levels out and at the same time turns through a flat-out left-hander. Then the road drops away again very suddenly at the second huge jump of the Pflanzgarten, known as the Sprunghügel. The road then goes downhill, quickly levels out, and passes through a flat-out right-hander; this starts the Stefan Bellof S (named because Bellof crashed a Porsche 956 there during the 1983 Nürburgring 1000 km), which was known as Pflanzgarten 2 prior to 2013. The Stefan Bellof S is very tricky because the road quickly switches back left and right; a car is going so fast through here that it is like walking on a tightrope. It is very difficult to find the racing line here because the curves come up so quickly, making it hard to find any point of reference.
Then, after a jump at the end of the switchback section, the track goes through a flat-out, top-gear right-hander and into a short straight that leads into two very fast curves called the Schwalbenschwanz (Swallow's Tail). The room for error on every part of the consistently high-speed Pflanzgarten and the Stefan Bellof S is virtually non-existent (much like on the rest of the track). The road through the Pflanzgarten and the Stefan Bellof S rises, falls and twists unpredictably; knowledge of this section is key to getting through cleanly. Schwalbenschwanz/Kleines Karussell ("Swallow's Tail"/"Little Carousel") The Schwalbenschwanz is a sequence of very fast sweepers located after the Stefan Bellof S. After a short straight, there is a very fast right-hand sweeper that progressively goes uphill, and this leads into a blind left-hander that is a bit slower. The apex is completely blind, and the corner then changes gradient a bit; it goes up then down, which leads into a short straight that ends at the Kleines Karussell. Originally, this part had a bridge that went over a stream and was very bumpy; this bridge was taken out and replaced with a culvert (a large industrial pipe) so that the road could be smoothed over. The Kleines Karussell is similar to its bigger brother, except that it is a 90-degree corner instead of 210 degrees, and is faster and slightly less banked. Once this part of the track is dealt with, the drivers are near the end of the lap, with two more corners to negotiate before the 2.135 km long Döttinger Höhe straight. Südschleife layout The Nürburgring Südschleife (south loop) was a German motor racing circuit built in 1927 at the same time as the world-famous Nürburgring Nordschleife (north loop). The Südschleife and Nordschleife layouts were joined together by the Start und Ziel (start/finish) area, and could therefore be driven as one track. Races were held at the combined layout only until 1931.
The Südschleife was used for the ADAC Eifelrennen from 1928 until 1931 and from 1958 until 1968, as well as for the Eifelpokal and other minor races. The Südschleife was rarely used after the Nordschleife was rebuilt and updated in 1970 and 1971, and was finally destroyed by the building of the current Nürburgring Grand Prix circuit in the early 1980s. Today only small sections of the original track remain. Track description The shared start/finish area of the Nürburgring complex consisted of two back-to-back straights joined together at the southern end by a tight loop. The entrance to the Südschleife lay on the outside edge of this hairpin and was signposted as the road to Bonn. It immediately dropped sharply downhill and under a public road before winding through a heavily-wooded section. Tight corners soon gave way to fast downhill sections with flowing bends until, at the outskirts of the nearby town of Müllenbach, the track turned sharply right northwards and began a long climb up the hill. At the end of this run came a right hairpin turn which led to a long left curve around the bottom of a hill. This led onto the back straight of the start/finish area. At this point it was possible to continue onto the Nordschleife or take two sharp right-hand turns in order to enter the starting straight once again. Photographs of the track in use show that trees and hedges were not cut back in many areas, being allowed to grow right up to the trackside. Although the Nordschleife had very little in the way of run-off areas, the Südschleife seems to have had none at all, which was likely to have been a factor in the choice of circuit for major events. Sections of routes The route sections bore, among others, the following names: Bränkekopf, Aschenschlag, Seifgen, Bocksberg, Müllenbach and Scharfer Kopf.
Stichstraße shortcut In 1938 a small section of new track (the Stichstraße) was laid which allowed drivers nearing the end of the Südschleife to bypass the start/finish straights and take a right turn which led back to the start of the downhill twists, shortening the lap. This layout was used for tourist rides and for testing. Remaining sections The current Grand Prix circuit required the complete destruction of the start/finish area, but at a point partway into the Südschleife a modern public road now follows the route, although the bends have been eased and the vegetation does not come as close to the road as it did when the track was open. This public road continues into the town of Müllenbach but leaves the route of the old track on the outskirts. Nothing remains of the famous corners there. The road up the hill still exists and is sometimes used to allow access to parking areas for the Grand Prix track. The lower sections are no longer maintained. Layout history Current circuit configurations Previous configurations Lap times Lap times recorded on the Nürburgring Nordschleife are published by several manufacturers, and are discussed in print media and online. For lap times from various sources, see Nürburgring lap times. For lap times in official racing events on several track variants, see List of Nordschleife lap times (racing). The lap record on the Südschleife is held by Helmut Kelleners with 2:38.6 minutes, driven with a March 707 in the CanAm run of the 3rd International AvD SCM circuit race on 18 October 1970. The previous record holder was Brian Redman, who achieved 2:47.0 minutes in the Formula 2 race on 21 April 1968 with a Ferrari. Climate The Nürburgring is known for its frequently changing weather.
Niki Lauda's near-fatal accident in 1976 was accompanied by poor weather conditions, and the 2007 Grand Prix also saw an early deluge take several cars out through aquaplaning, with Vitantonio Liuzzi making a lucky escape, hitting
the Northern Hemisphere lasts from the December solstice (typically December 21 UTC) to the March equinox (typically March 20 UTC), while summer lasts from the June solstice through to the September equinox (typically September 23 UTC). The dates vary each year due to the difference between the calendar year and the astronomical year. Within the northern hemisphere, oceanic currents can change the weather patterns that affect many factors within the north coast. Such events include ENSO (El Niño-Southern Oscillation). Trade winds blow from east to west just above the equator. The winds pull surface water with them, creating currents, which flow westward due to the Coriolis effect. The currents then bend to the right, heading north. At about 30 degrees north latitude, a different set of winds, the westerlies, push the currents back to the east, producing a closed clockwise loop. The Northern Hemisphere's surface is 60.7% water, compared with 80.9% water in the case of the Southern Hemisphere, and it contains 67.3% of Earth's land. Europe and North America are entirely on Earth's Northern Hemisphere. Geography and climate The Arctic is a region around the North Pole (90° latitude). Its climate is characterized by cold winters and cool summers. Precipitation mostly comes in the form of snow. Areas inside the Arctic Circle (66°34′ latitude) experience some days in summer when the Sun never sets, and some days during the winter when it never rises. The duration of these phases varies from one day for locations right on the Arctic Circle to several months near the Pole, which is the middle of the Northern Hemisphere. Between the Arctic Circle and the Tropic of Cancer (23°26′ latitude) lies the Northern temperate zone. The changes in these regions between summer and winter are generally mild, rather than extreme hot or cold. However, a temperate climate can have very unpredictable weather.
Tropical regions (between the Tropic of Cancer and the Equator, 0° latitude) are generally hot all year round and tend to experience a rainy season during the summer months, and a dry season during the winter months. In the Northern Hemisphere, objects moving across or above the surface of the Earth tend to turn to the right because of the Coriolis effect. As a result, large-scale horizontal flows of air or water tend to form clockwise-turning gyres. These are best seen in ocean circulation patterns in the North Atlantic and North Pacific oceans. For the same reason, flows of air down toward
themselves, but on the verbs of which the nouns are the subject or direct object. For example, in the sentence "My shirt is lying on the bed", the verb "lies" is used because the subject "my shirt" is a flat, flexible object. In the sentence "My belt is lying on the bed", the verb "lies" is used because the subject "my belt" is a slender, flexible object. Koyukon (Northern Athabaskan) has a more intricate system of classification. Like Navajo, it has classificatory verb stems that classify nouns according to animacy, shape, and consistency. However, in addition to these verb stems, Koyukon verbs have what are called "gender prefixes" that further classify nouns. That is, Koyukon has two different systems that classify nouns: (a) a classificatory verb system and (b) a gender system. To illustrate, the verb stem -tonh is used for enclosed objects. When -tonh is combined with different gender prefixes, it can result in daaltonh, which refers to objects enclosed in boxes, or etltonh, which refers to objects enclosed in bags. Australian Aboriginal languages The Dyirbal language is well known for its system of four noun classes, which tend to be divided along the following semantic lines: The class usually labeled "feminine", for instance, includes the word for fire and nouns relating to fire, as well as all dangerous creatures and phenomena. (This inspired the title of the George Lakoff book Women, Fire, and Dangerous Things.) The Ngangikurrunggurr language has noun classes reserved for canines and hunting weapons. The Anindilyakwa language has a noun class for things that reflect light. The Diyari language distinguishes only between female and other objects. Perhaps the largest number of noun classes in any Australian language is found in Yanyuwa, which has 16 noun classes, including nouns associated with food, trees and abstractions, in addition to separate classes for men and masculine things, and for women and feminine things.
In the men's dialect, the classes for men and for masculine things have simplified to a single class, marked the same way as the women's dialect marker reserved exclusively for men. Basque In Basque there are two classes, animate and inanimate; however, the only difference is in the declension of locative cases (inessive, ablative, allative, terminal allative, and directional allative). For inanimate nouns, the locative case endings are attached directly if the noun is singular, and plural and indefinite number are marked by the suffixes -eta- and -(e)ta-, respectively, before the case ending (this is in contrast to the non-locative cases, which follow a different system of number marking where the indefinite form of the ending is the most basic). For example, the noun etxe "house" has the singular ablative form etxetik "from the house", the plural ablative form etxeetatik "from the houses", and the indefinite ablative form etxetatik (the indefinite form is mainly used with determiners that precede the noun: zenbat etxetatik "from how many houses"). For animate nouns, on the other hand, the locative case endings are attached (with some phonetic adjustments) to the suffix -gan-, which is itself attached to the singular, plural, or indefinite genitive case ending. Alternatively, -gan- may attach to the absolutive case form of the word if it ends in a vowel. For example, the noun ume "child" has the singular ablative form umearengandik or umeagandik "from the child", the plural ablative form umeengandik "from the children", and the indefinite ablative form umerengandik or umegandik (cf. the genitive forms umearen, umeen, and umeren and the absolutive forms umea, umeak, and ume). In the inessive case, the case suffix is replaced entirely by -gan for animate nouns (compare etxean "in/at the house" and umearengan/umeagan "in/at the child"). 
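The animate pattern described above is regular enough to sketch in code: the ablative ending attaches to the suffix -gan-, which in turn attaches to the genitive form. A minimal Python illustration (the function name is ours; the cited forms are those given in the paragraph above):

```python
def animate_ablative(genitive: str) -> str:
    # Per the description above: locative endings attach to -gan-,
    # which attaches to the genitive; the ablative ending -tik gives -gandik.
    return genitive + "gandik"

# Forms cited in the text for ume "child", built from its genitive forms
singular = animate_ablative("umearen")   # "from the child"
plural = animate_ablative("umeen")       # "from the children"
indefinite = animate_ablative("umeren")
```

This captures only the regular concatenation; the text notes phonetic adjustments and an alternative attachment to vowel-final absolutive forms, which a fuller treatment would have to handle.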
Caucasian languages Some members of the Northwest Caucasian family, and almost all of the Northeast Caucasian languages, manifest noun class. In the Northeast Caucasian family, only Lezgian, Udi, and Aghul do not have noun classes. Some languages have only two classes, whereas Bats has eight. The most widespread system, however, has four classes: male, female, animate beings and certain objects, and finally a class for the remaining nouns. The Andi language has a noun class reserved for insects. Among Northwest Caucasian languages, only Abkhaz and Abaza have noun class, making use of a human male/human female/non-human distinction. In all Caucasian languages that manifest class, it is not marked on the noun itself but on the dependent verbs, adjectives, pronouns and prepositions. Atlantic–Congo languages Atlantic–Congo languages can have ten or more noun classes, defined according to non-sexual criteria. Certain nominal classes are reserved for humans. The Fula language has about 26 noun classes (the exact number varies slightly by dialect). According to Steven Pinker, the Kivunjo language has 16 noun classes including classes for precise locations and for general locales, classes for clusters or pairs of objects and classes for the objects that come in pairs or clusters, and classes for abstract qualities. Bantu languages According to Carl Meinhof, the Bantu languages have a total of 22 noun classes called nominal classes (this notion was introduced by W. H. J. Bleek). While no single language is known to express all of them, most of them have at least 10 noun classes. For example, by Meinhof's numbering, Shona has 20 classes, Swahili has 15, Sotho has 18 and Ganda has 17. Additionally, there are polyplural noun classes. A polyplural noun class is a plural class for more than one singular class. For example, Proto-Bantu class 10 contains plurals of class 9 nouns and class 11 nouns, while class 6 contains plurals of class 5 nouns and class 15 nouns. 
Classes 6 and 10 are inherited as polyplural classes by most surviving Bantu languages, but many languages have developed new polyplural classes that are not widely shared by other languages. Specialists in Bantu emphasize that there is a clear difference between genders (such as known from Afro-Asiatic and Indo-European) and nominal classes (such as known from Niger–Congo). Languages with nominal classes divide nouns formally on the basis of hyperonymic meanings. The category of nominal class replaces not only the category of gender, but also the categories of number and case. Critics of Meinhof's approach note that his numbering system of nominal classes counts singular and plural numbers of the same noun as belonging to separate classes. This seems to them to be inconsistent with the way other languages are traditionally considered, where number is orthogonal to gender (according to the critics, a Meinhof-style analysis would give Ancient Greek 9 genders). If one follows broader linguistic tradition and counts singular and plural as belonging to the same class, then Swahili has 8 or 9 noun classes, Sotho has 11 and Ganda has 10. The Meinhof numbering tends to be used in scientific works dealing with comparisons of different Bantu languages. For instance, in Swahili the word rafiki ‘friend’ belongs to class 9 and its "plural form" is marafiki of class 6, even if
person), she (female person), and it (object, abstraction, or animal), and their other inflected forms. Countable and uncountable nouns are distinguished by the choice of many/much. The choice between the relative pronoun who (persons) and which (non-persons) may also be considered a form of agreement with a semantic noun class. A few nouns also exhibit vestigial noun classes, such as stewardess, where the suffix -ess added to steward denotes a female person. This type of noun affixation is not very frequent in English, but quite common in languages which have true grammatical gender, including most of the Indo-European family, to which English belongs. In languages without inflectional noun classes, nouns may still be extensively categorized by independent particles called noun classifiers. Common criteria for noun classes Common criteria that define noun classes include:
animate vs. inanimate (as in Ojibwe)
rational vs. non-rational (as in Tamil)
human vs. non-human
human vs. animal (zoic) vs. inanimate (as in Polish in the masculine)
male vs. other
male human vs. other
masculine vs. feminine
masculine vs. feminine vs. neuter
common vs. neuter
strong vs. weak
augmentative vs. diminutive
countable vs. uncountable
Language families Algonquian languages The Ojibwe language and other members of the Algonquian languages distinguish between animate and inanimate classes. Some sources argue that the distinction is between things which are powerful and things which are not. All living things, as well as sacred things and things connected to the Earth, are considered powerful and belong to the animate class. Still, the assignment is somewhat arbitrary, as "raspberry" is animate, but "strawberry" is inanimate. Athabaskan languages In Navajo (Southern Athabaskan) nouns are classified according to their animacy, shape, and consistency.
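Because class assignment in systems like Ojibwe's is partly lexical rather than fully predictable, a computational treatment typically stores the class per noun instead of deriving it from meaning. A minimal sketch using the Ojibwe examples above, with English glosses standing in for the Ojibwe words (the dictionary and function names are illustrative):

```python
# Animacy must be listed per noun: per the text, "raspberry" is
# animate while "strawberry" is inanimate, despite their similarity.
ANIMACY = {
    "raspberry": "animate",
    "strawberry": "inanimate",
}

def noun_class(noun: str) -> str:
    """Look up a noun's class; unknown nouns have no predictable class."""
    return ANIMACY.get(noun, "unknown")
```

The lookup-table design reflects the linguistic point: a semantic rule alone cannot reproduce the attested assignments.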
Morphologically, however, the distinctions are not expressed on the nouns themselves, but on the verbs of which the nouns are the subject or direct object. For example, in the sentence "My shirt is lying on the bed", one classificatory form of the verb "lie" is used because the subject "my shirt" is a flat, flexible object, whereas in "My belt is lying on the bed" a different form is used because the subject "my belt" is a slender, flexible object. Koyukon (Northern Athabaskan) has a more intricate system of classification. Like Navajo, it has classificatory verb stems that classify nouns according to animacy, shape, and consistency. However, in addition to these verb stems, Koyukon verbs have what are called "gender prefixes" that further classify nouns. That is, Koyukon has two different systems that classify nouns: (a) a classificatory verb system and (b) a gender system. To illustrate, the verb stem -tonh is used for enclosed objects. When -tonh is combined with different gender prefixes, it can result in daaltonh, which refers to objects enclosed in boxes, or etltonh, which refers to objects enclosed in bags. Australian Aboriginal languages The Dyirbal language is well known for its system of four noun classes, which tend to be divided along the following semantic lines: The class usually labeled "feminine", for instance, includes the word for fire and nouns relating to fire, as well as all dangerous creatures and phenomena. (This inspired the title of the George Lakoff book Women, Fire, and Dangerous Things.) The Ngangikurrunggurr language has noun classes reserved for canines and hunting weapons. The Anindilyakwa language has a noun class for things that reflect light. The Diyari language distinguishes only between female and other objects.
Yanyuwa has perhaps the most noun classes of any Australian language, with 16, including classes for nouns associated with food, trees, and abstractions, in addition to separate classes for men and masculine things and for women and feminine things. In the men's dialect, the classes for men and for masculine things have simplified to a single class, marked in the same way as the marker that the women's dialect reserves exclusively for men. Basque In Basque there are two classes, animate and inanimate; however, the only difference is in the declension of locative cases (inessive, ablative, allative, terminal allative, and directional allative). For inanimate nouns, the locative case endings are attached directly if the noun is singular, while plural and indefinite number are marked by the suffixes -eta- and -(e)ta-, respectively, before the case ending (this is in contrast to the non-locative cases, which follow a different system of number marking where the indefinite form of the ending is the most basic). For example, the noun etxe "house" has the singular ablative form etxetik "from the house", the plural ablative form etxeetatik "from the houses", and the indefinite ablative form etxetatik (the indefinite form is mainly used with determiners that precede the noun: zenbat etxetatik "from how many houses"). For animate nouns, on the other hand, the locative case endings are attached (with some phonetic adjustments) to the suffix -gan-, which is itself attached to the singular, plural, or indefinite genitive case ending. Alternatively, -gan- may attach to the absolutive case form of the word if it ends in a vowel. For example, the noun ume "child" has the singular ablative form umearengandik or umeagandik "from the child", the plural ablative form umeengandik "from the children", and the indefinite ablative form umerengandik or umegandik (cf. the genitive forms umearen, umeen, and umeren and the absolutive forms umea, umeak, and ume).
In the inessive case, the case suffix is replaced entirely by -gan for animate nouns (compare etxean "in/at the house" and umearengan/umeagan "in/at the child"). Caucasian languages Some members of the Northwest Caucasian family, and almost all of the Northeast Caucasian languages, manifest noun class. In the Northeast Caucasian family, only Lezgian, Udi, and Aghul do not have noun classes. Some languages have only two classes, whereas Bats has eight. The most widespread system, however, has four classes: male, female, animate beings and certain objects, and finally a class for the remaining nouns. The Andi language has a noun class reserved for insects. Among Northwest Caucasian languages, only Abkhaz and Abaza have noun class, making use of a human male/human female/non-human distinction. In all Caucasian languages that manifest class, it is not marked on the noun itself but on the dependent verbs, adjectives, pronouns and prepositions. Atlantic–Congo languages Atlantic–Congo languages can have ten or more noun classes, defined according to non-sexual criteria. Certain nominal classes are reserved for humans. The Fula language has about 26 noun classes (the exact number varies slightly by dialect). According to Steven Pinker, the Kivunjo language has 16 noun classes including classes for precise locations and for general locales, classes for clusters or pairs of objects and classes for the objects that come in pairs or clusters, and classes for abstract qualities. Bantu languages According to Carl Meinhof, the Bantu languages have a total of 22 noun classes called nominal classes (this notion was introduced by W. H. J. Bleek). While no single language is known to express all of them, most of them have at least 10 noun classes. For example, by Meinhof's numbering,
meters of gas from its Changning-Weiyuan demonstration zone. Town gas Town gas is a flammable gaseous fuel made by the destructive distillation of coal. It contains a variety of calorific gases including hydrogen, carbon monoxide, methane, and other volatile hydrocarbons, together with small quantities of non-calorific gases such as carbon dioxide and nitrogen, and is used in a similar way to natural gas. This is a historical technology and is not usually economically competitive with other sources of fuel gas today. Most town "gashouses" located in the eastern US in the late 19th and early 20th centuries were simple by-product coke ovens that heated bituminous coal in air-tight chambers. The gas driven off from the coal was collected and distributed through networks of pipes to residences and other buildings, where it was used for cooking and lighting. (Gas heating did not come into widespread use until the last half of the 20th century.) The coal tar (or asphalt) that collected in the bottoms of the gashouse ovens was often used for roofing and other waterproofing purposes, and when mixed with sand and gravel was used for paving streets. Crystallized natural gas – hydrates Huge quantities of natural gas (primarily methane) exist in the form of hydrates under sediment on offshore continental shelves and on land in arctic regions that experience permafrost, such as those in Siberia. Hydrates require a combination of high pressure and low temperature to form. In 2010, the cost of extracting natural gas from hydrates was estimated to be as much as twice the cost of extracting it from conventional sources, and even higher from offshore deposits. In 2013, Japan Oil, Gas and Metals National Corporation (JOGMEC) announced that they had recovered commercially relevant quantities of natural gas from methane hydrate. Processing The image below is a schematic block flow diagram of a typical natural gas processing plant.
It shows the various unit processes used to convert raw natural gas into sales gas pipelined to the end user markets. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL): propane, butanes, and natural gasoline (denoted as pentanes +). Depletion As of mid-2020, natural gas production in the US had peaked three times, with current levels exceeding both previous peaks. It reached 24.1 trillion cubic feet per year in 1973, followed by a decline, and reached 24.5 trillion cubic feet in 2001. After a brief drop, withdrawals increased nearly every year from 2006 onward (owing to the shale gas boom), with 2017 production at 33.4 trillion cubic feet and 2019 production at 40.7 trillion cubic feet. After the third peak in December 2019, extraction fell from March 2020 onward due to decreased demand caused by the COVID-19 pandemic in the US. The 2021 global energy crisis was driven by a global surge in demand as the world emerged from the economic recession caused by COVID-19, particularly due to strong energy demand in Asia. Storage and transport Because of its low density, it is not easy to store natural gas or to transport it by vehicle. Natural gas pipelines are impractical across oceans, since the gas needs to be cooled down and compressed, as the friction in the pipeline causes the gas to heat up. Many existing pipelines in America are close to reaching their capacity, prompting some politicians representing northern states to speak of potential shortages. The large trade cost implies that natural gas markets are globally much less integrated, causing significant price differences across countries. In Western Europe, the gas pipeline network is already dense. New pipelines are planned or under construction in Eastern Europe and linking gas fields in Russia, the Near East, and Northern Africa with Western Europe.
Whenever gas is bought or sold at custody transfer points, rules and agreements are made regarding the gas quality. These may include the maximum allowable concentration of , and . Usually sales quality gas that has been treated to remove contamination is traded on a "dry gas" basis and is required to be commercially free from objectionable odours, materials, and dust or other solid or liquid matter, waxes, gums and gum-forming constituents, which might damage or adversely affect operation of equipment downstream of the custody transfer point. LNG carriers transport liquefied natural gas (LNG) across oceans, while tank trucks can carry liquefied or compressed natural gas (CNG) over shorter distances. Sea transport using CNG carrier ships that are now under development may be competitive with LNG transport in specific conditions. Gas is turned into liquid at a liquefaction plant, and is returned to gas form at a regasification plant at the terminal. Shipborne regasification equipment is also used. LNG is the preferred form for long-distance, high-volume transportation of natural gas, whereas pipeline is preferred for transport for distances up to over land and approximately half that distance offshore. CNG is transported at high pressure, typically above . Compressors and decompression equipment are less capital intensive and may be economical in smaller unit sizes than liquefaction/regasification plants. Natural gas trucks and carriers may transport natural gas directly to end-users, or to distribution points such as pipelines. In the past, the natural gas which was recovered in the course of recovering petroleum could not be profitably sold, and was simply burned at the oil field in a process known as flaring. Flaring is now illegal in many countries. Additionally, higher demand in the last 20–30 years has made production of gas associated with oil economically viable.
As a further option, the gas is now sometimes re-injected into the formation for enhanced oil recovery by pressure maintenance as well as miscible or immiscible flooding. Conservation, re-injection, or flaring of natural gas associated with oil is primarily dependent on proximity to markets (pipelines) and regulatory restrictions. Natural gas can be indirectly exported through its absorption in other physical output. A recent study suggests that the expansion of shale gas production in the US has caused prices to drop relative to other countries. This has caused a boom in energy-intensive manufacturing sector exports, whereby the average dollar unit of US manufacturing exports has almost tripled its energy content between 1996 and 2012. A "master gas system" was invented in Saudi Arabia in the late 1970s, ending any necessity for flaring. Satellite and nearby infra-red camera observations, however, show that flaring and venting are still happening in some countries. Natural gas is used to generate electricity and heat for desalination. Similarly, some landfills that also discharge methane gases have been set up to capture the methane and generate electricity. Natural gas is often stored underground inside depleted gas reservoirs from previous gas wells, salt domes, or in tanks as liquefied natural gas. The gas is injected in a time of low demand and extracted when demand picks up. Storage near end users helps to meet volatile demands, but such storage may not always be practicable. With 15 countries accounting for 84% of the worldwide extraction, access to natural gas has become an important issue in international politics, and countries vie for control of pipelines. In the first decade of the 21st century, Gazprom, the state-owned energy company in Russia, engaged in disputes with Ukraine and Belarus over the price of natural gas, which have created concerns that gas deliveries to parts of Europe could be cut off for political reasons.
The United States is preparing to export natural gas. Floating liquefied natural gas Floating liquefied natural gas (FLNG) is an innovative technology designed to enable the development of offshore gas resources that would otherwise remain untapped because environmental or economic factors make them impractical to develop via a land-based LNG operation. FLNG technology also provides a number of environmental and economic advantages: Environmental – Because all processing is done at the gas field, there is no requirement for long pipelines to shore, compression units to pump the gas to shore, dredging and jetty construction, or onshore construction of an LNG processing plant, which significantly reduces the environmental footprint. Avoiding construction also helps preserve marine and coastal environments. In addition, environmental disturbance will be minimised during decommissioning because the facility can easily be disconnected and removed before being refurbished and re-deployed elsewhere. Economic – Where pumping gas to shore can be prohibitively expensive, FLNG makes development economically viable. As a result, it will open up new business opportunities for countries to develop offshore gas fields that would otherwise remain stranded, such as those offshore East Africa. Many gas and oil companies are considering the economic and environmental benefits of floating liquefied natural gas (FLNG). There are currently projects underway to construct five FLNG facilities. Petronas is close to completion on its FLNG-1 at Daewoo Shipbuilding and Marine Engineering and is underway on its FLNG-2 project at Samsung Heavy Industries. Shell's Prelude is due to start production in 2017. The Browse LNG project will commence FEED in 2019. Uses Natural gas is primarily used in the northern hemisphere. North America and Europe are major consumers.
Mid-stream natural gas Often wellhead gases require removal of various hydrocarbon molecules contained within the gas. Some of these gases include heptane, pentane, propane, and other hydrocarbons with molecular weights above methane (CH4). The natural gas transmission lines extend to the natural gas processing plant or unit, which removes the higher-molecular-weight hydrocarbons to produce natural gas with energy content between . The processed natural gas may then be used for residential, commercial, and industrial uses. Natural gas flowing in the distribution lines is called mid-stream natural gas and is often used to power engines which rotate compressors. These compressors are required in the transmission line to pressurize and repressurize the mid-stream natural gas as the gas travels. Typically, natural gas powered engines require natural gas to operate at their nameplate rotational specifications. Several methods are used to remove these higher-molecular-weight gases for use by the natural gas engine. A few technologies are as follows:
Joule–Thomson skid
Cryogenic or chiller system
Chemical enzymology system
Power generation Natural gas is a major source of electricity generation through the use of cogeneration, gas turbines, and steam turbines. Natural gas is also well suited for combined use with renewable energy sources such as wind or solar and for supplying peak-load power stations functioning in tandem with hydroelectric plants. Most grid peaking power plants and some off-grid engine-generators use natural gas. Particularly high efficiencies can be achieved through combining gas turbines with a steam turbine in combined cycle mode. Natural gas burns more cleanly than other fuels, such as oil and coal. Because burning natural gas produces both water and carbon dioxide, it produces less carbon dioxide per unit of energy released than coal, which produces mostly carbon dioxide.
Burning natural gas produces only about half the carbon dioxide per kilowatt-hour (kWh) that coal does. For transportation, burning natural gas produces about 30% less carbon dioxide than burning petroleum. The US Energy Information Administration reports the emissions in million metric tons of carbon dioxide in the world for 2012:
Natural gas: 6,799
Petroleum: 11,695
Coal: 13,787
Coal-fired electric power generation emits around of carbon dioxide for every megawatt-hour (MWh) generated, which is almost double the carbon dioxide released by natural gas-fired generation. Because of this higher carbon efficiency of natural gas generation, as the fuel mix in the United States has changed to reduce coal and increase natural gas generation, carbon dioxide emissions have unexpectedly fallen. Those measured in the first quarter of 2012 were the lowest recorded for the first quarter of any year since 1992. Combined cycle power generation using natural gas is currently the cleanest available source of power using hydrocarbon fuels, and this technology is widely and increasingly used as natural gas can be obtained at increasingly reasonable costs. Fuel cell technology may eventually provide cleaner options for converting natural gas into electricity, but as yet it is not price-competitive. Locally produced electricity and heat using a natural gas powered combined heat and power plant (CHP or cogeneration plant) is considered energy efficient and a rapid way to cut carbon emissions. Natural gas generated power has increased from 740 TWh in 1973 to 5,140 TWh in 2014, generating 22% of the world's total electricity, approximately half as much as is generated with coal. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas. Domestic use Natural gas dispensed in a residential setting can generate temperatures in excess of , making it a powerful domestic cooking and heating fuel.
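The "about half the CO2 of coal" and "30% less than petroleum" claims can be sanity-checked from per-energy emission factors. The factor values below (kg CO2 per million Btu of fuel burned) are rough, commonly cited figures assumed for illustration; they are not taken from this article.

```python
# Approximate CO2 emission factors in kg CO2 per million Btu of fuel burned.
# These specific values are illustrative assumptions, not figures from the text.
EMISSION_FACTORS = {
    "natural_gas": 53.1,
    "coal": 95.0,       # roughly, for bituminous coal
    "petroleum": 73.2,  # roughly, for distillate fuel oil
}

gas = EMISSION_FACTORS["natural_gas"]
ratio_vs_coal = gas / EMISSION_FACTORS["coal"]
reduction_vs_petroleum = 1.0 - gas / EMISSION_FACTORS["petroleum"]

# Per unit of heat energy, gas emits a bit over half what coal does,
# and roughly a quarter to a third less than petroleum.
print(f"gas vs. coal: {ratio_vs_coal:.2f}")
print(f"reduction vs. petroleum: {reduction_vs_petroleum:.0%}")
```

Note that the per-kWh comparison for electricity also depends on plant efficiency (combined cycle gas plants are more efficient than coal plants, widening the gap); this sketch compares fuel carbon content per unit of heat only.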
In much of the developed world it is supplied through pipes to homes, where it is used for many purposes including ranges and ovens, gas-heated clothes dryers, heating/cooling, and central heating. Heaters in homes and other buildings may include boilers, furnaces, and water heaters. Both North America and Europe are major consumers of natural gas. Domestic appliances, furnaces, and boilers use low pressure, usually 6 to 7 inches of water (6" to 7" WC), which is about 0.25 psig. The pressures in the supply lines vary, either utilization pressure (UP, the aforementioned 6" to 7" WC) or elevated pressure (EP), which may be anywhere from 1 psig to 120 psig. Systems using EP have a regulator at the service entrance to step down the pressure to UP. Natural gas piping systems inside buildings are often designed with pressures of 2 to 5 psi (13.8 to 34.5 kPa), and have downstream pressure regulators to reduce pressure as needed. The maximum allowable operating pressure for natural gas piping systems within a building is based on NFPA 54: National Fuel Gas Code, except when approved by the public safety authority or when insurance companies have more stringent requirements. Generally, natural gas system pressures are not allowed to exceed 5 psig (34.5 kPa) unless all of the following conditions are met:
The AHJ (authority having jurisdiction) allows a higher pressure.
The distribution pipe is welded. (Note: some jurisdictions may also require that welded joints be radiographed to verify continuity.)
The pipes are enclosed for protection and placed in a ventilated area that does not allow gas accumulation.
The pipe is installed in areas used for industrial processes, research, storage, or mechanical equipment rooms.
Generally, a maximum liquefied petroleum gas pressure of 20 psig (138 kPa) is allowed, provided the building is used specifically for industrial or research purposes and is constructed in accordance with NFPA 58: Liquefied Petroleum Gas Code, Chapter 7.
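The pressure figures above mix inches of water column, psi, and kPa; the conversions behind the quoted values are straightforward. A minimal sketch using standard conversion constants (values come out rounded as in the text):

```python
# Standard pressure conversion constants.
IN_WC_PER_PSI = 27.68  # inches of water column per psi (at about 60 °F)
KPA_PER_PSI = 6.895    # kilopascals per psi

def in_wc_to_psi(in_wc: float) -> float:
    """Convert inches of water column to psi."""
    return in_wc / IN_WC_PER_PSI

def psi_to_kpa(psi: float) -> float:
    """Convert psi to kilopascals."""
    return psi * KPA_PER_PSI

print(round(in_wc_to_psi(7), 2))                         # 0.25 -> 7" WC is about 0.25 psi
print(round(psi_to_kpa(2), 1), round(psi_to_kpa(5), 1))  # 13.8 34.5 -> the 2-5 psi design range
```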
A seismic earthquake valve operating at a pressure of 55 psig (3.7 bar) can stop the flow of natural gas into the site-wide natural gas distribution piping network (which may run outdoors underground, above building roofs, or within the upper supports of a canopy roof). Seismic earthquake valves are designed for use at a maximum of 60 psig. In Australia, natural gas is transported from gas processing facilities to regulator stations via transmission pipelines. Gas is then regulated down to distribution pressures and distributed around a gas network via gas mains. Small branches from the network, called services, connect individual domestic dwellings, or multi-dwelling buildings, to the network. The networks typically range in pressure from 7 kPa (low pressure) to 515 kPa (high pressure). Gas is then regulated down to 1.1 kPa or 2.75 kPa before being metered and passed to the consumer for domestic use. Natural gas mains are made from a variety of materials: historically cast iron, though more modern mains are made from steel or polyethylene. In the US, compressed natural gas (CNG) is available in some rural areas as an alternative to less expensive and more abundant LPG (liquefied petroleum gas), the dominant source of rural gas. It is used in homes lacking direct connections to public utility provided gas, or to fuel portable grills. Natural gas is also supplied by independent natural gas suppliers through Natural Gas Choice programs throughout the United States. Transportation CNG is a cleaner and also cheaper alternative to other automobile fuels such as gasoline (petrol). By the end of 2014, there were over 20 million natural gas vehicles worldwide, led by Iran (3.5 million), China (3.3 million), Pakistan (2.8 million), Argentina (2.5 million), India (1.8 million), and Brazil (1.8 million). The energy efficiency is generally equal to that of gasoline engines, but lower compared with modern diesel engines.
Gasoline/petrol vehicles converted to run on natural gas suffer because of the low compression ratio of their engines, resulting in a 10–15% reduction in delivered power while running on natural gas. CNG-specific engines, however, use a higher compression ratio due to this fuel's higher octane number of 120–130. Besides use in road vehicles, CNG can also be used in aircraft. Compressed natural gas has been used in some aircraft such as the Aviat Aircraft Husky 200 CNG and the Chromarat VX-1 KittyHawk. LNG is also being used in aircraft. Russian aircraft manufacturer Tupolev, for instance, is running a development program to produce LNG- and hydrogen-powered aircraft. The program has been running since the mid-1970s, and seeks to develop LNG and hydrogen variants of the Tu-204 and Tu-334 passenger aircraft, as well as the Tu-330 cargo aircraft. Depending on the current market price for jet fuel and LNG, fuel for an LNG-powered aircraft could cost 5,000 rubles (US$100) less per tonne, roughly 60% less, with considerable reductions in carbon monoxide, hydrocarbon, and nitrogen oxide emissions. The advantages of liquid methane as a jet engine fuel are that it has more specific energy than the standard kerosene mixes do and that its low temperature can help cool the air which the engine compresses for greater volumetric efficiency, in effect replacing an intercooler. Alternatively, it can be used to lower the temperature of the exhaust. Fertilizers Natural gas is a major feedstock for the production of ammonia, via the Haber process, for use in fertilizer production. Hydrogen Natural gas can be used to produce hydrogen, with one common method being the hydrogen reformer. Hydrogen has many applications: it is a primary feedstock for the chemical industry, a hydrogenating agent, an important commodity for oil refineries, and the fuel source in hydrogen vehicles.
Animal and fish feed Protein-rich animal and fish feed is produced by feeding natural gas to Methylococcus capsulatus bacteria on a commercial scale. Other Natural gas is also used in the manufacture of fabrics, glass, steel, plastics, paint, synthetic oil, and other products. The first step in the valorization of natural gas components is usually the conversion of the alkane into an olefin. The oxidative dehydrogenation of ethane leads to ethylene, which can be converted further to ethylene epoxide, ethylene glycol, acetaldehyde, or other olefins. Propane can be converted to propylene or can be oxidized to acrylic acid and acrylonitrile. Environmental effects Greenhouse gas effect of natural gas release Human activity is responsible for about 60% of all methane emissions and for most of the resulting increase in atmospheric methane. Natural gas is intentionally released or is otherwise known to leak during the extraction, storage, transportation, and distribution of fossil fuels. Globally, methane accounts for an estimated 33% of anthropogenic greenhouse gas warming. The decomposition of municipal solid waste (a source of landfill gas) and wastewater account for an additional 18% of such emissions. These estimates include substantial uncertainties, which should be reduced in the near future with improved satellite measurements, such as those planned for MethaneSAT. After release to the atmosphere, methane is removed by gradual oxidation to carbon dioxide and water by hydroxyl radicals (OH) formed in the troposphere or stratosphere, giving the overall chemical reaction CH4 + 2 O2 → CO2 + 2 H2O. While the lifetime of atmospheric methane is relatively short when compared to carbon dioxide, with a half-life of about 7 years, it is more efficient at trapping heat in the atmosphere, so that a given quantity of methane has 84 times the global-warming potential of carbon dioxide over a 20-year period and 28 times over a 100-year period.
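The 7-year half-life implies that a methane pulse decays quickly compared with CO2, which is why its global-warming potential is so much higher over 20 years (84x) than over 100 years (28x). A small sketch treating the quoted half-life as simple exponential decay (a simplification: atmospheric lifetimes are usually quoted as perturbation lifetimes rather than strict half-lives):

```python
HALF_LIFE_YEARS = 7.0  # half-life of atmospheric methane quoted in the text

def fraction_remaining(years: float) -> float:
    """Fraction of an initial methane pulse still airborne after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# Most of a pulse is gone within a few decades, so methane's warming
# effect is front-loaded relative to long-lived CO2.
for t in (7, 20, 100):
    print(t, fraction_remaining(t))
```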
Natural gas is thus a potent greenhouse gas due to the strong radiative forcing of methane in the short term, and the continuing effects of carbon dioxide in the longer term. Targeted efforts to reduce warming quickly by reducing anthropogenic methane emissions are a climate change mitigation strategy supported by the Global Methane Initiative. Greenhouse
gas mixture consisting of methane and commonly including various amounts of other higher alkanes, and sometimes a small percentage of carbon dioxide, nitrogen, hydrogen sulfide, or helium. Natural gas is colorless and odorless, so an odorant with a sulfur smell (similar to rotten eggs) is usually added for early detection of leaks. Natural gas is formed when layers of decomposing plant and animal matter are exposed to intense heat and pressure under the surface of the Earth over millions of years. The energy that the plants originally obtained from the sun is stored in the form of chemical bonds in the gas. Natural gas is a fossil fuel. Natural gas is a non-renewable hydrocarbon used as a source of energy for heating, cooking, and electricity generation. It is also used as a fuel for vehicles and as a chemical feedstock in the manufacture of plastics and other commercially important organic chemicals. The extraction and consumption of natural gas is a major and growing driver of climate change. It is a potent greenhouse gas itself when released into the atmosphere, and creates carbon dioxide when burned. Natural gas can be efficiently burned to generate heat and electricity, emitting less waste and fewer toxins at the point of use relative to other fossil and biomass fuels. However, gas venting and flaring, along with unintended fugitive emissions throughout the supply chain, can result in a similar carbon footprint overall. Natural gas is found in deep underground rock formations or associated with other hydrocarbon reservoirs in coal beds and as methane clathrates. Petroleum is another fossil fuel found close to, and often together with, natural gas. Most natural gas was created over time by two mechanisms: biogenic and thermogenic. Biogenic gas is created by methanogenic organisms in marshes, bogs, landfills, and shallow sediments. Deeper in the earth, at greater temperature and pressure, thermogenic gas is created from buried organic material.
In petroleum production, gas is sometimes burned as flare gas. Before natural gas can be used as a fuel, most, but not all, must be processed to remove impurities, including water, to meet the specifications of marketable natural gas. The by-products of this processing include ethane, propane, butanes, pentanes, and higher molecular weight hydrocarbons, hydrogen sulfide (which may be converted into pure sulfur), carbon dioxide, water vapor, and sometimes helium and nitrogen. Natural gas is sometimes informally referred to simply as "gas", especially when it is being compared to other energy sources, such as oil or coal. However, it is not to be confused with gasoline, which in North America is also often shortened in colloquial usage to "gas". History Natural gas was discovered accidentally in ancient China as a result of drilling for brines. Natural gas was first used by the Chinese in about 500 BC (possibly even 1000 BC). They discovered a way to transport gas seeping from the ground in crude pipelines of bamboo to where it was used to boil salt water to extract the salt in the Ziliujing District of Sichuan. The discovery and identification of natural gas in the Americas happened in 1626. In 1821, William Hart successfully dug the first natural gas well at Fredonia, New York, United States, which led to the formation of the Fredonia Gas Light Company. The city of Philadelphia created the first municipally owned natural gas distribution venture in 1836. By 2009, 66 000 km3 (16,000 cu. mi.) (or 8%) had been used out of the total 850 000 km3 (200,000 cu. mi.) of estimated remaining recoverable reserves of natural gas. Based on an estimated 2015 world consumption rate of about 3400 km3 (815 cu. mi.) of gas per year, the total estimated remaining economically recoverable reserves of natural gas would last 250 years at current consumption rates.
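The reserve-lifetime arithmetic can be sketched directly from the quoted figures (850,000 km3 estimated recoverable, consumed at roughly 3,400 km3 per year), along with the effect of steady consumption growth; the growth rates below are illustrative assumptions, not figures from the article.

```python
import math

# Figures quoted in the text (cubic kilometres).
TOTAL_RECOVERABLE = 850_000
CONSUMPTION_PER_YEAR = 3_400  # approximate 2015 world consumption rate

# At a constant consumption rate, lifetime is a simple division; this
# reproduces the article's "250 years" figure.
flat_years = TOTAL_RECOVERABLE / CONSUMPTION_PER_YEAR
print(flat_years)  # 250.0

# With consumption growing by g per year, cumulative use after T years is
# C * ((1 + g)**T - 1) / g. Setting that equal to the reserves and solving
# for T shows how sharply even modest growth shortens the horizon.
def years_until_exhaustion(reserves: float, c0: float, g: float) -> float:
    return math.log(1 + g * reserves / c0) / math.log(1 + g)

for g in (0.02, 0.03):
    t = years_until_exhaustion(TOTAL_RECOVERABLE, CONSUMPTION_PER_YEAR, g)
    print(f"{g:.0%} annual growth: exhausted in about {t:.0f} years")
```

Under 2–3% annual growth this gives roughly 72–90 years, the same ballpark as the 80-to-100-year range the article goes on to cite.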
An annual increase in usage of 2–3% could result in currently recoverable reserves lasting significantly less, perhaps as few as 80 to 100 years. Sources Natural gas In the 19th century, natural gas was primarily obtained as a by-product of producing oil. The small, light gas carbon chains came out of solution as the extracted fluids underwent pressure reduction from the reservoir to the surface, similar to uncapping a soft drink bottle where the carbon dioxide effervesces. The gas was often viewed as a by-product, a hazard, and a disposal problem in active oil fields. The large volumes produced could not be utilized until relatively expensive pipeline and storage facilities were constructed to deliver the gas to consumer markets. Until the early part of the 20th century, most natural gas associated with oil was either simply released or burned off at oil fields. Gas venting and production flaring are still practised in modern times, but efforts are ongoing around the world to retire them, and to replace them with other commercially viable and useful alternatives. Unwanted gas (or stranded gas without a market) is often returned to the reservoir with 'injection' wells while awaiting a possible future market or to re-pressurize the formation, which can enhance oil extraction rates from other wells. In regions with a high natural gas demand (such as the US), pipelines are constructed when it is economically feasible to transport gas from a wellsite to an end consumer. In addition to transporting gas via pipelines for use in power generation, other end uses for natural gas include export as liquefied natural gas (LNG) or conversion of natural gas into other liquid products via gas to liquids (GTL) technologies. GTL technologies can convert natural gas into liquid products such as gasoline, diesel, or jet fuel. A variety of GTL technologies have been developed, including Fischer–Tropsch (F–T), methanol to gasoline (MTG) and syngas to gasoline plus (STG+).
F–T produces a synthetic crude that can be further refined into finished products, while MTG can produce synthetic gasoline from natural gas. STG+ can produce drop-in gasoline, diesel, jet fuel and aromatic chemicals directly from natural gas via a single-loop process. In 2011, Royal Dutch Shell's per day F–T plant went into operation in Qatar. Natural gas can be "associated" (found in oil fields), or "non-associated" (isolated in natural gas fields), and is also found in coal beds (as coalbed methane). It sometimes contains a significant amount of ethane, propane, butane, and pentane—heavier hydrocarbons removed for commercial use prior to the methane being sold as a consumer fuel or chemical plant feedstock. Non-hydrocarbons such as carbon dioxide, nitrogen, helium (rarely), and hydrogen sulfide must also be removed before the natural gas can be transported. Natural gas extracted from oil wells is called casinghead gas (whether or not truly produced up the annulus and through a casinghead outlet) or associated gas. The natural gas industry is extracting an increasing quantity of gas from challenging resource types: sour gas, tight gas, shale gas, and coalbed methane. There is some disagreement on which country has the largest proven gas reserves. Sources that consider that Russia has by far the largest proven reserves include the US CIA (47 600 km3), the US Energy Information Administration (47 800 km3), and OPEC (48 700 km3). However, BP credits Russia with only 32 900 km3, which would place it in second place, slightly behind Iran (33 100 to 33 800 km3, depending on the source). With Gazprom, Russia is frequently the world's largest natural gas extractor. Major proven resources (in cubic kilometers) are world 187 300 (2013), Iran 33 600 (2013), Russia 32 900 (2013), Qatar 25 100 (2013), Turkmenistan 17 500 (2013) and the United States 8500 (2013). 
It is estimated that there are about 900 000 km3 of "unconventional" gas such as shale gas, of which 180 000 km3 may be recoverable. Many studies from MIT, Black & Veatch and the DOE predict that natural gas will account for a larger portion of electricity generation and heat in the future. The world's largest gas field is the offshore South Pars / North Dome Gas-Condensate field, shared between Iran and Qatar. It is estimated to have of natural gas and of natural gas condensates. Because natural gas is not a pure product, as the reservoir pressure drops when non-associated gas is extracted from a field under supercritical (pressure/temperature) conditions, the higher molecular weight components may partially condense upon isothermal depressurization—an effect called retrograde condensation. The liquid thus formed may become trapped as the pores of the gas reservoir are depleted. One method of dealing with this problem is to re-inject dried gas free of condensate to maintain the underground pressure and to allow re-evaporation and extraction of condensates. More frequently, the liquid condenses at the surface, and one of the tasks of the gas plant is to collect this condensate. The resulting liquid is called natural gas liquid (NGL) and has commercial value. Shale gas Shale gas is natural gas produced from shale. Because shale has a matrix permeability too low to allow gas to flow in economical quantities, shale gas wells depend on fractures to allow the gas to flow. Early shale gas wells depended on natural fractures through which gas flowed; almost all shale gas wells today require fractures artificially created by hydraulic fracturing. Since 2000, shale gas has become a major source of natural gas in the United States and Canada. Because of increased shale gas production, the United States was by 2014 the number one natural gas producer in the world.
The production of shale gas in the United States has been described as a "shale gas revolution" and as "one of the landmark events in the 21st century." Following the increased production in the United States, shale gas exploration is beginning in countries such as Poland, China, and South Africa. Chinese geologists have identified the Sichuan Basin as a promising target for shale gas drilling, because of the similarity of its shales to those that have proven productive in the United States. Production from the Wei 201 well is 1×104–2×104 m3 per day. In late 2020, China National Petroleum Corporation claimed daily production of 20 million cubic meters of gas from its Changning-Weiyuan demonstration zone. Town gas Town gas is a flammable gaseous fuel made by the destructive distillation of coal. It contains a variety of calorific gases including hydrogen, carbon monoxide, methane, and other volatile hydrocarbons, together with small quantities of non-calorific gases such as carbon dioxide and nitrogen, and is used in a similar way to natural gas. This is a historical technology and is not usually economically competitive with other sources of fuel gas today. Most town "gashouses" located in the eastern US in the late 19th and early 20th centuries were simple by-product coke ovens that heated bituminous coal in air-tight chambers. The gas driven off from the coal was collected and distributed through networks of pipes to residences and other buildings, where it was used for cooking and lighting. (Gas heating did not come into widespread use until the last half of the 20th century.) The coal tar (or asphalt) that collected in the bottoms of the gashouse ovens was often used for roofing and other waterproofing purposes, and when mixed with sand and gravel was used for paving streets.
Crystallized natural gas – hydrates Huge quantities of natural gas (primarily methane) exist in the form of hydrates under sediment on offshore continental shelves and on land in arctic regions that experience permafrost, such as those in Siberia. Hydrates require a combination of high pressure and low temperature to form. In 2010, the cost of extracting natural gas from crystallized natural gas was estimated to be as much as twice the cost of extracting natural gas from conventional sources, and even higher from offshore deposits. In 2013, the Japan Oil, Gas and Metals National Corporation (JOGMEC) announced that it had recovered commercially relevant quantities of natural gas from methane hydrate. Processing The image below is a schematic block flow diagram of a typical natural gas processing plant. It shows the various unit processes used to convert raw natural gas into sales gas pipelined to the end user markets. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL) propane, butanes and natural gasoline (denoted as pentanes +). Depletion As of mid-2020, natural gas production in the US had peaked three times, with current levels exceeding both previous peaks. It reached 24.1 trillion cubic feet per year in 1973, followed by a decline, and reached 24.5 trillion cubic feet in 2001. After a brief drop, withdrawals increased nearly every year from 2006 onward (owing to the shale gas boom), with 2017 production at 33.4 trillion cubic feet and 2019 production at 40.7 trillion cubic feet. After the third peak in December 2019, extraction fell from March 2020 onward owing to decreased demand caused by the COVID-19 pandemic in the US. The 2021 global energy crisis was driven by a global surge in demand as the world emerged from the economic recession caused by COVID-19, particularly due to strong energy demand in Asia.
Storage and transport Because of its low density, it is not easy to store natural gas or to transport it by vehicle. Natural gas pipelines are impractical across oceans, since the gas needs to be cooled down and compressed, as the friction in the pipeline causes the gas to heat up. Many existing pipelines in America are close to reaching their capacity, prompting some politicians representing northern states to speak of potential shortages. The large trade cost implies that natural gas markets are globally much less integrated, causing significant price differences across countries. In Western Europe, the gas pipeline network is already dense. New pipelines are planned or under construction in Eastern Europe and between gas fields in Russia, the Near East and Northern Africa, and Western Europe. Whenever gas is bought or sold at custody transfer points, rules and agreements are made regarding the gas quality. These may include the maximum allowable concentration of , and . Usually sales quality gas that has been treated to remove contamination is traded on a "dry gas" basis and is required to be commercially free from objectionable odours, materials, and dust or other solid or liquid matter, waxes, gums and gum forming constituents, which might damage or adversely affect operation of equipment downstream of the custody transfer point. LNG carriers transport liquefied natural gas (LNG) across oceans, while tank trucks can carry liquefied or compressed natural gas (CNG) over shorter distances. Sea transport using CNG carrier ships that are now under development may be competitive with LNG transport in specific conditions. Gas is turned into liquid at a liquefaction plant, and is returned to gas form at a regasification plant at the terminal. Shipborne regasification equipment is also used.
LNG is the preferred form for long distance, high volume transportation of natural gas, whereas pipeline is preferred for transport for distances up to over land and approximately half that distance offshore. CNG is transported at high pressure, typically above . Compressors and decompression equipment are less capital intensive and may be economical in smaller unit sizes than liquefaction/regasification plants. Natural gas trucks and carriers may transport natural gas directly to end-users, or to distribution points such as pipelines. In the past, the natural gas which was recovered in the course of recovering petroleum could not be profitably sold, and was simply burned at the oil field in a process known as flaring. Flaring is now illegal in many countries. Additionally, higher demand in the last 20–30 years has made production of gas associated with oil economically viable. As a further option, the gas is now sometimes re-injected into the formation for enhanced oil recovery by pressure maintenance as well as miscible or immiscible flooding. Conservation, re-injection, or flaring of natural gas associated with oil is primarily dependent on proximity to markets (pipelines), and regulatory restrictions. Natural gas can be indirectly exported through the absorption in other physical output. A recent study suggests that the expansion of shale gas production in the US has caused prices to drop relative to other countries. This has caused a boom in energy intensive manufacturing sector exports, whereby the average dollar unit
the Earth's crust. made up a larger share of uranium on earth in the geological past due to the different half life of the isotopes and , the former decaying almost an order of magnitude faster than the latter. Kuroda's prediction was verified with the discovery of evidence of natural self-sustaining nuclear chain reactions in the past at Oklo in Gabon in September 1972. To sustain a nuclear fission chain reaction at present isotope ratios in natural uranium on earth would require the presence of a neutron moderator like heavy water or high purity carbon (e.g. graphite) in the absence of neutron poisons, which is even more unlikely to arise by natural geological processes than the conditions at Oklo some two billion years ago. Fission chain reaction Fission chain reactions occur because of interactions between neutrons and fissile isotopes (such as 235U). The chain reaction requires both the release of neutrons from fissile isotopes undergoing nuclear fission and the subsequent absorption of some of these neutrons in fissile isotopes. When an atom undergoes nuclear fission, a few neutrons (the exact number depends on uncontrollable and unmeasurable factors; the expected number depends on several factors, usually between 2.5 and 3.0) are ejected from the reaction. These free neutrons will then interact with the surrounding medium, and if more fissile fuel is present, some may be absorbed and cause more fissions. Thus, the cycle repeats to give a reaction that is self-sustaining. Nuclear power plants operate by precisely controlling the rate at which nuclear reactions occur. Nuclear weapons, on the other hand, are specifically engineered to produce a reaction that is so fast and intense it cannot be controlled after it has started. When properly designed, this uncontrolled reaction will lead to an explosive energy release. 
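The difference between a controlled reactor and an uncontrolled, weapon-style reaction described above comes down to the effective multiplication factor k: the average number of neutrons per fission that go on to cause another fission. A toy sketch in Python, with purely illustrative values:

```python
# Toy neutron-population model: n_{i+1} = k * n_i, where k is the effective
# multiplication factor (neutrons per fission that cause a further fission).
def population(n0, k, generations):
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k)
    return pops

sub = population(1000, 0.95, 10)   # k < 1: the chain dies out
crit = population(1000, 1.0, 10)   # k = 1: steady state (a power reactor's goal)
sup = population(1000, 1.05, 10)   # k > 1: exponential growth (uncontrolled)
print(sub[-1], crit[-1], sup[-1])
```

The three cases show why precise control of k near 1 sustains steady power output, while even a small excess above 1 produces runaway growth within a few generations.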
Nuclear fission fuel Nuclear weapons employ high quality, highly enriched fuel exceeding the critical size and geometry (critical mass) necessary in order to obtain an explosive chain reaction. The fuel for energy purposes, such as in a nuclear fission reactor, is very different, usually consisting of a low-enriched oxide material (e.g. UO2). There are two primary isotopes used for fission reactions inside of nuclear reactors. The first and most common is U-235 or uranium-235. This is the fissile isotope of uranium and it makes up approximately 0.7% of all naturally occurring uranium. Because of the small amount of uranium-235 that exists, it is considered a non-renewable energy source despite being found in rock formations around the world. U-235 cannot be used as fuel in its base form for energy production. It must undergo a process known as refinement to produce the compound UO2 or uranium dioxide. The uranium dioxide is then pressed and formed into ceramic pellets, which can subsequently be placed into fuel rods. This is when the compound uranium dioxide can be used for nuclear power production. The second most common isotope used in nuclear fission is Pu-239 or plutonium-239. This is due to its ability to become fissile with slow neutron interaction. This isotope is formed inside nuclear reactors through exposing U-238 to the neutrons released by the radioactive U-235 isotope. This neutron capture causes beta particle decay that enables U-238 to transform into Pu-239. Plutonium was once found naturally in the earth's crust but only trace amounts remain. The only way it is accessible in large quantities for energy production is through the neutron capture method. Another proposed fuel for nuclear reactors, which however plays no commercial role as of 2021, is which is "bred" by neutron capture and subsequent beta decays from natural thorium, which is almost 100% composed of the isotope Thorium-232. This is called the Thorium fuel cycle. 
Enrichment Process The fissile isotope uranium-235 in its natural concentration is unfit for the vast majority of nuclear reactors. In order to be prepared for use as fuel in energy production, it must be enriched. The enrichment process does not apply to plutonium. Reactor-grade plutonium is created as a byproduct of neutron interaction between two different isotopes of uranium. The first step to enriching uranium begins by converting uranium oxide (created through the uranium milling process) into a gaseous form. This gas is known as uranium hexafluoride, which is created by combining hydrogen fluoride, fluorine gas, and uranium oxide. Uranium dioxide is also present in this process, and it is sent off to be used in reactors not requiring enriched fuel. The remaining uranium hexafluoride compound is drained into strong metal cylinders where it solidifies. The next step is separating the enriched uranium hexafluoride from the depleted fraction left over. This is typically done with centrifuges that spin fast enough to allow the roughly 1% mass difference between the uranium isotopes to separate them. Laser separation can also be used to enrich the hexafluoride compound. The final step involves reconverting the now enriched compound back into uranium oxide, leaving the final product: enriched uranium oxide. This form of UO2 can now be used in fission reactors inside power plants to produce energy. Fission reaction products When a fissile atom undergoes nuclear fission, it breaks into two or more fission fragments. Also, several free neutrons, gamma rays, and neutrinos are emitted, and a large amount of energy is released. The sum of the rest masses of the fission fragments and ejected neutrons is less than the sum of the rest masses of the original atom and incident neutron (of course the fission fragments are not at rest).
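The "missing" rest mass is released as energy via E = Δm·c². A quick order-of-magnitude check in Python, assuming an illustrative mass defect of about 0.2 atomic mass units for a single fission (the exact value varies by reaction):

```python
# Energy released by a single fission, from an assumed mass defect of ~0.2 u.
u_to_kg = 1.66053906660e-27   # kg per atomic mass unit
c = 2.99792458e8              # speed of light, m/s
MeV_per_J = 1 / 1.602176634e-13

delta_m = 0.2 * u_to_kg                  # assumed illustrative mass defect
energy_MeV = delta_m * c**2 * MeV_per_J  # on the order of ~200 MeV

# Compare with a typical chemical-scale energy of a few eV:
ratio = energy_MeV * 1e6 / 13.6          # vs hydrogen's 13.6 eV binding energy
print(energy_MeV, ratio)
```

A fraction of an atomic mass unit thus yields energy some ten million times larger than a typical chemical bond, consistent with the eV-versus-hundreds-of-MeV comparison in the text.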
The mass difference is accounted for in the release of energy according to the equation E = Δmc2. Due to the extremely large value of the speed of light, c, a small decrease in mass is associated with a tremendous release of active energy (for example, the kinetic energy of the fission fragments). This energy (in the form of radiation and heat) carries the missing mass when it leaves the reaction system (total mass, like total energy, is always conserved). While typical chemical reactions release energies on the order of a few eV (e.g. the binding energy of the electron to hydrogen is 13.6 eV), nuclear fission reactions typically release energies on the order of hundreds of millions of eV. Two typical fission reactions are shown below with average values of energy released and number of neutrons ejected: Note that these equations are for fissions caused by slow-moving (thermal) neutrons. The average energy released and number of neutrons ejected are functions of the incident neutron speed. Also, note that these equations exclude energy from neutrinos, since these subatomic particles are extremely non-reactive and, therefore, rarely deposit their energy in the system. Timescales of nuclear chain reactions Prompt neutron lifetime The prompt neutron lifetime, l, is the average time between the emission of neutrons and either their absorption in the system or their escape from the system. Neutrons that occur directly from fission are called "prompt neutrons", and those that result from the radioactive decay of fission fragments are called "delayed neutrons". The term lifetime is used because the emission of a neutron is often considered its "birth", and the subsequent absorption is considered its "death". For thermal (slow-neutron) fission reactors, the typical prompt neutron lifetime is on the order of 10−4 seconds, and for fast fission reactors, the prompt neutron lifetime is on the order of 10−7 seconds.
These extremely short lifetimes mean that in 1 second, 10,000 to 10,000,000 neutron lifetimes can pass. The average (also referred to as the adjoint unweighted) prompt neutron lifetime takes into account all prompt neutrons regardless of their importance in the reactor core; the effective prompt neutron lifetime (referred to as the adjoint weighted over space, energy, and angle) refers to a neutron with average importance. Mean generation time The
daimoku, the object of worship (honzon), and the ordination platform (kaidan). These became the means for people to directly access the Buddha's enlightenment. At the bottom of each mandala he wrote: "This is the great mandala never before revealed in Jambudvipa during the more than 2,200 years since the Buddha's nirvana." He inscribed many Mandala Gohonzon during the rest of his life. More than a hundred Mandala Gohonzon preserved today are attributed to Nichiren's own hand. Return to Kamakura Nichiren was pardoned on 14 February 1274 and returned to Kamakura one month later on 26 March. Nichiren wrote that his innocence and the accuracy of his predictions caused the regent Hōjō Tokimune to intercede on his behalf. Scholars have suggested that some of his well-connected followers might have had influence on the government's decision to release him. On 8 April he was summoned by Hei no Saemon, who inquired about the timing of the next Mongol invasion. Nichiren predicted that it would occur within the year. He used the audience as yet another opportunity to remonstrate with the government. Claiming that reliance on prayers based on esoteric rituals would invite further calamity, he urged the bakufu to ground itself exclusively on the Lotus Sutra. Deeply disappointed by the government's refusal to heed his advice, Nichiren left Kamakura one month later, on 12 May, determined to become a solitary wayfarer. Five days later, however, on a visit to the residence of Lord Hakii Sanenaga of Mt. Minobu, he learned that followers in nearby regions had held steadfast during his exile. Despite severe weather and deprivation, Nichiren remained in Minobu for the rest of his career. Retirement to Mount Minobu During his self-imposed exile at Mount Minobu, a location 100 miles west of Kamakura, Nichiren led a widespread movement of followers in Kanto and Sado mainly through his prolific letter-writing. 
During the so-called "Atsuhara affair" of 1279, when governmental attacks were aimed at Nichiren's followers rather than himself, Nichiren's letters reveal an assertive and well-informed leader who provided detailed instructions through a sophisticated network of disciples serving as liaisons between Minobu and other affected areas in Japan. He also showed the ability to provide a compelling narrative of events that gave his followers a broad perspective of what was unfolding. More than half of the extant letters of Nichiren were written during his years at Minobu. Some consisted of moving letters to followers expressing appreciation for their assistance, counseling on personal matters, and explaining his teachings in more understandable terms. Two of his works from this period, the and the constitute, along with his Risshō Ankoku Ron ("On Establishing the Correct Teaching for the Peace of the Land"), Kaimoku Shō ("The Opening of the Eyes"), and Kanjin no Honzon Shō ("The Object of Devotion for Observing the Mind"), what is commonly regarded as his five major writings. During his years at Minobu, Nichiren intensified his attacks on that had been incorporated into the Japanese Tendai school. By this point it is clear that he understood he was creating his own form of Lotus Buddhism. Nichiren and his disciples completed the in 1281. In the 19th century this structure burned down and was replaced by a new structure completed in the second half of the Meiji era. While at Minobu, Nichiren also inscribed numerous Mandala Gohonzon for bestowal upon specific disciples and lay believers. Nichiren Shoshu believers claim that after the execution of the three Atsuhara farmers he inscribed the Dai Gohonzon on 12 October 1279, a Gohonzon specifically addressed to all humanity. This assertion has been disputed by other schools as historically and textually incorrect.
It is enshrined currently at the Tahō Fuji Dai-Nichirenge-Zan Taiseki-ji, informally known as the Head Temple Taiseki-ji of the Nichiren Shōshū Order of Buddhism, located at the foot of Mount Fuji in Fujinomiya, Shizuoka. Several of these Mandala Gohonzon are prominently retained by the Nichiren-shū in Yamanashi Prefecture. Others survive today in the repositories of Nichiren Shōshū temples such as in Fujinomiya, Shizuoka, which has a particularly large collection of scrolls that is publicly aired once a year. It is apparent that Nichiren took great care in deciding which of his disciples were eligible to receive a Gohonzon inscribed by him. In a letter written to Lady Niiama, he took great care to explain why he would not inscribe a Gohonzon for her despite a deep personal bond. Among the Gohonzon he inscribed were several that were quite large and perhaps intended for congregational use in chapels maintained by some lay followers. Death In 1282, after years of seclusion, Nichiren fell ill. His followers encouraged him to travel to the hot springs in Hitachi for their medicinal benefits and the warmer weather, and to use the land offered by Hakii Sanenaga for recuperation. En route, unable to travel further, he stopped at the home of a disciple in Ikegami, outside of present-day Tokyo, and died on 13 October 1282. According to legend, he died in the presence of fellow disciples after having spent several days lecturing from his sickbed on the Lotus Sutra, writing a final letter, and leaving instructions for the future of his movement after his death, namely the designation of the six senior disciples. His funeral and cremation took place the following day. His disciples left Ikegami with Nichiren's ashes on 21 October, arriving back at Minobu on 25 October. The Nichiren Shū school claims his tomb is sited, per his request, at Kuon-ji on Mount Minobu, where his ashes remain.
Nichiren Shoshu asserts that Nikko Shonin later confiscated his cremated ashes along with other articles and brought them to Mount Fuji, where, they claim, the ashes are now enshrined on the left side next to the Dai Gohonzon within the Hoando storage house. Teachings Nichiren's teachings developed over the course of his career, and their evolution can be seen through the study of his writings as well as in the annotations he made in his personal copy of the Lotus Sutra, the so-called Chū-hokekyō. Some scholars set a clear demarcation in his teachings at the time he arrived at Sado Island, whereas others see a threefold division of thought: up to and through the Izu exile, from his return to Kamakura through the Sado Island exile, and during his years at Minobu. According to Anesaki, Nichiren, upon his arrival at Minobu, quickly turned his attention to consolidating his teachings toward their perpetuation. The scope of his thinking was outlined in an essay , considered by Nikkō Shōnin as one of Nichiren's ten major writings. Anesaki also claims that later, during his Minobu years, in lectures he is said to have transmitted to his disciples, Nichiren summarized the key ideas of his teachings in one paragraph: Buddhahood is eternal, and all people can and should manifest it in their lives; Nichiren is the personage in the Lotus Sutra whose mission it is to enable people to realize their enlightenment; his followers who share his vow are the Bodhisattvas of the Earth. This requires a spiritual and moral unity among followers based on their inherent Buddhahood; Nichiren established the seeds of this community, and his followers to come must extend it globally. Thus the enlightened individual, country, and world are different expressions of the ideal of the Buddha land; and the enlightened heart of the individual plays out its role with the world and cosmos as its stage.
This is Nichiren's vision of Kosen-rufu, a time when the teachings of the Lotus Sutra would be widely spread throughout the world. Nichiren set a precedent for Buddhist social activism centuries before its emergence in other Buddhist schools. The uniqueness of his teachings was his attempt to move Buddhism from the theoretical to the actualizable. He held adamantly that his teachings would permit a nation to right itself and ultimately lead to world peace. Some of his religious thinking was derived from the Tendai understanding of the Lotus Sutra, syncretic beliefs that were deeply rooted in the culture of his times, and new perspectives that were products of Kamakura Buddhism. Other ideas were completely original and unique to him. Contributions based on Tendai or contemporary thought Nichiren was a product of his times and some of his teachings were drawn from existing schools of thought or from emerging ideas in Kamakura Buddhism. Nichiren appropriated and expanded on these ideas. Immanence Nichiren stressed the concept of immanence, meaning that the Buddha's pure land is to be found in this present world (shaba soku jakkōdo). Related concepts such as attaining enlightenment in one's current form (sokushin jōbutsu) and the belief that enlightenment is not attained but is originally existing within all people (hongaku) had been introduced by Kūkai and Saicho several centuries earlier. These concepts were based on Chih-i's cosmology of the unity and interconnectedness of the universe called Three Thousand Realms in a Single Moment of Life (ichinen sanzen). Nichiren advanced these concepts by declaring that they were actualizable rather than theoretical. Cause and effect were simultaneous instead of linear. Contemplation of one's mind (kanjin) took place within the singular belief in and commitment to the Lotus Sutra. 
According to Nichiren these phenomena manifest when a person chants the title of the Lotus Sutra (date) and shares its validity with others, even at the cost of one's life if need be. Nichiren constructed a triad relationship between faith, practice, and study. Faith meant embracing his new paradigm of the Lotus Sutra. It was something that needed to be continually deepened. "To accept (ju) [faith in the sutra] is easy," he explained to a follower, "to uphold it (ji) is difficult. But the realization of Buddhahood lies in upholding [faith]." This could only be manifested by the practice of chanting the daimoku as well as teaching others to do the same, and study. Consequently, Nichiren consistently and vehemently objected to the perspective of the Pure Land school that stressed an other-worldly aspiration to some pure land. Behind his assertion is the concept of the nonduality of the subjective realm (the individual) and the objective realm (the land that the individual inhabits) which indicates that when the individual taps buddhahood, his or her present world becomes peaceful and harmonious. For Nichiren the widespread propagation of the Lotus Sutra and consequent world peace ("kosen-rufu") was achievable and inevitable and tasked his future followers with a mandate to accomplish it. The Latter Day of the Law The Kamakura period of 13th century Japan was characterized by a sense of foreboding. Nichiren, as well as the others of this time, believed that they had entered the Latter Day of the Law (Mappō), the time which Shakyamuni predicted his teachings would lose their efficacy. Indeed, Japan had entered an era of extreme natural disasters, internal strife and political conflict. Although Nichiren attributed the turmoils and disasters in society to the widespread practice of what he deemed inferior Buddhist teachings that were under government sponsorship, he was enthusiastically upbeat about the portent of the age. 
He asserted, in contrast to other Mahayana schools, this was the best possible moment to be alive, the era in which the Lotus Sutra was to spread, and the time in which the Bodhisattvas of the Earth would appear to propagate it. "It is better to be a leper who chants Nam(u)-myōhō-renge-kyō than be a chief abbot of the Tendai school." Debate and polemics The tradition of conducting open and sustained debate to clarify matters of fundamental Buddhist principles has deep-seated roots in Tibet, China, and Korea. This tradition was also quite pronounced in Japan. In addition to formalized religious debates, the Kamakura period was marked by flourishing and competitive oral religious discourse. Temples began to compete for the patronage of the wealthy and powerful through oratorical sermonizing and temple lecturers (kōshi) faced pressure to attract crowds. Sermonizing spread from within the confines of temples to homes and the streets as wandering mendicants (shidōso, hijiri, or inja) preached to both the educated and illiterate in exchange for alms. In order to teach principles of faith preachers incorporated colorful storytelling, music, vaudeville, and drama—which later evolved into Noh. A predominant topic of debate in Kamakura Buddhism was the concept of rebuking "slander of the Dharma." The Lotus Sutra itself strongly warns about slander of the Dharma. Hōnen, in turn, employed harsh polemics instructing people to , , , and the Lotus Sutra and other non-Pure Land teachings. His ideas were vociferously attacked by many including Nichiren. Nichiren, however, elevated countering slander of the Dharma into a pillar of Buddhist practice. In fact, far more of his extant writings deal with the clarification of what constitutes the essence of Buddhist teachings than expositions of how to meditate. 
At age 32, Nichiren began a career of denouncing other Mahayana Buddhist schools of his time and declaring what he asserted was the correct teaching, the Universal Dharma (Nam(u)-Myōhō-Renge-Kyō), and chanting its words as the only path for both personal and social salvation. The first target of his polemics was Pure Land Buddhism which had begun to gain ascendancy among the leaders and populace and even had established itself within the Tendai school. Nichiren's detailed rationale is most famously articulated in his : "Treatise On Establishing the Correct Teaching for the Peace of the Land," his first major treatise and the first of his three remonstrations with the bakufu authorities. Although his times were harsh and permeated by bakufu culture, Nichiren always chose the power of language over bearing arms or resorting to violence. He didn't mince his words and was relentless to pursue dialogue whether in the form of debate, conversations, or correspondence. His spirit of engaging in discourse is captured in his statement, "Whatever obstacles I may encounter, as long as men [persons] of wisdom do not prove my teachings to be false, I will never yield." "Single Practice" Buddhism Hōnen introduced the concept of "single practice" Buddhism. Basing himself on the writings of the Chinese Buddhist Shandao, he advocated the singular practice of Nianfo, the recitation of the Buddha Amida's name. This practice was revolutionary because it was accessible to all and minimalized the monopolistic role of the entire monastic establishment. Nichiren appropriated the structure of a universally accessible single practice but substituted the Nianfo with the daimoku of Nam(u)-myōhō-renge-kyō. This constituted renouncing the principle of aspirating to a Pure Land after death and asserting instead the Lotus perspective
seemingly fulfilling the prediction he had made in the Rissho Ankoku Ron of rebellion in the domain. At this point Nichiren was transferred to much better accommodations. While on Sado Island, Nichiren inscribed the first Mandala Gohonzon. Although there is evidence of a Gohonzon in embryonic form as far back as the days right before his exile, the first in full form is dated to 8 July 1273 and includes the inscription "Nichiren inscribes this for the first time." His writings on Sado provide his rationale for a calligraphic mandala depicting the assembly at Eagle Peak, which was to be used as an object of devotion or worship. By increasingly associating himself with Visistacaritra he implied a direct link to the original and universal Buddha. He read in the 16th ("Life Span") chapter of the Lotus Sutra a three-fold "secret Dharma" of the daimoku, the object of worship (honzon), and the ordination platform (kaidan). These became the means for people to directly access the Buddha's enlightenment. At the bottom of each mandala he wrote: "This is the great mandala never before revealed in Jambudvipa during the more than 2,200 years since the Buddha's nirvana." He inscribed many Mandala Gohonzon during the rest of his life. More than a hundred Mandala Gohonzon preserved today are attributed to Nichiren's own hand. Return to Kamakura Nichiren was pardoned on 14 February 1274 and returned to Kamakura one month later, on 26 March. Nichiren wrote that his innocence and the accuracy of his predictions caused the regent Hōjō Tokimune to intercede on his behalf. Scholars have suggested that some of his well-connected followers may have influenced the government's decision to release him. On 8 April he was summoned by Hei no Saemon, who inquired about the timing of the next Mongol invasion. Nichiren predicted that it would occur within the year. He used the audience as yet another opportunity to remonstrate with the government. 
Claiming that reliance on prayers based on esoteric rituals would invite further calamity, he urged the bakufu to ground itself exclusively on the Lotus Sutra. Deeply disappointed by the government's refusal to heed his advice, Nichiren left Kamakura one month later, on 12 May, determined to become a solitary wayfarer. Five days later, however, on a visit to the residence of Lord Hakii Sanenaga of Mt. Minobu, he learned that followers in nearby regions had remained steadfast during his exile. Despite severe weather and deprivation, Nichiren remained at Minobu for the rest of his career. Retirement to Mount Minobu During his self-imposed exile at Mount Minobu, a location 100 miles west of Kamakura, Nichiren led a widespread movement of followers in Kanto and Sado mainly through his prolific letter-writing. During the so-called "Atsuhara affair" of 1279, when governmental attacks were aimed at Nichiren's followers rather than at Nichiren himself, his letters reveal an assertive and well-informed leader who provided detailed instructions through a sophisticated network of disciples serving as liaisons between Minobu and the other affected areas in Japan. He also showed the ability to provide a compelling narrative of events that gave his followers a broad perspective of what was unfolding. More than half of Nichiren's extant letters were written during his years at Minobu. Some were moving letters to followers expressing appreciation for their assistance, counseling them on personal matters, and explaining his teachings in more understandable terms. Two of his works from this period, the Senji Shō ("The Selection of the Time") and the Hō'on Shō ("On Repaying Debts of Gratitude"), constitute, along with his Risshō Ankoku Ron ("On Establishing the Correct Teaching for the Peace of the Land"), Kaimoku Shō ("The Opening of the Eyes"), and Kanjin no Honzon Shō ("The Object of Devotion for Observing the Mind"), what are commonly regarded as his five major writings. 
During his years at Minobu Nichiren intensified his attacks on the esoteric teachings that had been incorporated into the Japanese Tendai school. It becomes clear at this point that he understood he was creating his own form of Lotus Buddhism. Nichiren and his disciples completed the temple Kuon-ji in 1281. In the 19th century this structure burned down; it was replaced by a new structure completed in the second half of the Meiji era. While at Minobu Nichiren also inscribed numerous Mandala Gohonzon for bestowal upon specific disciples and lay believers. Nichiren Shoshu believers claim that after the execution of the three Atsuhara farmers he inscribed the Dai Gohonzon on 12 October 1279, a Gohonzon specifically addressed to all humanity. This assertion has been disputed by other schools as historically and textually incorrect. It is currently enshrined at the Tahō Fuji Dai-Nichirenge-Zan Taiseki-ji, informally known as the Head Temple Taiseki-ji of the Nichiren Shōshū Order of Buddhism, located at the foot of Mount Fuji in Fujinomiya, Shizuoka. Several of these Mandala Gohonzon are prominently retained by the Nichiren-shū in Yamanashi Prefecture. Others survive today in the repositories of Nichiren Shōshū temples, such as in Fujinomiya, Shizuoka, which has a particularly large collection of scrolls that is publicly aired once a year. It is apparent that Nichiren took great care in deciding which of his disciples were eligible to receive a Gohonzon inscribed by him. In a letter to Lady Niiama, for example, he explained at length why he would not inscribe a Gohonzon for her despite a deep personal bond. Among the Gohonzon he inscribed were several that were quite large and perhaps intended for congregational use in chapels maintained by some lay followers. Death In 1282, after years of seclusion, Nichiren fell ill. His followers encouraged him to travel to the hot springs in Hitachi for their medicinal benefits. 
He was also encouraged by his disciples to travel there for the warmer weather, and to use the land offered by Hagiri Sanenaga for recuperation. En route, unable to travel further, he stopped at the home of a disciple in Ikegami, outside of present-day Tokyo, and died on 13 October 1282. According to legend, he died in the presence of fellow disciples after having spent several days lecturing from his sickbed on the Lotus Sutra, writing a final letter, and leaving instructions for the future of his movement after his death, namely the designation of the six senior disciples. His funeral and cremation took place the following day. His disciples left Ikegami with Nichiren's ashes on 21 October, arriving back at Minobu on 25 October. The Nichiren Shū school claims that his tomb is sited, per his request, at Kuon-ji on Mount Minobu, where his ashes remain. Nichiren Shoshu asserts that Nikko Shonin later confiscated his cremated ashes along with other articles and brought them to Mount Fuji, where, they claim, they are now enshrined on the left side next to the Dai Gohonzon within the Hoando storage house. Teachings Nichiren's teachings developed over the course of his career, and their evolution can be seen through the study of his writings as well as in the annotations he made in his personal copy of the Lotus Sutra, the so-called Chū-hokekyō. Some scholars set a clear demarcation in his teachings at the time he arrived at Sado Island, whereas others see a threefold division of thought: up to and through the Izu exile, from his return to Kamakura through the Sado Island exile, and during his years at Minobu. According to Anesaki, Nichiren, upon his arrival at Minobu, quickly turned his attention to consolidating his teachings toward their perpetuation. The scope of his thinking was outlined in an essay that Nikkō Shōnin considered one of Nichiren's ten major writings. 
Anesaki also claims that later in his Minobu years, in lectures said to have been transmitted to his disciples, Nichiren summarized the key ideas of his teachings in one paragraph: Buddhahood is eternal, and all people can and should manifest it in their lives; Nichiren is the personage in the Lotus Sutra whose mission it is to enable people to realize their enlightenment; his followers who share his vow are the Bodhisattvas of the Earth. This requires a spiritual and moral unity among followers based on their inherent Buddhahood; Nichiren established the seeds of this community, and his followers to come must extend it globally. Thus the enlightened individual, country, and world are different expressions of the ideal of the Buddha land; and the enlightened heart of the individual plays out its role with the world and cosmos as its stage. This is Nichiren's vision of Kosen-rufu, a time when the teachings of the Lotus Sutra would be widely spread throughout the world. Nichiren set a precedent for Buddhist social activism centuries before its emergence in other Buddhist schools. The uniqueness of his teachings lay in his attempt to move Buddhism from the theoretical to the actualizable. He held adamantly that his teachings would permit a nation to right itself and ultimately lead to world peace. Some of his religious thinking was derived from the Tendai understanding of the Lotus Sutra, from syncretic beliefs that were deeply rooted in the culture of his times, and from new perspectives that were products of Kamakura Buddhism. Other ideas were completely original and unique to him. Contributions based on Tendai or contemporary thought Nichiren was a product of his times, and some of his teachings were drawn from existing schools of thought or from emerging ideas in Kamakura Buddhism. Nichiren appropriated and expanded on these ideas. Immanence Nichiren stressed the concept of immanence, meaning that the Buddha's pure land is to be found in this present world (shaba soku jakkōdo). 
Related concepts, such as attaining enlightenment in one's current form (sokushin jōbutsu) and the belief that enlightenment is not attained but originally exists within all people (hongaku), had been introduced by Kūkai and Saichō several centuries earlier. These concepts were based on Chih-i's cosmology of the unity and interconnectedness of the universe, called Three Thousand Realms in a Single Moment of Life (ichinen sanzen). Nichiren advanced these concepts by declaring that they were actualizable rather than theoretical. Cause and effect were simultaneous instead of linear. Contemplation of one's mind (kanjin) took place within the singular belief in and commitment to the Lotus Sutra. According to Nichiren these phenomena manifest when a person chants the title of the Lotus Sutra (daimoku) and shares its validity with others, even at the cost of one's life if need be. Nichiren constructed a triad relationship between faith, practice, and study. Faith meant embracing his new paradigm of the Lotus Sutra. It was something that needed to be continually deepened. "To accept (ju) [faith in the sutra] is easy," he explained to a follower, "to uphold it (ji) is difficult. But the realization of Buddhahood lies in upholding [faith]." This could only be manifested by the practice of chanting the daimoku, by teaching others to do the same, and by study. Consequently, Nichiren consistently and vehemently objected to the perspective of the Pure Land school that stressed an other-worldly aspiration to some pure land. Behind his assertion is the concept of the nonduality of the subjective realm (the individual) and the objective realm (the land that the individual inhabits), which indicates that when the individual taps buddhahood, his or her present world becomes peaceful and harmonious. 
For Nichiren the widespread propagation of the Lotus Sutra and the consequent world peace ("kosen-rufu") were achievable and inevitable, and he tasked his future followers with a mandate to accomplish it. The Latter Day of the Law The Kamakura period of 13th-century Japan was characterized by a sense of foreboding. Nichiren, like others of his time, believed that they had entered the Latter Day of the Law (Mappō), the time in which, Shakyamuni had predicted, his teachings would lose their efficacy. Indeed, Japan had entered an era of extreme natural disasters, internal strife, and political conflict. Although Nichiren attributed the turmoils and disasters in society to the widespread practice of what he deemed inferior Buddhist teachings under government sponsorship, he was enthusiastically upbeat about the portent of the age. He asserted, in contrast to other Mahayana schools, that this was the best possible moment to be alive, the era in which the Lotus Sutra was to spread, and the time in which the Bodhisattvas of the Earth would appear to propagate it. "It is better to be a leper who chants Nam(u)-myōhō-renge-kyō than be a chief abbot of the Tendai school." Debate and polemics The tradition of conducting open and sustained debate to clarify matters of fundamental Buddhist principles has deep-seated roots in Tibet, China, and Korea. This tradition was also quite pronounced in Japan. In addition to formalized religious debates, the Kamakura period was marked by flourishing and competitive oral religious discourse. Temples began to compete for the patronage of the wealthy and powerful through oratorical sermonizing, and temple lecturers (kōshi) faced pressure to attract crowds. Sermonizing spread from within the confines of temples to homes and the streets as wandering mendicants (shidōso, hijiri, or inja) preached to both the educated and the illiterate in exchange for alms. 
To teach principles of faith, preachers incorporated colorful storytelling, music, vaudeville, and drama—which later evolved into Noh. A predominant topic of debate in Kamakura Buddhism was the concept of rebuking "slander of the Dharma." The Lotus Sutra itself strongly warns against slander of the Dharma. Hōnen, in turn, employed harsh polemics, instructing people to "discard" (sha), "close" (hei), "set aside" (kaku), and "abandon" (hō) the Lotus Sutra and other non-Pure Land teachings. His ideas were vociferously attacked by many, including Nichiren. Nichiren, however, elevated countering slander of the Dharma into a pillar of Buddhist practice. In fact, far more of his extant writings deal with the clarification of what constitutes the essence of Buddhist teachings than with expositions of how to meditate. At age 32, Nichiren began a career of denouncing other Mahayana Buddhist schools of his time and declaring what he asserted was the correct teaching, the Universal Dharma (Nam(u)-Myōhō-Renge-Kyō), and the chanting of its words as the only path for both personal and social salvation. The first target of his polemics was Pure Land Buddhism, which had begun to gain ascendancy among the leaders and populace and had even established itself within the Tendai school. Nichiren's detailed rationale is most famously articulated in his Risshō Ankoku Ron ("Treatise on Establishing the Correct Teaching for the Peace of the Land"), his first major treatise and the first of his three remonstrations with the bakufu authorities. Although his times were harsh and permeated by bakufu culture, Nichiren always chose the power of language over bearing arms or resorting to violence. He did not mince words and relentlessly pursued dialogue, whether in the form of debate, conversation, or correspondence. His spirit of engaging in discourse is captured in his statement, "Whatever obstacles I may encounter, as long as men [persons] of wisdom do not prove my teachings to be false, I will never yield." 
"Single Practice" Buddhism Hōnen introduced the concept of "single practice" Buddhism. Basing himself on the writings of the Chinese Buddhist Shandao, he advocated the singular practice of Nianfo, the recitation of the Buddha Amida's name. This practice was revolutionary because it was accessible to all and minimized the monopolistic role of the entire monastic establishment. Nichiren appropriated the structure of a universally accessible single practice but substituted the daimoku of Nam(u)-myōhō-renge-kyō for the Nianfo. This constituted renouncing the principle of aspiring to a Pure Land after death and asserting instead the Lotus perspective of attaining Buddhahood in one's present form in this lifetime. Protective forces Japan had a long-established system of folk beliefs that existed outside of and parallel to the schools of the Buddhist establishment. Many of these beliefs influenced the various religious schools, which, in turn, influenced each other, a phenomenon known as syncretism. Among these beliefs was the existence of kami, indigenous gods and goddesses or protective forces that influenced human and natural occurrences in a holistic universe. Some beliefs ascribed kami to traces of the Buddha. The belief in kami was deeply embedded in the episteme of the time. Human agency through prayers and rituals could summon forth kami who would engage in nation-protection (chingo kokka). According to some of his accounts, Nichiren undertook his study of Buddhism largely to understand why the kami had seemingly abandoned Japan, as witnessed by the decline of the imperial court. Because the court and the people had turned to teachings that had weakened their minds and resolve, he came to conclude, both people of wisdom and the protective forces had abandoned the nation. 
By extension, he argued, through proper prayer and action his troubled society would be transformed into an ideal world in which peace and wisdom prevail and "the wind will not thrash the branches nor the rain fall hard enough to break clods." Unique teachings From Nichiren's corpus emerge several lines of unique Buddhist thought. "The Five Guides of Propagation" Developed during his Izu exile, the Five Guides (gogi) are five criteria through which Buddhist teachings can be evaluated and ranked. They are the quality of the teaching (kyō), the innate human capacity (ki) of the people, the time (ji), the characteristics of the land or country (koku), and the sequence of dharma propagation (kyōhō rufu no zengo). From these five interrelated perspectives Nichiren declared his interpretation of the Lotus Sutra to be the supreme teaching. The Four Denunciations Throughout his career Nichiren harshly denounced Buddhist practices other than his own, as well as the existing social and political system. The tactic he adopted was shakubuku, an assertive method of conversion in which he shocked his adversaries with his denunciations while attracting followers through his outward display of supreme confidence. Modern detractors criticize his exclusivist single-truth perspective as intolerant. Apologists counter that his arguments should be understood in the context of his samurai society and not through post-modern lenses such as tolerance. Both sides may be said to grasp an aspect of the truth: Nichiren, rather like Dōgen, was no less brilliantly original for being a rigid dogmatist in doctrine. As his career advanced, Nichiren's vehement polemics against Pure Land teachings came to include sharp criticisms of the Shingon, Zen, and Ritsu schools of Buddhism. Collectively his criticisms have become known as "the Four Denunciations." Later in his career he critiqued the Japanese Tendai school for its appropriation of Shingon elements. 
Reliance on Shingon rituals, he claimed, amounted to magic and would bring the nation to ruin. He held that Zen was devilish in its belief that attaining enlightenment was possible without relying on the Buddha's words; Ritsu was thievery because it hid behind token deeds such as public works. In modern parlance, the Four Denunciations rebuked thinking that demoralized and disengaged people by encouraging resignation and escapism. The doctrine of the Three Great Secret Laws Nichiren deemed the world to be in a degenerate age and believed that people required a simple and effective means to rediscover the core of Buddhism and thereby restore their spirits and their times. He described his Three Great Secret Laws (Sandai hiho) as this very means. In a writing entitled Sandai Hiho Sho, or "On the Transmission of the Three Great Secret Laws," Nichiren delineated three teachings at the heart of the 16th chapter of the Lotus Sutra, which are secret because he claimed to have received them, as the leader of the Bodhisattvas of the Earth, through a silent transmission from Shakyamuni. They are the invocation (daimoku), the object of worship (honzon), and the platform of ordination or place of worship (kaidan). The daimoku, the rhythmic chanting of Nam(u)-myōhō-renge-kyō, is the means to discover that one's own life, the lives of others, and the environment are the essence of the Buddha of absolute freedom. The chanting is to be done while contemplating the honzon. At the age of 51, Nichiren inscribed his own Mandala Gohonzon, the object of veneration or worship in his Buddhism, "never before known," as he described it. The Gohonzon is a calligraphic representation of the cosmos, and chanting daimoku to it is Nichiren's method of meditation to experience the truth of Buddhism. He believed this practice was efficacious, simple to perform, and suited to the capacity of the people and the time. 
Nichiren describes the first two secret laws in numerous other writings, but the reference to the platform of ordination appears only in the Sandai Hiho Sho, a work whose authenticity has been questioned by some scholars. Nichiren apparently left the fulfillment of this secret Dharma to his successors, and its interpretation has been a matter of heated debate. Some state that it refers to the construction of a physical national ordination platform sanctioned by the emperor; others contend that the ordination platform is the community of believers (sangha) or, simply, the place where practitioners of the Lotus Sutra live and make collective efforts to realize the ideal of establishing the true Dharma and thereby bringing peace to the land (rissho ankoku). The latter conception entails a robust interplay between religion and secular life and an egalitarian structure in which people are dedicated to perfecting an ideal society. According to Nichiren, practicing the Three Secret Laws results in the "Three Proofs," which verify their validity. The first proof is "documentary": whether the religion's fundamental texts (here the writings of Nichiren) make a lucid case for its eminence. "Theoretical proof" is an intellectual standard of whether a religion's teachings reasonably clarify the mysteries of life and death. "Actual proof," deemed the most important by Nichiren, demonstrates the validity of the teaching through the actual
the Lotus Sutra with the vow to spread the correct teaching and thereby establish a peaceful and just society. For Nichiren, enlightenment is not limited to one's inner life but is "something that called for actualization in endeavors toward the transformation of the land, toward the realization of an ideal society." The specific task to be pursued by Nichiren's disciples was the widespread propagation of his teachings (the invocation and the Gohonzon) in a way that would effect actual change in the world's societies so that the sanctuary, or seat, of Buddhism could be built. Nichiren saw this sanctuary as a specific seat of his Buddhism, but it is thought that he also meant it in a more general sense, that is, wherever his Buddhism would be practiced. This sanctuary, along with the invocation and Gohonzon, comprises "the three great secret laws (or dharmas)" found in the Lotus Sutra. Nichiren and his time Nichiren Buddhism originated in 13th-century feudal Japan. It is one of the six new schools of Shin Bukkyo ("New Buddhism") that emerged within Kamakura Buddhism. The arrival of these new schools was a response to the social and political upheaval in Japan during this time, as power passed from the nobility to a shogunate military dictatorship led by the Minamoto clan and later to the Hōjō clan. A prevailing pessimism was associated with the perceived arrival of the Age of the Latter Day of the Law. The era was marked by an intertwining relationship between Buddhist schools and the state, which included clerical corruption. By Nichiren's time the Lotus Sūtra was firmly established in Japan. From the ninth century, Japanese rulers decreed that the Lotus Sūtra be recited in temples for its "nation-saving" qualities. It was the most frequently read and recited sutra by the literate lay class, and its message was disseminated widely through art, folk tales, music, and theater. 
It was commonly held that it had powers to bestow spiritual and worldly benefits upon individuals. However, even Mount Hiei, the seat of Tiantai Lotus Sutra devotion, had come to adopt an eclectic assortment of esoteric rituals and Pure Land practices as "expedient means" to understand the sutra itself. Development during Nichiren's life Nichiren developed his thinking amid this confusion of Lotus Sutra practices and a competing array of other "Old Buddhism" and "New Buddhism" schools. The biographical development of his thinking is sourced almost entirely from his extant writings, as there is no documentation about him in the public records of his times. Modern scholarship on Nichiren's life employs textual and sociohistorical analysis to separate the longstanding myths that accrued around Nichiren over time from what can actually be documented. It is clear that from an early point in his studies Nichiren came to focus on the Lotus Sutra as the culmination and central message of Shakyamuni. As his life unfolded he engaged in a "circular hermeneutic" in which the interplay of the Lotus Sutra text and his personal experiences verified and enriched each other in his mind. As a result, there are significant turning points as his teachings reached full maturity. Scholar Yoshirō Tamura categorizes the development of Nichiren's thinking into three periods: An early period extending up to Nichiren's submission of the "Risshō Ankoku Ron" ("Establishment of the Legitimate Teaching for the Protection of the Country") to Hōjō Tokiyori in 1260; A middle period bounded by his first exile (to the Izu Peninsula, 1261) and his release from his second exile (to Sado Island, 1273); A final period (1274–1282) in which Nichiren lived at Mount Minobu, directing his movement from afar. 
Early stage: From initial studies to 1260 For more than 20 years Nichiren examined Buddhist texts and commentaries at Mount Hiei's Enryaku-ji temple and other major centers of Buddhist study in Japan. In later writings he claimed he was motivated by four primary questions: (1) What were the essentials of the competing Buddhist sects so they could be ranked according to their merits and flaws? (2) Which of the many Buddhist scriptures that had reached Japan represented the essence of Shakyamuni's teaching? (3) How could he be assured of the certainty of his own enlightenment? (4) Why was the Imperial house defeated by the Kamakura regime in 1221 despite the prayers and rituals of Tendai and Shingon priests? He eventually concluded that the highest teachings of Shakyamuni Buddha ( – ) were to be found in the Lotus Sutra. Throughout his career Nichiren carried his personal copy of the Lotus Sutra which he continually annotated. The mantra he expounded on 28 April 1253, known as the Daimoku or Odaimoku, Namu Myōhō Renge Kyō, expresses his devotion to the Lotus Sutra. From this early stage of his career, Nichiren started to engage in fierce polemics criticizing the teachings of Buddhism taught by the other sects of his day, a practice that continued and expanded throughout his life. Although Nichiren accepted the Tendai theoretical constructs of "original enlightenment" (hongaku shisō) and "attaining Buddhahood in one's present form" (sokushin jobutsu) he drew a distinction, insisting both concepts should be seen as practical and realizable amidst the concrete realities of daily life. He took issue with other Buddhist schools of his time that stressed transcendence over immanence. Nichiren's emphasis on "self-power" (Jpn. ji-riki) led him to harshly criticize Honen and his Pure Land Buddhism school because of its exclusive reliance on Amida Buddha for salvation which resulted in "other-dependence." (Jpn. 
ta-riki) In addition to his critique of Pure Land Buddhism, he later expanded his polemics to criticisms of the Zen, Shingon, and Ritsu sects. These four critiques were later collectively referred to as his "four dictums." Later in his writings, Nichiren referred to his early exegeses of the Pure Land teachings as just the starting point for his polemics against the esoteric teachings, which he had deemed as a far more significant matter of concern. Adding to his criticisms of esoteric Shingon, Nichiren wrote detailed condemnations about the Tendai school which had abandoned its Lotus Sutra-exclusiveness and incorporated esoteric doctrines and rituals as well as faith in the soteriological power of Amida Buddha. The target of his tactics expanded during the early part of his career. Between 1253 and 1259 he proselytized and converted individuals, mainly attracting mid- to lower-ranking samurai and local landholders and debated resident priests in Pure Land temples. In 1260, however, he attempted to directly reform society as a whole by submitting a treatise entitled "Risshō Ankoku Ron" ("Establishment of the Legitimate Teaching for the Protection of the Country") to Hōjō Tokiyori, the de facto leader of the nation. In it he cites passages from the Ninnō, Yakushi, Daijuku, and Konkōmyō sutras. Drawing on Tendai thinking about the non duality of person and land, Nichiren argued that the truth and efficacy of the people's religious practice will be expressed in the outer conditions of their land and society. He thereby associated the natural disasters of his age with the nation's attachment to inferior teachings, predicted foreign invasion and internal rebellion, and called for the return to legitimate dharma to protect the country. Although the role of Buddhism in "nation-protection" (chingo kokka) was well-established in Japan at this time, in this thesis Nichiren explicitly held the leadership of the country directly responsible for the safety of the land. 
Middle stage: 1261–1273 During the middle stage of his career, in refuting other religious schools publicly and vociferously, Nichiren provoked the ire of the country's rulers and of the priests of the sects he criticized. As a result, he was subjected to persecution which included two assassination attempts, an attempted beheading and two exiles. His first exile, to Izu Peninsula (1261–1263), convinced Nichiren that he was "bodily reading the Lotus Sutra (Jpn. Hokke shikidoku)," fulfilling the predictions on the 13th chapter (Fortitude) that votaries would be persecuted by ignorant lay people, influential priests, and their friends in high places. Nichiren began to argue that through "bodily reading the Lotus Sutra," rather than just studying its text for literal meaning, a country and its people could be protected. According to Habito, Nichiren argued that bodily reading the Lotus Sutra entails four aspects: The awareness of Śākyamuni Buddha's living presence. "Bodily reading the Lotus Sutra" is equivalent to entering the very presence of the Buddha in an immediate, experiential, and face-to-face way, he claimed. Here Nichiren is referring to the primordial buddha revealed in Chapter 16 ("Life Span of the Thus Come One") who eternally appears and engages in human events in order to save living beings from their state of unhappiness. One contains all. Nichiren further developed the Tiantai doctrine of "three thousand realms in a single thought-moment". Every thought, word, or deed contains within itself the whole of the three thousand realms; reading even one word of the sūtra therefore includes the teachings and merits of all buddhas. Chanting Namu Myōhō Renge Kyō, according to Nichiren, is the concrete means by which the principle of the three thousand realms in a single thought-moment is activated and assures the attainment of enlightenment as well as receiving various kinds of worldly benefit. The here and now. 
Nichiren held that the bodily reading of the sūtra must be applicable to time, place, and contemporary events. Nichiren was acutely aware of the social and political turmoil of his country and spiritual confusion of people in the Latter Day of the Law. Utmost seriousness. True practitioners must go beyond mental or verbal practices and actively speak up against and oppose prevailing thoughts and philosophies that denigrate the message of the Lotus Sutra. Nichiren set the example and was willing to lay down his life for its propagation and realization. His three-year exile to Sado Island proved to be another key turning point in Nichiren's life. Here he began inscribing the Gohonzon and wrote several major theses in which he claimed that he was Bodhisattva Superior Practices, the leader of the Bodhisattvas of the Earth. He concludes his work The Opening of the Eyes with the declaration "I will be the pillar of Japan; I will be the eyes of Japan; I will be the vessel of Japan. Inviolable shall remain these vows!" His thinking now went beyond theories of karmic retribution or guarantees of the Lotus Sutra as a protective force. Rather, he expressed a resolve to fulfill his mission despite the consequences. All of his disciples, he asserted, should emulate his spirit and work just like him in helping all people open their innate Buddha lives even though this means entails encountering enormous challenges. Final stage: 1274–1282 Nichiren's teachings reached their full maturity between the years 1274 and 1282 while he resided in primitive settings at Mount Minobu located in today's Yamanashi Prefecture. During this time he devoted himself to training disciples, produced most of the Gohonzon which he sent to followers, and authored works constituting half of his extant writings including six treatises that were categorized by his follower Nikkō as among his ten most important. In 1278 the "Atsuhara Affair" ("Atsuhara Persecution") occurred, culminating three years later. 
In the prior stage of his career, between 1261 and 1273, Nichiren endured and overcame numerous trials that were directed at him personally including assassination attempts, an attempted execution, and two exiles, thereby "bodily reading the Lotus Sutra" (shikidoku 色読). In so doing, according to him, he validated the 13th ("Fortitude") chapter of the Lotus Sutra in which a host of bodhisattvas promise to face numerous trials that follow in the wake of upholding and spreading the sutra in the evil age following the death of the Buddha: slander and abuse; attack by swords and staves; enmity from kings, ministers, and respected monks; and repeated banishment. On two occasions, however, the persecution was aimed at his followers. First, in 1271, in conjunction with the arrest and attempted execution of Nichiren and his subsequent exile to Sado, many of his disciples were arrested, banished, or had lands confiscated by the government. At that time, Nichiren stated, most recanted their faith in order to escape the government's actions. In contrast, during the Atsuhara episode twenty lay peasant-farmer followers were arrested on questionable charges and tortured; three were ultimately executed. This time none recanted their faith. Some of his prominent followers in other parts of the country were also being persecuted but maintained their faith as well. Although Nichiren was situated in Minobu, far from the scene of the persecution, the Fuji district of present-day Shizuoka Prefecture, Nichiren held his community together in the face of significant oppression through a sophisticated display of legal and rhetorical responses. He also drew on a wide array of support from the network of leading monks and lay disciples he had raised, some of whom were also experiencing persecution at the hands of the government. Throughout the events he wrote many letters to his disciples in which he gave context to the unfolding events by asserting that severe trials have deep significance. 
According to Stone, "By standing firm under interrogation, the Atsuhara peasants had proved their faith in Nichiren's eyes, graduating in his estimation from 'ignorant people' to devotees meriting equally with himself the name of 'practitioners of the Lotus Sutra.'" During this time Nichiren inscribed 114 mandalas that are extant today, 49 of which have been identified as being inscribed for individual lay followers and which may have served to deepen the bond between teacher and disciple. In addition, a few very large mandalas were inscribed, apparently intended for use at gathering places, suggesting the existence of some type of conventicle structure. The Atsuhara Affair also gave Nichiren the opportunity to better define what was to become Nichiren Buddhism. He stressed that meeting great trials was a part of the practice of the Lotus Sutra; the great persecutions of Atsuhara were not results of karmic
as there is no documentation about him in the public records of his times. Modern scholarship on Nichiren's life applies sophisticated textual and sociohistorical analysis to separate the longstanding myths about Nichiren that accrued over time from what can actually be substantiated. It is clear that from an early point in his studies Nichiren came to focus on the Lotus Sutra as the culmination and central message of Shakyamuni. As his life unfolded he engaged in a "circular hermeneutic" in which the interplay of the Lotus Sutra text and his personal experiences verified and enriched each other in his mind. As a result, there are significant turning points at which his teachings reached full maturity. Scholar Yoshirō Tamura categorizes the development of Nichiren's thinking into three periods: An early period extending up to Nichiren's submission of the "Risshō Ankoku Ron" ("Establishment of the Legitimate Teaching for the Protection of the Country") to Hōjō Tokiyori in 1260; A middle period bracketed by his first exile (to Izu Peninsula, 1261) and his release from his second exile (to Sado Island, 1273); A final period (1274–1282) in which Nichiren lived at Mount Minobu directing his movement from afar. Early stage: From initial studies to 1260 For more than 20 years Nichiren examined Buddhist texts and commentaries at Mount Hiei's Enryaku-ji temple and other major centers of Buddhist study in Japan. In later writings he claimed he was motivated by four primary questions: (1) What were the essentials of the competing Buddhist sects so they could be ranked according to their merits and flaws? (2) Which of the many Buddhist scriptures that had reached Japan represented the essence of Shakyamuni's teaching? (3) How could he be assured of the certainty of his own enlightenment? (4) Why was the Imperial house defeated by the Kamakura regime in 1221 despite the prayers and rituals of Tendai and Shingon priests?
He eventually concluded that the highest teachings of Shakyamuni Buddha ( – ) were to be found in the Lotus Sutra. Throughout his career Nichiren carried his personal copy of the Lotus Sutra, which he continually annotated. The mantra he expounded on 28 April 1253, known as the Daimoku or Odaimoku, Namu Myōhō Renge Kyō, expresses his devotion to the Lotus Sutra. From this early stage of his career, Nichiren began to engage in fierce polemics criticizing the teachings of Buddhism taught by the other sects of his day, a practice that continued and expanded throughout his life. Although Nichiren accepted the Tendai theoretical constructs of "original enlightenment" (hongaku shisō) and "attaining Buddhahood in one's present form" (sokushin jobutsu), he drew a distinction, insisting both concepts should be seen as practical and realizable amidst the concrete realities of daily life. He took issue with other Buddhist schools of his time that stressed transcendence over immanence. Nichiren's emphasis on "self-power" (Jpn. ji-riki) led him to harshly criticize Honen and his Pure Land Buddhism school because of its exclusive reliance on Amida Buddha for salvation, which resulted in "other-dependence" (Jpn. ta-riki). In addition to his critique of Pure Land Buddhism, he later expanded his polemics to criticisms of the Zen, Shingon, and Ritsu sects. These four critiques were later collectively referred to as his "four dictums." Later in his writings, Nichiren referred to his early exegeses of the Pure Land teachings as just the starting point for his polemics against the esoteric teachings, which he deemed a far more significant matter of concern. Adding to his criticisms of esoteric Shingon, Nichiren wrote detailed condemnations of the Tendai school, which had abandoned its Lotus Sutra-exclusiveness and incorporated esoteric doctrines and rituals as well as faith in the soteriological power of Amida Buddha.
The target of his tactics expanded during the early part of his career. Between 1253 and 1259 he proselytized and converted individuals, mainly attracting mid- to lower-ranking samurai and local landholders, and debated resident priests in Pure Land temples. In 1260, however, he attempted to directly reform society as a whole by submitting a treatise entitled "Risshō Ankoku Ron" ("Establishment of the Legitimate Teaching for the Protection of the Country") to Hōjō Tokiyori, the de facto leader of the nation. In it he cites passages from the Ninnō, Yakushi, Daijuku, and Konkōmyō sutras. Drawing on Tendai thinking about the non-duality of person and land, Nichiren argued that the truth and efficacy of the people's religious practice would be expressed in the outer conditions of their land and society. He thereby associated the natural disasters of his age with the nation's attachment to inferior teachings, predicted foreign invasion and internal rebellion, and called for a return to the legitimate dharma to protect the country. Although the role of Buddhism in "nation-protection" (chingo kokka) was well established in Japan at this time, in this treatise Nichiren explicitly held the leadership of the country directly responsible for the safety of the land. Middle stage: 1261–1273 During the middle stage of his career, in refuting other religious schools publicly and vociferously, Nichiren provoked the ire of the country's rulers and of the priests of the sects he criticized. As a result, he was subjected to persecution which included two assassination attempts, an attempted beheading, and two exiles. His first exile, to Izu Peninsula (1261–1263), convinced Nichiren that he was "bodily reading the Lotus Sutra (Jpn. Hokke shikidoku)," fulfilling the predictions in the 13th chapter (Fortitude) that votaries would be persecuted by ignorant lay people, influential priests, and their friends in high places.
Nichiren began to argue that through "bodily reading the Lotus Sutra," rather than just studying its text for literal meaning, a country and its people could be protected. According to Habito, Nichiren argued that bodily reading the Lotus Sutra entails four aspects: The awareness of Śākyamuni Buddha's living presence. "Bodily reading the Lotus Sutra" is equivalent to entering the very presence of the Buddha in an immediate, experiential, and face-to-face way, he claimed. Here Nichiren is referring to the primordial buddha revealed in Chapter 16 ("Life Span of the Thus Come One") who eternally appears and engages in human events in order to save living beings from their state of unhappiness. One contains all. Nichiren further developed the Tiantai doctrine of "three thousand realms in a single thought-moment". Every thought, word, or deed contains within itself the whole of the three thousand realms; reading even one word of the sūtra therefore includes the teachings and merits of all buddhas. Chanting Namu Myōhō Renge Kyō, according to Nichiren, is the concrete means by which the principle of the three thousand realms in a single thought-moment is activated, assuring the attainment of enlightenment as well as the receipt of various kinds of worldly benefit. The here and now. Nichiren held that the bodily reading of the sūtra must be applicable to time, place, and contemporary events. Nichiren was acutely aware of the social and political turmoil of his country and the spiritual confusion of people in the Latter Day of the Law. Utmost seriousness. True practitioners must go beyond mental or verbal practices and actively speak up against and oppose prevailing thoughts and philosophies that denigrate the message of the Lotus Sutra. Nichiren set the example and was willing to lay down his life for its propagation and realization. His three-year exile to Sado Island proved to be another key turning point in Nichiren's life.
Here he began inscribing the Gohonzon and wrote several major treatises in which he claimed that he was Bodhisattva Superior Practices, the leader of the Bodhisattvas of the Earth. He concludes his work The Opening of the Eyes with the declaration "I will be the pillar of Japan; I will be the eyes of Japan; I will be the vessel of Japan. Inviolable shall remain these vows!" His thinking now went beyond theories of karmic retribution or guarantees of the Lotus Sutra as a protective force. Rather, he expressed a resolve to fulfill his mission despite the consequences. All of his disciples, he asserted, should emulate his spirit and work just as he did in helping all people open their innate Buddha lives, even though this entails encountering enormous challenges. Final stage: 1274–1282 Nichiren's teachings reached their full maturity between the years 1274 and 1282, while he resided in primitive settings at Mount Minobu, located in today's Yamanashi Prefecture. During this time he devoted himself to training disciples, produced most of the Gohonzon which he sent to followers, and authored works constituting half of his extant writings, including six treatises that were categorized by his follower Nikkō as among his ten most important. In 1278 the "Atsuhara Affair" ("Atsuhara Persecution") occurred, culminating three years later. In the prior stage of his career, between 1261 and 1273, Nichiren endured and overcame numerous trials that were directed at him personally, including assassination attempts, an attempted execution, and two exiles, thereby "bodily reading the Lotus Sutra" (shikidoku 色読).
In so doing, according to him, he validated the 13th ("Fortitude") chapter of the Lotus Sutra in which a host of bodhisattvas promise to face numerous trials that follow in the wake of upholding and spreading the sutra in the evil age following the death of the Buddha: slander and abuse; attack by swords and staves; enmity from kings, ministers, and respected monks; and repeated banishment. On two occasions, however, the persecution was aimed at his followers. First, in 1271, in conjunction with the arrest and attempted execution of Nichiren and his subsequent exile to Sado, many of his disciples were arrested, banished, or had lands confiscated by the government. At that time, Nichiren stated, most recanted their faith in order to escape the government's actions. In contrast, during the Atsuhara episode twenty lay peasant-farmer followers were arrested on questionable charges and tortured; three were ultimately executed. This time none recanted their faith. Some of his prominent followers in other parts of the country were also being persecuted but maintained their faith as well. Although Nichiren was situated at Minobu, far from the scene of the persecution in the Fuji district of present-day Shizuoka Prefecture, he held his community together in the face of significant oppression through a sophisticated display of legal and rhetorical responses. He also drew on a wide array of support from the network of leading monks and lay disciples he had raised, some of whom were also experiencing persecution at the hands of the government. Throughout the events he wrote many letters to his disciples in which he gave context to the unfolding events by asserting that severe trials have deep significance.
According to Stone, "By standing firm under interrogation, the Atsuhara peasants had proved their faith in Nichiren's eyes, graduating in his estimation from 'ignorant people' to devotees meriting equally with himself the name of 'practitioners of the Lotus Sutra.'" During this time Nichiren inscribed 114 mandalas that are extant today, 49 of which have been identified as being inscribed for individual lay followers and which may have served to deepen the bond between teacher and disciple. In addition, a few very large mandalas were inscribed, apparently intended for use at gathering places, suggesting the existence of some type of conventicle structure. The Atsuhara Affair also gave Nichiren the opportunity to better define what was to become Nichiren Buddhism. He stressed that meeting great trials was a part of the practice of the Lotus Sutra; the great persecutions of Atsuhara were not results of karmic retribution but were the historical unfolding of the Buddhist Dharma. The vague "single good of the true vehicle" which he advocated in the Risshō ankoku ron now took final form as chanting the Lotus Sutra's daimoku or title, which he described as the heart of the "origin teaching" (honmon 本門) of the Lotus Sutra. This, he now claimed, lay hidden in the depths of the 16th ("The Life Span of the Tathāgata") chapter, never before revealed, but intended by the Buddha solely for the beginning of the Final Dharma Age. Nichiren's writings A prolific writer, Nichiren left personal communiqués to his followers as well as numerous treatises that detail his view of the correct form of practice for the Latter Day of the Law (mappō); lay out his views on other Buddhist schools, particularly those of influence during his lifetime; and elucidate his interpretations of the Buddhist teachings that preceded his. These writings are collectively known as Gosho (御書) or Nichiren ibun (日蓮遺文). Out of 162 historically identified followers of Nichiren, 47 were women.
Many of his writings were addressed to women followers, and in them he displays strong empathy for their struggles and continually stresses the Lotus Sutra's teaching that all people, men and women equally, can become enlightened just as they are. His voice is sensitive and kind, differing from the strident picture painted of him by critics. Which of these writings, including the Ongi Kuden (orally transmitted teachings), are deemed authentic or apocryphal is a matter of debate within the various schools of today's Nichiren Buddhism. His Rissho Ankoku Ron, preserved at Shochuzan Hokekyo-ji, is one of the National Treasures of Japan. Post-Nichiren development in Japan Development in Medieval Japan After Nichiren's death in 1282, the Kamakura shogunate weakened, largely due to financial and political stresses resulting from defending the country from the Mongols. It was replaced by the Ashikaga shogunate (1336–1573), which in turn was succeeded by the Azuchi–Momoyama period (1573–1600), and then the Tokugawa shogunate (1600–1868). During these periods, collectively comprising Japan's medieval history, Nichiren Buddhism experienced considerable fracturing, growth, turbulence, and decline. A prevailing characteristic of the movement in medieval Japan was its lack of understanding of Nichiren's own spiritual realization. Serious commentaries on Nichiren's theology did not appear for almost two hundred years. This contributed to divisive doctrinal confrontations that were often superficial and dogmatic. This long history of foundings, divisions, and mergers has led to today's 37 legally incorporated Nichiren Buddhist groups. In the modern period, Nichiren Buddhism experienced a revival, largely initiated by lay people and lay movements. Development of the major lineages Several denominations fall under the umbrella term "Nichiren Buddhism"; the movement was known at the time as the Hokkeshū (Lotus School) or Nichirenshū (Nichiren School).
The splintering of Nichiren's teachings into different schools began several years after Nichiren's passing. Despite their differences, however, the Nichiren groups shared commonalities: asserting the primacy of the Lotus Sutra, tracing Nichiren as their founder, centering religious practice on chanting Namu-myoho-renge-kyo, using the Gohonzon in meditative practice, insisting on the need for propagation, and participating in remonstrations with the authorities. The movement was supported financially by local warlords or stewards (jitõ) who often founded tightly organized clan temples (ujidera) that were frequently led by sons who became priests. Most Nichiren schools point to the founding date of their respective head or main temple (for example, Nichiren Shū the year 1281, Nichiren Shōshū the year 1288, and Kempon Hokke Shu the year 1384) although they did not legally incorporate as religious bodies until the late 19th and early 20th centuries. A last wave of temple mergers took place in the 1950s. The roots of this splintering can be traced to the organization of the Nichiren community during his life. In 1282, shortly before his death, Nichiren named "six senior priests" (rokurōsō) among his disciples to lead his community: Nikkō Shonin (日興), Nisshō (日昭), Nichirō (日朗), Nikō (日向), Nitchō (日頂), and Nichiji (日持). Each had led communities of followers in different parts of the Kanto region of Japan, and these groups, after Nichiren's death, ultimately morphed into lineages of schools. Nikō, Nichirō, and Nisshō were the core of the Minobu (also known as the Nikō or Kuon-ji) monryu or school. Nikō became the second chief abbot of Minobu (Nichiren is considered by this school to be the first). Nichirō's direct lineage was called the Nichirō or Hikigayatsu monryu. Nisshō's lineage became the Nisshō or Hama monryu. Nitchō formed the Nakayama lineage but later returned to become a follower of Nikkō.
Nichiji, originally another follower of Nikkō, eventually traveled to the Asian continent (ca. 1295) on a missionary journey; some scholarship suggests he reached northern China, Manchuria, and possibly Mongolia. Kuon-ji Temple at Mount Minobu eventually became the head temple of today's Nichiren Shū, the largest branch among traditional schools, encompassing the schools and temples tracing their origins to Nikō, Nichirō, Nisshō, Nitchō, and Nichiji. The lay and/or new religious movements Reiyūkai, Risshō Kōsei Kai, and Nipponzan-Myōhōji-Daisanga stem from this lineage. Nikkō left Kuon-ji in 1289 and became the founder of what was to be called the Nikkō monryu or lineage. He founded a center at the foot of Mount Fuji which would later be known as the Taisekiji temple of Nichiren Shōshū. Soka Gakkai is the largest independent lay organization that shares roots with this lineage. Fault lines between the various Nichiren groups crystallized over several issues: Local gods. Honoring local gods (kami) was a deeply embedded and ritualized part of Japanese village life, and the Nichiren schools clashed over whether lay disciples of Nichiren could continue the practice. Some argued that this practice was a necessary accommodation. The group led by the monk Nikkō objected to such syncretism. Content of the Lotus Sūtra. Some schools (called Itchi) argued that all chapters of the sūtra should be equally valued, while others (called Shōretsu) claimed that the latter half was superior to the former half. (See below for more details.) Identity of Nichiren. Some of his later disciples identified him with Visistacaritra, the leader of the Bodhisattvas of the Earth who were entrusted in Chapter Twenty-Two to propagate the Lotus Sūtra. The Nikkō group identified Nichiren as the original and eternal Buddha. Identification with the Tiantai school.
The Nisshō group began to identify itself as a Tiantai school, having no objections to its esoteric practices, perhaps as an expedient means to avoid persecution from Tiantai, Pure Land, and Shingon followers. This deepened the rift with Nikkō. The Three Gems. All schools of Buddhism speak of the concept of the Three Gems (the Buddha, the Dharma, and the Sangha) but define it differently. Over the centuries the Nichiren schools have also come to understand it differently. The Minobu school identifies the Buddha as Shakyamuni, whereas the Nikkō school identifies it as Nichiren. For Minobu the Dharma is Namu-myoho-renge-kyo; the Nikkō school identifies it as the Namu-myoho-renge-kyo that is hidden in the 16th "Lifespan" chapter of the Lotus Sutra (the Gohonzon). Currently, Nichiren Shoshu claims this specifically refers to the Dai Gohonzon, whereas Soka Gakkai holds that it represents all Gohonzon. The Sangha, sometimes translated as "the priest", is also interpreted differently: Minobu defines it as Nichiren; Nichiren Shoshu as Nikkō, representing its priesthood; and the Soka Gakkai as Nikkō, representing the harmonious community of practitioners. The cleavage between Nichiren groups has also been classified by the so-called Itchi (meaning unity or harmony) and Shoretsu (a contraction of two words meaning superior/inferior) lineages. The Itchi lineage today comprises most of the traditional schools within Nichiren Buddhism, of which the Nichiren Shū is the biggest representative, although it also includes some Nikkō temples. In this lineage the whole of the Lotus Sutra, both the so-called theoretical (shakumon or "Imprinted Gate") and essential (honmon or "Original Gate") chapters, is venerated. While great attention is given to the 2nd and 16th chapters of the Lotus Sutra, other parts of the sutra are recited as well. The Shoretsu lineage comprises most temples and lay groups following the Nikkō monryu.
The Shoretsu group values the supremacy of the essential over the theoretical part of the Lotus Sutra; therefore, only the 2nd and 16th chapters of the Lotus Sutra are recited. There are additional subdivisions within the Shoretsu group, which splintered over whether to emphasize the entire second half, the eight chapters of the second half during which the assembly participates in "The Ceremony in the Air," or specifically Chapter Sixteen (Lifespan of the Tathāgata). Origin of the Fuji School Although there were rivalries and unique interpretations among the early Hokkeshũ lineages, none were as deep and distinct as the divide between the Nikkō or Fuji school and the rest of the tradition. Animosity and discord among the six senior disciples surfaced after Nichiren's 100th Day Memorial ceremony (23 January 1283), when a rotation system for cleaning and maintaining Nichiren's grave was agreed upon, as recorded in the "Shuso Gosenge Kiroku" (English: Record document of founder's demise) and the Rimbo Cho (English: Rotation Wheel System). By the third anniversary of Nichiren's passing (13 October 1284), these arrangements seemed to have broken down. Nikkō claimed that the other five senior priests no longer returned to Nichiren's tomb at Mount Minobu, citing signs of neglect at the gravesite. He took up residency and overall responsibility for Kuon-ji temple while Nikō served as its doctrinal instructor. Before long tensions grew between the two concerning the behavior of Hakii Nanbu Rokurō Sanenaga, the steward of the Minobu district and the temple's patron. Nikkō accused Sanenaga of unorthodox practices deemed to be heretical, such as crafting a standing statue of Shakyamuni Buddha
, , and ). As of November 2017 this work was underway for the sixth Nimitz-class vessel, . History Industrialist Collis P. Huntington (1821–1900) provided crucial funding to complete the Chesapeake and Ohio Railroad (C&O) from Richmond, Virginia to the Ohio River in the early 1870s. Although originally built for general commerce, this C&O rail link to the midwest was soon also being used to transport bituminous coal from the previously isolated coalfields, adjacent to the New River and the Kanawha River in West Virginia. In 1881, the Peninsula Extension of the C&O was built from Richmond down the Virginia Peninsula to reach a new coal pier on Hampton Roads in Warwick County near the small unincorporated community of Newport News Point. However, building the railroad and coal pier was only the first part of Huntington's dreams for Newport News. The shipyard's early years In 1886, Huntington built a shipyard to repair ships servicing this transportation hub. In 1891 Newport News Shipbuilding and Drydock Company delivered its first ship, the tugboat Dorothy. By 1897 NNS had built three warships for the US Navy: , and . When Collis died in 1900, his nephew Henry E. Huntington inherited much of his uncle's fortune. He also married Collis' widow Arabella Huntington, and assumed Collis' leadership role with Newport News Shipbuilding and Drydock Company. Under Henry Huntington's leadership, growth continued. In 1906 the revolutionary launched a great naval race worldwide. Between 1907 and 1923, Newport News built six of the US Navy's total of 22 dreadnoughts – , , , , and . All but the first were in active service in World War II. In 1907 President Theodore Roosevelt sent the Great White Fleet on its round-the-world voyage. NNS had built seven of its 16 battleships. In 1914 NNS built SS Medina for the Mallory Steamship Company; as she was until 2009 the world's oldest active ocean-faring passenger ship. 
Newport News and the shipyard In the early years, leaders of the Newport News community and those of the shipyard were virtually interchangeable. Shipyard president Walter A. Post served from March 9, 1911 to February 12, 1912, when he died. Earlier, he had come to the area as one of the builders of the C&O Railway's terminals, and had served as the first mayor of Newport News after it became an independent city in 1896. On March 14, 1914, Albert Lloyd Hopkins, a young New Yorker trained in engineering, succeeded Post as president of the company. In May 1915, while Hopkins was traveling to England on shipyard business aboard , his tenure and life ended prematurely when that ship was torpedoed and sunk by a German U-boat off Queenstown on the Irish coast. His assistant, Frederic Gauntlett, was also on board, but was able to swim to safety. Homer Lenoir Ferguson was company vice president when Hopkins died, and assumed the presidency the following August. He saw the company through both world wars, became a noted community leader, and was a co-founder of the Mariners' Museum with Archer Huntington. He served until July 31, 1946, after World War II had ended on both the European and Pacific fronts. Just northwest of the shipyard, Hilton Village, one of the first planned communities in the country, was built by the federal government in 1918 to house shipyard workers. The planners met with the wives of shipyard workers, and based on their input 14 house plans were designed for the projected 500 English-village-style homes. After the war, in 1922, Henry Huntington acquired it from the government and helped facilitate the sale of the homes to shipyard employees and other local residents. Three streets there were named after Post, Hopkins, and Ferguson. Navy orders during and after World War I The Lusitania incident was among the events that brought the United States into World War I.
Between 1918 and 1920 NNS delivered 25 destroyers, and after the war it began building aircraft carriers. was delivered in 1934, and NNS went on to build and . Ocean liners After World War I NNS completed a major reconditioning and refurbishment of the ocean liner . Before the war she had been the German liner Vaterland, but the start of hostilities found her laid up in New York Harbor, and she was seized by the US Government in 1917 and converted into a troopship. War duty and age meant that during the refurbishment all wiring, plumbing, and interior layouts were stripped and redesigned, while her hull was strengthened and her boilers were converted from coal to oil. Virtually a new
Systems of equations variables, functions One may also use Newton's method to solve systems of equations, which amounts to finding the (simultaneous) zeroes of continuously differentiable functions . This is equivalent to finding the zeroes of a single vector-valued function . In the formulation given above, the scalars are replaced by vectors and instead of dividing the function by its derivative one instead has to left multiply the function by the inverse of its Jacobian matrix . This results in the expression . Rather than actually computing the inverse of the Jacobian matrix, one may save time and increase numerical stability by solving the system of linear equations for the unknown . variables, equations, with The -dimensional variant of Newton's method can be used to solve systems of greater than (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix instead of the inverse of . If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information. In a Banach space Another generalization is Newton's method to find a root of a functional defined in a Banach space. In this case the formulation is where is the Fréchet derivative computed at . One needs the Fréchet derivative to be boundedly invertible at each in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem. Over -adic numbers In -adic analysis, the standard method to show a polynomial equation in one variable has a -adic root is Hensel's lemma, which uses the recursion from Newton's method on the -adic numbers. Because of the more stable behavior of addition and multiplication in the -adic numbers compared to the real numbers (specifically, the unit ball in the -adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line. Newton–Fourier method The Newton–Fourier method is Joseph Fourier's extension of Newton's method to provide bounds on the absolute error of the root approximation, while still providing quadratic convergence. Assume that is twice continuously differentiable on and that contains a root in this interval. Assume that on this interval (this is the case for instance if , , and , and on this interval). This guarantees that there is a unique root on this interval, call it . If it is concave down instead of concave up then replace by since they have the same roots. Let be the right endpoint of the interval and let be the left endpoint of the interval. Given , define which is just Newton's method as before.
Then define where the denominator is and not . The iterations will be strictly decreasing to the root while the iterations will be strictly increasing to the root. Also, so that the distance between and decreases quadratically. Quasi-Newton methods When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used. -analog Newton's method can be generalized with the -analog of the usual derivative. Modified Newton methods Maehly's procedure A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions of , the next root can be found by applying Newton's method to the next equation: This method is applied to obtain zeros of the Bessel function of the second kind. Hirano's modified Newton method Hirano's modified Newton method is a modification preserving the convergence of Newton's method while avoiding instability. It was developed to solve complex polynomials. Interval Newton's method Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial). Consider , where is a real interval, and suppose that we have an interval extension of , meaning that takes as input an interval and outputs an interval such that: We also assume that , so in particular has at most one root in .
We then define the interval Newton operator by: where . Note that the hypothesis on implies that is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence: The mean value theorem ensures that if there is a root of in , then it is also in . Moreover, the hypothesis on ensures that is at most half the size of when is the midpoint of , so this sequence converges towards , where is the root of in . If strictly contains 0, the use of extended interval division produces a union of two intervals for ; multiple roots are therefore automatically separated and bounded. Applications Minimization and maximization problems Newton's method can be used to find a minimum or maximum of a function . The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes: Multiplicative inverses of numbers and power series An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number , using only multiplication and subtraction, that is to say the number such that . We can rephrase that as finding the zero of . We have . Newton's iteration is Therefore, Newton's iteration needs only two multiplications and one subtraction. This method is also very efficient to compute the multiplicative inverse of a power series. Solving transcendental equations Many transcendental equations can be solved using Newton's method. Given the equation with and/or a transcendental function, one writes The values of that solve the original equation are then the roots of , which may be found via Newton's method. Obtaining zeros of special functions Newton's method is applied to the ratio of Bessel functions in order to obtain its root. 
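The Newton–Raphson division update described above can be made concrete. The following sketch is written in Python rather than the article's Julia, purely for illustration; the function name and initial-guess choice are assumptions, not from the article. The key point is that the iteration x_{k+1} = x_k(2 − a·x_k) uses only multiplication and subtraction:

```python
def reciprocal(a, x0, iterations=8):
    """Approximate 1/a without division.

    Newton's method applied to f(x) = 1/x - a reduces to
    x_{k+1} = x_k * (2 - a * x_k): two multiplications and one
    subtraction per step. The initial guess x0 must lie in
    (0, 2/a) for the iteration to converge.
    """
    x = x0
    for _ in range(iterations):
        x = x * (2 - a * x)  # division-free Newton step
    return x

# Example: approximate 1/7 starting from a rough guess of 0.1.
approx = reciprocal(7.0, 0.1)
print(approx)  # converges toward 0.142857...
```

Because the error is squared at every step, a handful of iterations suffices once the initial guess is in the convergence interval.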
Numerical verification for solutions of nonlinear equations A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates. CFD modeling An iterative Newton-Raphson procedure was employed in order to impose a stable Dirichlet boundary condition in CFD, as a quite general strategy to model current and potential distribution for electrochemical cell stacks. Examples Square root Consider the problem of finding the square root of a number , that is to say the positive number such that . Newton's method is one of many methods of computing square roots. We can rephrase that as finding the zero of . We have . For example, for finding the square root of 612 with an initial guess , the sequence given by Newton's method is: where the correct digits are underlined. With only a few iterations one can obtain a solution accurate to many decimal places. Rearranging the formula as follows yields the Babylonian method of finding square roots: i.e. the arithmetic mean of the guess, and . Solution of Consider the problem of finding the positive number with . We can rephrase that as finding the zero of . We have . Since for all and for , we know that our solution lies between 0 and 1. For example, with an initial guess , the sequence given by Newton's method is (note that a starting value of 0 will lead to an undefined result, showing the importance of using a starting point that is close to the solution): The correct digits are underlined in the above example. In particular, is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for ) to 5 and 10, illustrating the quadratic convergence. Code The following is an example implementation of Newton's method in the Julia programming language for finding a root of a function f which has derivative fprime. The initial guess will be and the function will be so that .
Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if , since otherwise a large amount of error could be introduced.

x0 = 1 # The initial guess
f(x) = x^2 - 2 # The function whose root we are trying to find
fprime(x) = 2x # The derivative of the function
tolerance = 1e-7 # 7 digit accuracy is desired
epsilon = 1e-14 # Do not divide by a number smaller than this
maxIterations = 20 # Do not allow the iterations to continue indefinitely
solutionFound = false # Have not converged to a solution yet

for i = 1:maxIterations
    y = f(x0)
    yprime = fprime(x0)

    if abs(yprime) < epsilon # Stop if the denominator is too small
        break
    end

    global x1 = x0 - y/yprime # Do Newton's computation

    if abs(x1 - x0) <= tolerance # Stop when the result is within the desired tolerance
        global solutionFound = true
        break
    end

    global x0 = x1 # Update x0 to start the process again
end

if solutionFound
    println("Solution: ", x1) # x1 is a solution within tolerance and maximum number of iterations
else
    println("Did not converge") # Newton's method did not converge
end

See also Aitken's delta-squared process Bisection method Euler method
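For systems of equations, as discussed earlier, each Newton step solves a linear system in the Jacobian rather than inverting it. The following is a minimal sketch (in Python, for illustration; the example equations, function names, and starting point are assumptions, not from the article) for a 2×2 system, where the linear solve can be done directly with Cramer's rule:

```python
def newton_system(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton's method for a 2x2 nonlinear system F(x, y) = 0.

    Each step solves J * delta = -F for the update delta instead of
    forming the inverse Jacobian; for a 2x2 matrix this is done here
    with Cramer's rule.
    """
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)            # J = [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule for J*delta = -F
        dy = (f1 * c - f2 * a) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:     # step size small: converged
            break
    return x, y

# Illustrative system: x^2 + y^2 = 4 and x*y = 1.
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: (2 * x, 2 * y, y, x)   # rows of the Jacobian
x, y = newton_system(F, J, 2.0, 0.5)
print(x, y)  # a simultaneous root of both equations
```

For larger systems one would replace the Cramer's-rule step with a general linear solver, which is exactly the "solve rather than invert" advice in the text.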
In some cases, the conditions on the function that are necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval where the method converges. This can happen, for example, if the function whose root is sought approaches zero asymptotically as goes to or . In such cases a different method, such as bisection, should be used to obtain a better estimate for the zero to use as an initial point. Iteration point is stationary Consider the function: It has a maximum at and solutions of at . If we start iterating from the stationary point (where the derivative is zero), will be undefined, since the tangent at is parallel to the -axis: The same issue occurs if, instead of the starting point, any iteration point is stationary. Even if the derivative is small but not zero, the next iteration will be a far worse approximation. Starting point enters a cycle For some functions, some starting points may enter an infinite cycle, preventing convergence. Let and take 0 as the starting point. The first iteration produces 1 and the second iteration returns to 0, so the sequence will alternate between the two without converging to a root. In fact, this 2-cycle is stable: there are neighborhoods around 0 and around 1 from which all points iterate asymptotically to the 2-cycle (and hence not to the root of the function). In general, the behavior of the sequence can be very complex (see Newton fractal). The real solution of this equation is …. Derivative issues If the function is not continuously differentiable in a neighborhood of the root then it is possible that Newton's method will always diverge and fail, unless the solution is guessed on the first try. Derivative does not exist at root A simple example of a function where Newton's method diverges is trying to find the cube root of zero.
The cube root is continuous and infinitely differentiable, except for , where its derivative is undefined: For any iteration point , the next iteration point will be: The algorithm overshoots the solution and lands on the other side of the -axis, farther away than it initially was; applying Newton's method actually doubles the distance from the solution at each iteration. In fact, the iterations diverge to infinity for every , where . In the limiting case of (square root), the iterations will alternate indefinitely between points and , so they do not converge in this case either. Discontinuous derivative If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function Its derivative is: Within any neighborhood of the root, this derivative keeps changing sign as approaches 0 from the right (or from the left) while for . So is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though: the function is differentiable (and thus continuous) everywhere; the derivative at the root is nonzero; is infinitely differentiable except at the root; and the derivative is bounded in a neighborhood of the root (unlike ). Non-quadratic convergence In some cases the iterates converge but do not converge as quickly as promised. In these cases simpler methods converge just as quickly as Newton's method. Zero derivative If the first derivative is zero at the root, then convergence will not be quadratic. Let then and consequently So convergence is not quadratic, even though the function is infinitely differentiable everywhere. Similar problems occur even when the root is only "nearly" double. For example, let Then the first few iterations starting at are = 1, = …, = …, = …, = …, = …, = …, = …; it takes six iterations to reach a point where the convergence appears to be quadratic.
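The stable 2-cycle described earlier (for f(x) = x³ − 2x + 2 started at 0) is easy to reproduce numerically. A small sketch in Python (used here for illustration; helper names are assumptions, not from the article):

```python
def newton_iterates(f, fprime, x, steps):
    """Return the list of Newton iterates starting from x."""
    xs = [x]
    for _ in range(steps):
        x = x - f(x) / fprime(x)  # standard Newton step
        xs.append(x)
    return xs

f = lambda x: x**3 - 2*x + 2
fprime = lambda x: 3*x**2 - 2

# Starting at 0 the iterates alternate 0, 1, 0, 1, ... forever:
cycle = newton_iterates(f, fprime, 0.0, 4)
print(cycle)  # [0.0, 1.0, 0.0, 1.0, 0.0]

# A starting point near the real root converges rapidly instead:
root = newton_iterates(f, fprime, -2.0, 8)[-1]
print(root, f(root))  # root near -1.7693, residual near zero
```

The two behaviors from the same function illustrate how sensitive Newton's method is to the choice of starting point.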
No second derivative If there is no second derivative at the root, then convergence may fail to be quadratic. Let Then And except when where it is undefined. Given , which has approximately times as many bits of precision as has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though: the function is continuously differentiable everywhere; the derivative is not zero at the root; and is infinitely differentiable except at the desired root. Generalizations Complex functions When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction are fractals. In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example, if one uses a real initial condition to seek a root of , all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length. Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. 
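The basins of attraction mentioned above can be probed directly. The following sketch in Python (for illustration; function names and sample starting points are assumptions, not from the article) classifies which cube root of unity Newton's method for f(z) = z³ − 1 converges to from a given start:

```python
import cmath

# The three cube roots of unity, i.e. the zeros of f(z) = z^3 - 1.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, iterations=60):
    """Iterate Newton's method for f(z) = z^3 - 1 from z and return
    the root reached, or None (e.g. if an iterate hits the origin,
    where the derivative vanishes, or no root is reached)."""
    for _ in range(iterations):
        if z == 0:
            return None  # f'(0) = 0: the Newton step is undefined
        z = z - (z**3 - 1) / (3 * z**2)
    for r in ROOTS:
        if abs(z - r) < 1e-9:
            return r
    return None

# A positive real start stays real and converges to the real root 1;
# complex starts may fall into the basin of either complex root, and
# the boundaries between the three basins are fractal.
print(newton_basin(2.0))
print(newton_basin(-0.5 + 0.5j))
```

Evaluating this classifier over a grid of starting points and coloring each point by the returned root is exactly how the Newton-fractal images referenced in the text are produced.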
directed by Mike Christie. Entitled Education Entertainment Recreation (Live at Alexandra Palace), it was released on 7 May. Other projects In 1988, Bernard Sumner teamed up with former Smiths guitarist Johnny Marr to form the group Electronic, also enlisting the help of Neil Tennant and Chris Lowe of the Pet Shop Boys. Electronic regrouped in 1996 for Raise the Pressure, which also featured Karl Bartos (formerly of Kraftwerk). The project's third album Twisted Tenderness was released in 1999 after which the band dissolved. In June 2009, Sumner formed a new band called Bad Lieutenant with Phil Cunningham (guitar) and Jake Evans (guitar and vocals). Their album Never Cry Another Tear was released on 5 October 2009. In addition to Cunningham and Evans the album also features appearances by Stephen Morris (drums), Jack Mitchell (drums), Tom Chapman (bass) and Alex James (bass). The live band included Morris on drums and Tom Chapman on bass. Peter Hook has been involved with several other projects. In the 1990s, Hook recorded with Killing Joke with a view to joining the band. However, original bassist Martin 'Youth' Glover instead returned to the band. In 1995 he toured with the Durutti Column. He has recorded one album with the band Revenge with Davyth Hicks and Chris Jones and two with Monaco (both as bassist, keyboardist and lead vocalist) with David Potts. Monaco scored a club and alternative radio hit with "What Do You Want From Me?" in 1997. Hook also formed a band called Freebass with fellow bass players Mani (the Stone Roses) and Andy Rourke (the Smiths) and vocalist Gary Briggs, which was active from 2007 to 2010. He also contributed to Perry Farrell's Satellite Party. Hook's current band Peter Hook and the Light is touring Joy Division and New Order albums in their entirety. In 1990 Gillian Gilbert and Stephen Morris formed their own band, The Other Two. 
The Other Two released its first single "Tasty Fish" in 1991 and released two albums, The Other Two & You in 1993 and Super Highways in 1999. They have also been involved in scoring television soundtracks, like Making Out. In 2007, Gilbert and Morris remixed two tracks for the Nine Inch Nails remixes album Year Zero Remixed. BeMusic "BeMusic" was a name the band used for their publishing company (the LP label for Movement says "B Music" in large letters, though using an italic ß for the letter B). All four members of the band used the name for production work for other artists' recordings between 1982 and 1985. The first BeMusic credit was for Peter Hook producing Stockholm Monsters in 1982. Other artists with producer or musician credit for "BeMusic" were 52nd Street, Section 25, Marcel King, Quando Quango, Paul Haig, Thick Pigeon, Nyam Nyam and Life. Their production work as BeMusic was collected on two LTM Recordings compilation CDs, Cool As Ice: The BeMusic Productions and Twice As Nice (which also included production work by Donald Johnson, of A Certain Ratio, and Arthur Baker). Influences, style and legacy New Order's music mixes rock with dance music, as can be seen on signature tracks such as 1982's "Temptation", 1983's "Blue Monday" and 1987's "True Faith". Founding member Hook stated that the band's shift from playing cold dark tracks from 1981 to producing electro/rock tracks from 1982 was inspired by the music of German electronic group Kraftwerk, US rock band Sparks who had produced disco/electro-rock music with producer Giorgio Moroder on their No. 1 in Heaven album, and also the Moroder/Donna Summer collaboration on "I Feel Love". Along with Kraftwerk, the English bands Cabaret Voltaire, the Human League and Orchestral Manoeuvres in the Dark (OMD) educated singer Bernard Sumner that one "could make music without guitars". 
New Order's collaboration with New York DJ Arthur Baker was inspired by the sound of records by Grandmaster Flash and the Furious Five and Afrika Bambaataa & the Soulsonic Force. According to a staff-written Allmusic history, the band are regarded as "the first alternative dance" group, having "used icy, gloomy post-punk with Kraftwerk-style synth-pop"; they have also been labeled as synth-pop, post-punk, new wave, dance-rock and electronica. They have heavily influenced techno, rock, and pop musicians including Moby, and were themselves influenced by the likes of David Bowie and Neu!. They have also significantly influenced electro, freestyle and house. New Order's Kraftwerk influence was acknowledged by their single "Krafty", which had cover art referencing "Autobahn". Drummer Stephen Morris plays a mixture of acoustic and electronic drums, and in many cases plays along seamlessly with sequenced parts. All the band members could and did switch instruments throughout gigs, as evidenced on Jonathan Demme's video for "The Perfect Kiss" and the concert videos Taras Shevchenko (recorded in New York, November 1981) and Pumped Full of Drugs (Tokyo, May 1985). During such live gigs, Sumner alternated between guitar, keyboards, melodica and (on the track "Confusion") bass; Gilbert switched between keyboards and guitar, Morris between drums and keyboards, and Hook played both bass and electronic drums. Taras Shevchenko is also notable for the fact that all four members of the group leave the stage before the final song, "Temptation", comes to a complete end. Reputation Both New Order and Joy Division were among the most successful artists on the Factory Records label, run by Granada television personality Tony Wilson, and partnered with Factory in the financing of the Manchester club The Haçienda. 
Speaking in 2009, fellow synthpop musician Phil Oakey described New Order's slow-burn career as cult musicians as being unusually prolonged and effective: "If you want to make a lot of money out of pop, be number 3 a lot. Like New Order did." Cover artwork Almost all New Order recordings bear minimalist packaging, art directed by Peter Saville. The group's record sleeves bucked the 1980s trend by rarely showing the band members (with the exception of the Low-Life album) or even providing basic information such as the band name or title of the release. Song names were often hidden within the shrink-wrapped package, either on the disc itself (such as the "Blue Monday" single), on an inconspicuous part of an inner sleeve ("The Perfect Kiss" single), or written in a cryptic colour code invented by Saville (Power, Corruption & Lies). Saville said his intention was to sell the band as a "mass-produced secret" of sorts, and that the minimalist style was enough to allow fans to identify the band's products without explicit labelling. Saville frequently sent the artwork straight to the printer, unreviewed by either the band or the label. Awards and nominations
{| class=wikitable
|-
! Year !! Awards !! Work !! Category !! Result
|-
| rowspan="3" | 1983 || rowspan="3" | NME Awards || Power, Corruption & Lies || Best Dressed Sleeve ||
|-
| "Blue Monday" || Best Single ||
|-
| Themselves || Best Group ||
|-
| rowspan=2|1988 || Brit Awards || "True Faith" || Best British Video ||
|-
| rowspan=2|Pollstar Concert Industry Awards || rowspan=2|Themselves || rowspan=2|Most Creative Stage Production ||
|-
| 1990 ||
|-
| 1991 || Ivor Novello Awards || "World in Motion" || Best Selling A Side ||
|-
| rowspan=3|1993 || Mercury Prize || Republic || Album of the Year ||
|-
| rowspan=2|Billboard Music Awards || Themselves || Top Modern Rock Tracks Artist ||
|-
| "Regret" || Top Modern Rock Track ||
|-
| 1994 || D&AD Awards || "World (The Price of Love)" || Pop Promo Video || style="background:#BF8040"| Wood Pencil
|-
| 1999 || rowspan=1|Q Awards || Themselves || Q Inspiration Award ||
|-
| 2000 || ASCAP Pop Music Awards || "Blue Monday" || Most Performed Song ||
|-
| rowspan=2|2001 || Q Awards || "Crystal" || Best Single ||
|-
| Žebřík Music Awards || rowspan=2|Themselves || Best International Surprise ||
|-
| 2005 || NME Awards || Godlike Genius Award ||
|-
| rowspan="2" | 2006 || Grammy Awards || "Guilt is a Useless Emotion" || Best Dance Recording ||
|-
| MTV VMAJ || "Krafty" || Best Dance Video ||
|-
| rowspan=2|2012 || UK Festival Awards || rowspan="3" | Themselves || Headliner of the Year ||
|-
| Artrocker Awards || Legend Award ||
|-
| rowspan="3" | 2015 || rowspan="2" | Q Awards || Q Outstanding Contribution To Music ||
|-
| "Restless" || Best Track ||
|-
| Best Art Vinyl || Music Complete || Best Art Vinyl ||
|-
| 2016 || International Dance Music Awards || "Plastic" || Best Alternative/Rock Dance Track ||
|-
| 2019 || Silver Clef Awards || Bernard Sumner || Outstanding Achievement Award ||
|}
Band members Current Bernard Sumner – lead vocals, guitars, keyboards, programming, melodica (1980–1993, 1998–2007, 2011–present) Stephen Morris – drums, percussion, keyboards, programming (1980–1993, 1998–2007, 2011–present) Gillian Gilbert – keyboards, guitars, programming 
(1980–1993, 1998–2001, 2011–present) Phil Cunningham – guitars, keyboards, electronic percussion (2001–2007, 2011–present) Tom Chapman – bass, keyboards (2011–present) Former Peter Hook – bass, electronic percussion, vocals, keyboards, programming (1980–1993, 1998–2007) Timeline Discography Movement (1981) Power, Corruption & Lies (1983) Low-Life (1985) Brotherhood (1986) Technique (1989) Republic (1993) Get Ready (2001) Waiting for the Sirens' Call (2005) Lost Sirens (2013) Music Complete (2015) References Further reading Hickey, Dec. From Heaven to Heaven. New Order Live. The Early Years (1981-1984) at Close Quarters. London: Dec Hickey, 2012. Edge, Brian. New Order + Joy Division: Pleasures and Wayward Distractions. London: Omnibus Press, 1988. Flowers, Claude. New Order + Joy Division: Dreams Never End. London: Omnibus Press, 1995. Johnson, Mark. An Ideal For Living: An History Of Joy Division. London: Bobcat Books, 1984. Middles, Mick. From Joy Division to New Order: The Factory Story.
suicide of lead singer Ian Curtis; they were joined by Gillian Gilbert on keyboards later that year. New Order's integration of post-punk with electronic and dance music made them one of the most acclaimed and influential bands of the 1980s. They were the flagship band for Manchester-based independent record label Factory Records and its nightclub The Haçienda, and worked in long-term collaboration with graphic designer Peter Saville. While the band's early years were overshadowed by the legacy of Joy Division, their experience of the early 1980s New York club scene saw them increasingly incorporate dance rhythms and electronic instrumentation into their work. Their 1983 hit "Blue Monday" became the best-selling 12-inch single of all time and a popular club track. In the 1980s, they released successful albums such as Power, Corruption & Lies (1983), Technique (1989), and the singles compilation Substance (1987). They disbanded in 1993 to work on individual projects before reuniting in 1998. In the years since, New Order has gone through various hiatuses and personnel changes, most prominently the departure of Hook in 2007. They released their tenth studio album, Music Complete, in 2015. History Origins and formation: 1977–1980 Between 1977 and 1980, Ian Curtis, Peter Hook, Stephen Morris, and Bernard Sumner were members of the post-punk band Joy Division, often featuring heavy production input from producer Martin Hannett. Curtis took his own life on 18 May 1980, the day before Joy Division were scheduled to depart for their first American tour, and prior to the release of the band's second album, Closer. The rest of the band decided soon after Curtis's death that they would carry on. Prior to his death, the members of Joy Division had agreed not to continue under the Joy Division name should any one member leave. On 29 July 1980, the still unnamed trio debuted live at Manchester's Beach Club. 
Rob Gretton, the band's manager for over 20 years, is credited for having found the name New Order in an article in The Guardian titled "The People's New Order of Kampuchea". The band adopted this name, despite its previous use for former Stooge Ron Asheton's band The New Order. The group states that the name New Order (as was also the case with "Joy Division") does not draw a direct line to National Socialism or Fascism. The band rehearsed with each member taking turns on vocals. Sumner ultimately took the role, as he could sing when he wasn't playing his guitar. They wanted to complete the line-up with someone they knew well and whose musical skill and style was compatible with their own. Gretton suggested Morris's girlfriend Gillian Gilbert, and she was invited to join the band in early October 1980, as keyboardist and guitarist. Her first live performance with the band occurred at The Squat in Manchester on 25 October 1980. Movement: 1981–1982 The initial release as New Order was the single "Ceremony", backed with "In a Lonely Place". These two songs were written in the weeks before Curtis took his own life. With the release of Movement in November 1981, New Order initially started on a similar route as their previous incarnation, performing dark, melodic songs, albeit with an increased use of synthesisers. The band viewed the period as a low point, as they were still reeling from Curtis' death. Hook commented that the only positive thing to come out of the Movement sessions was that producer Martin Hannett had showed the band how to use a mixing board, which allowed them to produce records by themselves from then on. More recently, Hook indicated a change of heart: "I think Movement gets a raw deal in general really – for me, when you consider the circumstances in which it was written, it is a fantastic record." New Order visited New York City again in 1981, where the band were introduced to post-disco, freestyle and electro. 
The band had taken to listening to Italian disco to cheer themselves up, while Morris taught himself drum programming. The singles that followed, "Everything's Gone Green" and "Temptation", saw a change in direction toward dance music. The Haçienda, Factory Records' own nightclub (largely funded by New Order), opened in May 1982 in Manchester and was even issued a Factory catalogue number: FAC51. The opening of the UK's first-ever superclub was marked by a nearly 23-minute instrumental piece originally entitled "Prime 5 8 6", but released 15 years later as "Video 5 8 6". Composed primarily by Sumner and Morris, "Prime 5 8 6"/"Video 5 8 6" was an early version of "5 8 6" that contained rhythm elements that would later surface on "Blue Monday" and "Ultraviolence". Power, Corruption & Lies: 1983–1984 Power, Corruption & Lies, released in May 1983, was a synthesiser-based outing and a dramatic change in sound from Joy Division and the preceding album, although the band had been hinting at the increased use of technology for a number of years, including during their work as Joy Division. Building on what earlier singles had hinted at, this was where the band found their footing, mixing early techno music with their earlier guitar-based sound and showing the strong influence of acts like Kraftwerk and Giorgio Moroder. Even further in this direction was the electronically sequenced, four-on-the-floor single "Blue Monday". Inspired by Klein + M.B.O.'s "Dirty Talk" and Sylvester's disco classic "You Make Me Feel (Mighty Real)", "Blue Monday" became the best-selling independent 12" single of all time in the UK; however, much to the chagrin of the buying public, it was not on the track list of Power, Corruption & Lies. The song was, however, included on the cassette format in some countries, such as Australia and New Zealand, and on the original North American CD release of the album, alongside its B-side, "The Beach". 
"Blue Monday" was also included on the 2008 collector's edition of Power, Corruption & Lies. The 1983 single "Confusion" firmly established the group as a dance music force, inspiring many musicians in subsequent years. In 1984 they followed the largely synthesised single "Thieves Like Us" with the heavy guitar-drum-bass rumble of "Murder", a not-too-distant cousin of "Ecstasy" from the Power, Corruption & Lies album. KROQ Los Angeles DJ Jed the Fish claims New Order had more to do with the emergence of house music than the Warehouse music of Chicago and "Frankie Knuckles and the whole so-called House music scene. Unless you were actually from regional Chicago, had you ever heard of House music until New Order? Be real, now." Low-Life, Brotherhood, and Substance: 1985–1987 1985's Low-Life refined and sometimes mixed the two styles, guitar-based and electronic, and included "The Perfect Kiss"—the video for which was filmed by Jonathan Demme—and "Sub-culture". In February 1986, the soundtrack album to Pretty in Pink featuring "Shellshock" was released on A&M Records. An instrumental version of "Thieves Like Us" and the instrumental "Elegia" appeared in the film but were not on the soundtrack album. Later that summer, New Order headlined a line-up that included the Smiths, the Fall, and A Certain Ratio during the Festival of the Tenth Summer at Manchester's G-Mex. Brotherhood (1986) divided the two approaches onto separate album sides. The album notably featured "Bizarre Love Triangle" (a Top 20 hit in Australia and New Zealand) and "Angel Dust" (of which a remixed instrumental version is available on the UK "True Faith" CD video single, under the title "Evil Dust"), a track which marries a synth break beat with Low-Life-era guitar effects. While New Order toured North America with friends Echo & the Bunnymen, the summer of 1987 saw the release of the compilation Substance, which featured the new single "True Faith". 
Substance was an important album in collecting the group's 12-inch singles onto CD for the first time and featured new versions of "Temptation" and "Confusion"—referred to as "Temptation '87" and "Confusion '87". A second disc featured several of the B-sides from the singles on the first disc, as well as additional A-sides "Procession" and "Murder". The single "True Faith", with its surreal video, became a hit on MTV and the band's first American top 40 hit. The single's B-side, "1963"—originally planned as the A-side until the group's label convinced them to release "True Faith" instead—would be released as a single in its own right several years later, with two new versions. In December 1987, the band released a further single, "Touched by the Hand of God", with a Kathryn Bigelow-directed video parodying glam-metal. The song was one of four new tracks recorded for the American comedy film Salvation!, and reached number 20 on the UK Singles Chart and number 1 in the UK Independent Singles chart. However, it would not appear on an album until the 1994 compilation The Best of New Order. Technique, Republic and first break-up: 1988–1993 By this time, the group was heavily influenced by the Balearic sounds of Ibiza, which were making their way into the Haçienda. Partly recorded at Mediterranean Sound studios on Ibiza, Technique was released in February 1989. The album entered the charts at number one in the UK and contained a mix of acid house influence (as on opening track "Fine Time") and a more traditional rock sound (as on the single "Run 2"). The album is a blend of upbeat, accessible music coupled with blunt, poignant lyrics. During the summer of 1989, New Order supported Technique by touring with Public Image Ltd, Throwing Muses and the Sugarcubes across the United States and Canada in what the press dubbed the "Monsters of Alternative Rock" tour. 
Around this time, band members also began side projects including Electronic (Sumner with Johnny Marr) and Revenge (Hook with Davyth Hicks). Morris and Gilbert began to work together on outside TV theme production work. In 1991, the band were sued by the publishing company of American singer John Denver, who alleged that the guitar break in "Run 2" was similar to his song "Leaving on a Jet Plane". The case was settled out of court and the song has since been credited to both New Order and John Denver. In 1990, New Order recorded the official song of the England national football team's 1990 World Cup campaign, "World in Motion", under the ad hoc band name EnglandNewOrder. The song, co-written with comedian Keith Allen, was the band's sole number one UK hit. The song was originally planned to be titled "E for England"; however, the Football Association vetoed the title upon realising that this was a reference to ecstasy, a drug heavily associated with the Haçienda. (Allen claimed that his original draft lyrics included "E is for England, England starts with E / We'll all be smiling when we're in Italy.") The song also featured chanting from members of the England team and Allen, and a guest rap from England player John Barnes. It was again produced by Stephen Hague, whom the band chose to produce their next album. The band's next album Republic was shadowed by the collapse of their longtime label Factory Records. The label had been ailing due to financial difficulties, and was forced to declare bankruptcy in 1992. New Order never had a formal contract with Factory. Although unusual for a major group, this was Factory's standard practice until the mid-1980s. Because of this, the band, rather than Factory Records, legally owned all of their recordings. This has been cited by Wilson himself as the main reason London Records' 1992 offer to buy the ailing label fell through. 
Following Factory's collapse, New Order signed with London, as did Morris and Gilbert separately for their side project The Other Two, whose debut album was originally intended for release on Factory. Republic, released around the world in 1993, spawned the singles "Regret"—New Order's highest-charting single in the US—"Ruined in a Day", "World", and "Spooky". Following the release and promotion of Republic, the band put New Order on hold while focusing on side projects; with The Other Two's debut album released in 1993. In 1994, a second singles collection was released, entitled The Best of New Order. It featured all of the band's singles since Substance as well as a few extra tracks: "Vanishing Point" (from 1989's Technique), "The Perfect Kiss", "Thieves Like Us", "Shellshock", and remixes of "True Faith", "Bizarre Love
and Euclid, and an acclaimed compilation of mathematics. Tartaglia was the first to apply mathematics to the investigation of the paths of cannonballs, known as ballistics, in his Nova Scientia (A New Science, 1537); his work was later partially validated and partially superseded by Galileo's studies on falling bodies. He also published a treatise on retrieving sunken ships. Personal life Niccolò Fontana was born in Brescia, the son of Michele Fontana, a dispatch rider who travelled to neighbouring towns to deliver mail. In 1506, Michele was murdered by robbers, and Niccolò, his two siblings, and his mother were left impoverished. Niccolò experienced further tragedy in 1512 when King Louis XII's troops invaded Brescia during the War of the League of Cambrai against Venice. The militia of Brescia defended their city for seven days. When the French finally broke through, they took their revenge by massacring the inhabitants of Brescia. By the end of the battle, over 45,000 residents were killed. During the massacre, Niccolò and his family sought sanctuary in the local cathedral. But the French entered and a soldier sliced Niccolò's jaw and palate with a saber and left him for dead. His mother nursed him back to health, but the young boy was left with a speech impediment, prompting the nickname "Tartaglia" ("stammerer"). After this he would never shave, and grew a beard to camouflage his scars. Tartaglia's biographer Arnoldo Masotti writes that: Tartaglia moved to Verona around 1517, then to Venice in 1534, a major European commercial hub and one of the great centres of the Italian renaissance at this time. 
Also relevant is Venice's place at the forefront of European printing culture in the sixteenth century, making early printed texts available even to poor scholars if sufficiently motivated or well-connected — Tartaglia knew of Archimedes' work on the quadrature of the parabola, for example, from Guarico's Latin edition of 1503, which he had found "in the hands of a sausage-seller in Verona in 1531" (in mano di un salzizaro in Verona, l'anno 1531 in his words). Tartaglia eked out a living teaching practical mathematics in abacus schools and earned a penny where he could: He died in Venice. Ballistics Nova Scientia (1537) was Tartaglia's first published work, described by Matteo Valleriani as: Then dominant Aristotelian physics preferred categories like "heavy" and "natural" and "violent" to describe motion, generally eschewing mathematical explanations. Tartaglia brought mathematical models to the fore, "eviscerat[ing] Aristotelian terms of projectile movement" in the words of Mary J. Henninger-Voss. One of his findings was that the maximum range of a projectile was achieved by directing the cannon at a 45° angle to the horizon. Tartaglia's model for a cannonball's flight was that it proceeded from the cannon in a straight line, then after a while started to arc towards the earth along a circular path, then finally dropped in another straight line directly towards the earth. At the end of Book 2 of Nova Scientia, Tartaglia proposes to find the length of that initial rectilinear path for a projectile fired at an elevation of 45°, engaging in a Euclidean-style argument, but one with numbers attached to line segments and areas, and eventually proceeds algebraically to find the desired quantity (procederemo per algebra in his words). Mary J. Henninger-Voss notes that "Tartaglia's work on military science had an enormous circulation throughout Europe", being a reference for common gunners into the eighteenth century, sometimes through unattributed translations. 
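Tartaglia's 45° finding can be checked against the later, idealised point-mass model (a sketch for illustration only, not Tartaglia's own three-segment trajectory): neglecting air resistance, a projectile launched at speed v and elevation θ over level ground travels a range R = v²·sin(2θ)/g, which is maximised when 2θ = 90°, i.e. θ = 45°. A short numerical check in Python:

```python
import math

def level_range(v, theta_deg, g=9.81):
    """Range of an ideal drag-free projectile over level ground: v^2*sin(2*theta)/g."""
    return v * v * math.sin(2 * math.radians(theta_deg)) / g

# Scanning whole-degree launch angles shows the maximum range at 45 degrees,
# the elevation Tartaglia identified (the speed 100 m/s is an arbitrary choice).
best_angle = max(range(1, 90), key=lambda a: level_range(100.0, a))
print(best_angle)  # 45
```

Since sin(2θ) peaks at 2θ = 90°, the scan confirms the analytic result for any launch speed.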
He influenced Galileo as well, who owned "richly annotated" copies of his works on ballistics as he set about solving the projectile problem once and for all. Translations Archimedes' works began to be studied outside the universities in Tartaglia's
day as exemplary of the notion that mathematics is the key to understanding physics, Federigo Commandino reflecting this notion when saying in 1558 that "with respect to geometry no one of sound mind could deny that Archimedes was some god". Tartaglia published a 71-page Latin edition of Archimedes in 1543, Opera Archimedis Syracusani philosophi et mathematici ingeniosissimi, containing Archimedes' works on the parabola, the circle, centres of gravity, and floating bodies. 
Guarico had published Latin editions of the first two in 1503, but the works on centres of gravity and floating bodies had not been published before. Tartaglia published Italian versions of some Archimedean texts later in life, his executor continuing to publish his translations after his death. Galileo probably learned of Archimedes' work through these widely disseminated editions. Tartaglia's Italian edition of Euclid in 1543, Euclide Megarense philosopho, was especially significant as the first translation of the Elements into any modern European language. For two centuries Euclid had been taught from two Latin translations taken from an Arabic source; these contained errors in Book V, the Eudoxian theory of proportion, which rendered it unusable. Tartaglia's edition was based on Zamberti's Latin translation of an uncorrupted Greek text, and rendered Book V correctly. He also wrote the first modern and useful commentary on the theory. This work went through many editions in the sixteenth century and helped diffuse knowledge of mathematics to a non-academic but increasingly well-informed literate and numerate public in Italy. The theory became an essential tool for Galileo, as it had been for Archimedes. General Trattato di Numeri et Misure Tartaglia exemplified and eventually transcended the abacco tradition that had flourished in Italy since the twelfth century, a tradition of concrete commercial mathematics taught at abacus schools maintained by communities of merchants. Maestros d'abaco like Tartaglia taught not with the abacus but with paper-and-pen, inculcating algorithms of the type found in grade schools today. 
Tartaglia's masterpiece was the General Trattato di Numeri et Misure (General Treatise on Number and Measure), a 1500-page encyclopedia in six parts written in the Venetian dialect, the first three coming out in 1556 about the time of Tartaglia's death and the last three published posthumously by his literary executor and publisher Curtio Troiano in 1560. David Eugene Smith wrote of the General Trattato that it was: Part I is 554 pages long and constitutes essentially commercial arithmetic, taking up such topics
not mean that they are not experienced and, therefore, non-existent; only that they are devoid of a permanent and eternal substance (svabhava) because, like a dream, they are mere projections of human consciousness. Since these imaginary fictions are experienced, they are not mere names (prajnapti)." Major attributed works According to David Seyfort Ruegg, the Madhyamakasastrastuti attributed to Candrakirti (c. 600 – c. 650) refers to eight texts by Nagarjuna: the (Madhyamaka)karikas, the Yuktisastika, the Sunyatasaptati, the Vigrahavyavartani, the Vidala (i.e. Vaidalyasutra/Vaidalyaprakarana), the Ratnavali, the Sutrasamuccaya, and Samstutis (Hymns). This list covers not only much less than the grand total of works ascribed to Nagarjuna in the Chinese and Tibetan collections, but it does not even include all such works that Candrakirti has himself cited in his writings. According to one view, that of Christian Lindtner, the works definitely written by Nāgārjuna are: Mūlamadhyamaka-kārikā (Fundamental Verses of the Middle Way), available in three Sanskrit manuscripts and numerous translations. Śūnyatāsaptati (Seventy Verses on Emptiness), accompanied by a prose commentary ascribed to Nagarjuna himself. Vigrahavyāvartanī (The End of Disputes). Vaidalyaprakaraṇa (Pulverizing the Categories), a prose work critiquing the categories used by Indian Nyaya philosophy. Vyavahārasiddhi (Proof of Convention). Yuktiṣaṣṭikā (Sixty Verses on Reasoning). Catuḥstava (Four Hymns): Lokātīta-stava (Hymn to transcendence), Niraupamya-stava (to the Peerless), Acintya-stava (to the Inconceivable), and Paramārtha-stava (to Ultimate Truth). Ratnāvalī (Precious Garland), subtitled rajaparikatha, a discourse addressed to an Indian king (possibly a Satavahana monarch). Pratītyasamutpādahṛdaya (Verses on the heart of Dependent Arising), along with a short commentary (Vyākhyāna). Sūtrasamuccaya, an anthology of various sutra passages. 
(Exposition of the awakening mind) (Letter to a Good Friend) (Requisites of awakening), a work the path of the Bodhisattva and paramitas, it is quoted by Candrakirti in his commentary on Aryadeva's four hundred. Now only extant in Chinese translation (Taisho 1660). The Tibetan historian Buston considers the first six to be the main treatises of Nāgārjuna (this is called the "yukti corpus", rigs chogs), while according to Tāranātha only the first five are the works of Nāgārjuna. TRV Murti considers Ratnāvalī, Pratītyasamutpādahṝdaya and Sūtrasamuccaya to be works of Nāgārjuna as the first two are quoted profusely by Chandrakirti and the third by Shantideva. Other attributed works In addition to works mentioned above, numerous other works are attributed to Nāgārjuna, many of which are dubious attributions and later works. There is an ongoing, lively controversy over which of those works are authentic. Christian Lindtner divides the various attributed works as "1) correctly attributed, 2) wrongly attributed to him, and 3) those which may or may not be genuine." Lindtner further divides the third category of dubious or questionable texts into those which are "perhaps authentic" and those who are unlikely to be authentic. Those which he sees as perhaps being authentic include: Mahāyānavimsika, it is cited as Nagarjuna's work in the Tattvasamgraha as well as by Atisha, Lindtner sees the style and content as compatible with the yukti corpus. Survives in Sanskrit. Bodhicittotpādavidhi, a short text that describes the sevenfold write for a bodhisattva, Dvadasakāranayastotra, a madhyamaka text only extant in Tibetan, (Madhyamaka-)Bhavasamkrānti, a verse from this is attributed to Nagarjuna by Bhavaviveka. 
Nirālamba-stava, Sālistambakārikā, only exists in Tibetan, it is a versification of the Śālistamba Sūtra Stutytitastava, only exists in Tibetan Danaparikatha, only exists in Tibetan, a praise of giving (dana) Cittavajrastava, Mulasarvāstivadisrāmanerakārikā, 50 karikas on the Vinaya of the Mulasarvastivadins Dasabhumtkavibhāsā, only exists in Chinese, a commentary on the Dashabhumikasutra Lokapariksā, Yogasataka, a medical text Prajñadanda Rasavaisesikasutra, a rasayana (biochemical) text Bhāvanākrama, contains various verses similar to the Lankavatara, it is cited in the Tattvasamgraha as by Nagarjuna Ruegg notes various works of uncertain authorship which have been attributed to Nagarjuna, including the Dharmadhatustava (Hymn to the Dharmadhatu, which shows later influences), Mahayanavimsika, Salistambakarikas, the Bhavasamkranti, and the Dasabhumtkavibhāsā. Furthermore, Ruegg writes that "three collections of stanzas on the virtues of intelligence and moral conduct ascribed to Nagarjuna are extant in Tibetan translation": Prajñasatakaprakarana, Nitisastra-Jantuposanabindu and Niti-sastra-Prajñadanda. 
Attributions which are likely to be false Meanwhile, those texts that Lindtner considers as questionable and likely inauthentic are: Aksarasataka, Akutobhaya (Mulamadhyamakavrtti), Aryabhattaraka-Manjusriparamarthastuti, Kayatrayastotra, Narakoddharastava, Niruttarastava, Vandanastava, Dharmasamgraha, Dharmadhatugarbhavivarana, Ekaslokasastra, Isvarakartrtvanirakrtih (A refutation of God/Isvara), Sattvaradhanastava, Upayahrdaya, Astadasasunyatasastra, Dharmadhatustava, Yogaratnamala.Meanwhile, Lindtner's list of outright wrong attributions is: Mahāprajñāpāramitopadeśa (Dà zhìdù lùn), Abudhabodhakaprakarana, Guhyasamajatantratika, Dvadasadvaraka, Prajñaparamitastotra, and Svabhavatrayapravesasiddhi.Notably, the Dà zhìdù lùn (Taisho 1509, "Commentary on the great prajñaparamita") which has been influential in Chinese Buddhism, has been questioned as a genuine work of Nāgārjuna by various scholars including Lamotte. This work is also only attested in a Chinese translation by Kumārajīva and is unknown in the Tibetan and Indian traditions. Other works are extant only in Chinese, one of these is the Shih-erh-men-lun or 'Twelve-topic treatise' (*Dvadasanikaya or *Dvadasamukha-sastra); one of the three basic treatises of the Sanlun school (East Asian Madhyamaka). Several works considered important in esoteric Buddhism are attributed to Nāgārjuna and his disciples by traditional historians like Tāranātha from 17th century Tibet. These historians try to account for chronological difficulties with various theories, such as seeing later writings as mystical revelations. For a useful summary of this tradition, see Wedemeyer 2007. Lindtner sees the author of some of these tantric works as being a tantric Nagarjuna who lives much later, sometimes called "Nagarjuna II". 
Philosophy Sunyata Nāgārjuna's major thematic focus is the concept of śūnyatā (translated into English as "emptiness") which brings together other key Buddhist doctrines, particularly anātman "not-self" and pratītyasamutpāda "dependent origination", to refute the metaphysics of some of his contemporaries. For Nāgārjuna, as for the Buddha in the early texts, it is not merely sentient beings that are "selfless" or non-substantial; all phenomena (dhammas) are without any svabhāva, literally "own-being", "self-nature", or "inherent existence" and thus without any underlying essence. They are empty of being independently existent; thus the heterodox theories of svabhāva circulating at the time were refuted on the basis of the doctrines of early Buddhism. This is so because all things arise always dependently: not by their own power, but by depending on conditions leading to their coming into existence, as opposed to being. Nāgārjuna means by real any entity which has a nature of its own (svabhāva), which is not produced by causes (akrtaka), which is not dependent on anything else (paratra nirapeksha). Chapter 24 verse 14 of the Mūlamadhyamakakārikā provides one of Nāgārjuna's most famous quotations on emptiness and co-arising: As part of his analysis of the emptiness of phenomena in the Mūlamadhyamakakārikā, Nāgārjuna critiques svabhāva in several different concepts. He discusses the problems of positing any sort of inherent essence to causation, movement, change and personal identity. Nāgārjuna makes use of the Indian logical tool of the tetralemma to attack any essentialist conceptions. 
Nāgārjuna's logical analysis is based on four basic propositions:
All things (dharma) exist: affirmation of being, negation of non-being
All things (dharma) do not exist: affirmation of non-being, negation of being
All things (dharma) both exist and do not exist: both affirmation and negation
All things (dharma) neither exist nor do not exist: neither affirmation nor negation
To say that all things are 'empty' is to deny any kind of ontological foundation; therefore Nāgārjuna's view is often seen as a kind of ontological anti-foundationalism or a metaphysical anti-realism. Understanding the nature of the emptiness of phenomena is simply a means to an end, which is nirvana. Thus Nāgārjuna's philosophical project is ultimately a soteriological one, meant to correct our everyday cognitive processes which mistakenly posit svabhāva on the flow of experience. Some scholars such as Fyodor Shcherbatskoy and T.R.V. Murti held that Nāgārjuna was the inventor of the Shunyata doctrine; however, more recent work by scholars such as Choong Mun-keat, Yin Shun and Dhammajothi Thero has argued that Nāgārjuna was not an innovator in putting forth this theory, but that, in the words of Shi Huifeng, "the connection between emptiness and dependent origination is not an innovation or creation of Nāgārjuna". Two truths Nāgārjuna was also instrumental in the development of the two truths doctrine, which claims that there are two levels of truth in Buddhist teaching, the ultimate truth (paramārtha satya) and the conventional or superficial truth (saṃvṛtisatya). The ultimate truth for Nāgārjuna is the truth that everything is empty of essence, including emptiness itself ('the emptiness of emptiness'). While some (Murti, 1955) have interpreted this by positing Nāgārjuna as a neo-Kantian and thus making ultimate
this happened and how Nāgārjuna retrieved the sutras. Some sources say he retrieved the sutras from the land of the nāgas. Indeed, Nāgārjuna is often depicted in composite form comprising human and nāga characteristics. Nāgas are snake-like supernatural beings of great magical power that feature in Hindu, Buddhist and Jain mythology. Nāgas are found throughout Indian religious culture, and typically signify an intelligent serpent or dragon responsible for the rains, lakes and other bodies of water. In Buddhism, the nāga is a synonym for a realised arhat, or for a wise person in general. Traditional sources also claim that Nāgārjuna practiced ayurvedic alchemy (rasāyana). Kumārajīva's biography, for example, has Nāgārjuna making an elixir of invisibility, and Bu-ston, Taranatha and Xuanzang all state that he could turn rocks into gold. Tibetan hagiographies also state that Nāgārjuna studied at Nālanda University. However, according to Walser, this university was not a strong monastic center until about 425. Also, as Walser notes, "Xuanzang and Yijing both spent considerable time at Nālanda and studied Nāgārjuna's texts there. It is strange that they would have spent so much time there and yet chose not to report any local tales of a man whose works played such an important part in the curriculum." Some sources (Bu-ston and the other Tibetan historians) claim that in his later years, Nāgārjuna lived on the mountain of Śrīparvata near the city that would later be called Nāgārjunakoṇḍa ("Hill of Nāgārjuna"). The ruins of Nāgārjunakoṇḍa are located in Guntur district, Andhra Pradesh. The Caitika and Bahuśrutīya nikāyas are known to have had monasteries in Nāgārjunakoṇḍa. The archaeological finds at Nāgārjunakoṇḍa have not resulted in any evidence that the site was associated with Nagarjuna. The name "Nāgārjunakoṇḍa" dates from the medieval period, and the 3rd-4th century inscriptions found at the site make it clear that it was known as "Vijayapuri" in the ancient period.
Other Nāgārjunas There are a multitude of texts attributed to "Nāgārjuna"; many of them date from much later periods. This has caused much confusion for the traditional Buddhist biographers and doxographers. Modern scholars are divided on how to classify these later texts and how many later writers called "Nāgārjuna" existed (the name remains popular today in Andhra Pradesh). Some scholars have posited that there was a separate ayurvedic writer called Nāgārjuna who wrote numerous treatises on rasayana. Also, there is a later Tantric Buddhist author by the same name who may have been a scholar at Nālandā University and wrote on Buddhist tantra. According to Donald S. Lopez Jr., this tantric author originally belonged to a Brahmin family from eastern India and later became Buddhist. There is also a Jain figure of the same name who was said to have traveled to the Himalayas. Walser thinks that it is possible that stories related to this figure influenced Buddhist legends as well. Works There exist a number of influential texts attributed to Nāgārjuna; however, as there are many pseudepigrapha attributed to him, lively controversy exists over which are his authentic works. Mūlamadhyamakakārikā The Mūlamadhyamakakārikā is Nāgārjuna's best-known work. It is "not only a grand commentary on the Buddha's discourse to Kaccayana, the only discourse cited by name, but also a detailed and careful analysis of most of the important discourses included in the Nikayas and the Agamas, especially those of the Atthakavagga of the Sutta-nipata." In the Mūlamadhyamakakārikā, "[A]ll experienced phenomena are empty (sunya). This did not mean that they are not experienced and, therefore, non-existent; only that they are devoid of a permanent and eternal substance (svabhava) because, like a dream, they are mere projections of human consciousness. Since these imaginary fictions are experienced, they are not mere names (prajnapti)."
Major attributed works According to David Seyfort Ruegg, the Madhyamakasastrastuti attributed to Candrakirti (c. 600 – c. 650) refers to eight texts by Nagarjuna: the (Madhyamaka)karikas, the Yuktisastika, the Sunyatasaptati, the Vigrahavyavartani, the Vidala (i.e. Vaidalyasutra/Vaidalyaprakarana), the Ratnavali, the Sutrasamuccaya, and Samstutis (Hymns). This list not only covers much less than the grand total of works ascribed to Nagarjuna in the Chinese and Tibetan collections, but it does not even include all such works that Candrakirti has himself cited in his writings. According to one view, that of Christian Lindtner, the works definitely written by Nāgārjuna are:
Mūlamadhyamaka-kārikā (Fundamental Verses of the Middle Way), available in three Sanskrit manuscripts and numerous translations.
Śūnyatāsaptati (Seventy Verses on Emptiness), accompanied by a prose commentary ascribed to Nagarjuna himself.
Vigrahavyāvartanī (The End of Disputes).
Vaidalyaprakaraṇa (Pulverizing the Categories), a prose work critiquing the categories used by Indian Nyaya philosophy.
Vyavahārasiddhi (Proof of Convention).
Yuktiṣāṣṭika (Sixty Verses on Reasoning).
Catuḥstava (Four Hymns): Lokātīta-stava (Hymn to Transcendence), Niraupamya-stava (to the Peerless), Acintya-stava (to the Inconceivable), and Paramārtha-stava (to Ultimate Truth).
Ratnāvalī (Precious Garland), subtitled rajaparikatha (discourse to a king), addressed to an Indian king (possibly a Satavahana monarch).
Pratītyasamutpādahṛdaya (Verses on the Heart of Dependent Arising), along with a short commentary (Vyākhyāna).
Sūtrasamuccaya, an anthology of various sutra passages.
Bodhicittavivaraṇa (Exposition of the Awakening Mind).
Suhṛllekha (Letter to a Good Friend).
Bodhisaṃbhāra (Requisites of Awakening), a work on the path of the Bodhisattva and the paramitas; it is quoted by Candrakirti in his commentary on Aryadeva's Four Hundred, and is now extant only in Chinese translation (Taisho 1660).
The Tibetan historian Bu-ston considers the first six to be the main treatises of Nāgārjuna (this is called the "yukti corpus", rigs chogs), while according to Tāranātha only the first five are the works of Nāgārjuna. TRV Murti considers Ratnāvalī, Pratītyasamutpādahṝdaya and Sūtrasamuccaya to be works of Nāgārjuna as the first two are quoted profusely by Chandrakirti and the third by Shantideva. Other attributed works In addition to the works mentioned above, numerous other works are attributed to Nāgārjuna, many of which are dubious attributions and later works. There is an ongoing, lively controversy over which of those works are authentic. Christian Lindtner divides the various attributed works into "1) correctly attributed, 2) wrongly attributed to him, and 3) those which may or may not be genuine." Lindtner further divides the third category of dubious or questionable texts into those which are "perhaps authentic" and those which are unlikely to be authentic. Those which he sees as perhaps being authentic include:
Mahāyānavimsika, cited as Nagarjuna's work in the Tattvasamgraha as well as by Atisha; Lindtner sees the style and content as compatible with the yukti corpus. Survives in Sanskrit.
Bodhicittotpādavidhi, a short text that describes the sevenfold rite for a bodhisattva.
Dvadasakāranayastotra, a madhyamaka text only extant in Tibetan.
(Madhyamaka-)Bhavasamkrānti, a verse from which is attributed to Nagarjuna by Bhavaviveka.
Nirālamba-stava.
Sālistambakārikā, a versification of the Śālistamba Sūtra; only exists in Tibetan.
Stutyatītastava, only exists in Tibetan.
Danaparikatha, a praise of giving (dana); only exists in Tibetan.
Cittavajrastava.
Mulasarvāstivadisrāmanerakārikā, 50 karikas on the Vinaya of the Mulasarvastivadins.
Dasabhumikavibhāsā, a commentary on the Dashabhumikasutra; only exists in Chinese.
Lokapariksā.
Yogasataka, a medical text.
Prajñadanda.
Rasavaisesikasutra, a rasayana (alchemical) text.
Bhāvanākrama, which contains various verses similar to the Lankavatara; it is cited in the Tattvasamgraha as by Nagarjuna.
Ruegg notes various works of uncertain authorship which have been attributed to Nagarjuna, including the Dharmadhatustava (Hymn to the Dharmadhatu, which shows later influences), the Mahayanavimsika, the Salistambakarikas, the Bhavasamkranti, and the Dasabhumikavibhāsā. Furthermore, Ruegg writes that "three collections of stanzas on the virtues of intelligence and moral conduct ascribed to Nagarjuna are extant in Tibetan translation": Prajñasatakaprakarana, Nitisastra-Jantuposanabindu and Niti-sastra-Prajñadanda.
Attributions which are likely to be false Meanwhile, those texts that Lindtner considers questionable and likely inauthentic are: Aksarasataka, Akutobhaya (Mulamadhyamakavrtti), Aryabhattaraka-Manjusriparamarthastuti, Kayatrayastotra, Narakoddharastava, Niruttarastava, Vandanastava, Dharmasamgraha, Dharmadhatugarbhavivarana, Ekaslokasastra, Isvarakartrtvanirakrtih (A refutation of God/Isvara), Sattvaradhanastava, Upayahrdaya, Astadasasunyatasastra, Dharmadhatustava, Yogaratnamala. Meanwhile, Lindtner's list of outright wrong attributions is: Mahāprajñāpāramitopadeśa (Dà zhìdù lùn), Abudhabodhakaprakarana, Guhyasamajatantratika, Dvadasadvaraka, Prajñaparamitastotra, and Svabhavatrayapravesasiddhi. Notably, the Dà zhìdù lùn (Taisho 1509, "Commentary on the Great Prajñaparamita"), which has been influential in Chinese Buddhism, has been questioned as a genuine work of Nāgārjuna by various scholars including Lamotte. This work is also only attested in a Chinese translation by Kumārajīva and is unknown in the Tibetan and Indian traditions. Other works are extant only in Chinese; one of these is the Shih-erh-men-lun or 'Twelve-topic treatise' (*Dvadasanikaya or *Dvadasamukha-sastra), one of the three basic treatises of the Sanlun school (East Asian Madhyamaka). Several works considered important in esoteric Buddhism are attributed to Nāgārjuna and his disciples by traditional historians such as the 17th-century Tibetan historian Tāranātha. These historians try to account for the chronological difficulties with various theories, such as seeing later writings as mystical revelations. For a useful summary of this tradition, see Wedemeyer 2007. Lindtner sees the author of some of these tantric works as a tantric Nagarjuna who lived much later, sometimes called "Nagarjuna II".
Philosophy Sunyata Nāgārjuna's major thematic focus is the concept of śūnyatā (translated into English as "emptiness"), which brings together other key Buddhist doctrines, particularly anātman "not-self" and pratītyasamutpāda "dependent origination", to refute the metaphysics of some of his contemporaries. For Nāgārjuna, as for the Buddha in the early texts, it is not merely sentient beings that are "selfless" or non-substantial; all phenomena (dhammas) are without any svabhāva, literally "own-being", "self-nature", or "inherent existence", and thus without any underlying essence. They are empty of being independently existent; thus the heterodox theories of svabhāva circulating at the time were refuted on the basis of the doctrines of early Buddhism. This is so because all things always arise dependently: not by their own power, but by depending on conditions leading to their coming into existence, as opposed to being. By "real" Nāgārjuna means any entity which has a nature of its own (svabhāva), is not produced by causes (akrtaka), and is not dependent on anything else (paratra nirapeksha). Chapter 24 verse 14 of the Mūlamadhyamakakārikā provides one of Nāgārjuna's most famous quotations on emptiness and co-arising: As part of his analysis of the emptiness of phenomena in the Mūlamadhyamakakārikā, Nāgārjuna critiques svabhāva in the context of several different concepts. He discusses the problems of positing any sort of inherent essence to causation, movement, change and personal identity. Nāgārjuna makes use of the Indian logical tool of the tetralemma to attack any essentialist conceptions.
for nuclear weapons. Fermi and Szilard applied for a patent on reactors on 19 December 1944. Its issuance was delayed for 10 years because of wartime secrecy. "World's first nuclear power plant" is the claim made by signs at the site of the EBR-I, which is now a museum near Arco, Idaho. Originally called "Chicago Pile-4", the project was carried out under the direction of Walter Zinn for Argonne National Laboratory. This experimental LMFBR operated by the U.S. Atomic Energy Commission produced 0.8 kW in a test on 20 December 1951 and 100 kW (electrical) the following day, having a design output of 200 kW (electrical). Besides the military uses of nuclear reactors, there were political reasons to pursue civilian use of atomic energy. U.S. President Dwight Eisenhower made his famous Atoms for Peace speech to the UN General Assembly on 8 December 1953. This diplomacy led to the dissemination of reactor technology to U.S. institutions and worldwide. The first nuclear power plant built for civil purposes was the AM-1 Obninsk Nuclear Power Plant, launched on 27 June 1954 in the Soviet Union. It produced around 5 MW (electrical). It came after the F-1 reactor, the first reactor to go critical in Europe, which had also been built by the Soviet Union. After World War II, the U.S. military sought other uses for nuclear reactor technology. Research by the Army, under the Army Nuclear Power Program, led to the power stations for Camp Century, Greenland, and McMurdo Station, Antarctica. The Air Force Nuclear Bomber project resulted in the Molten-Salt Reactor Experiment. The U.S. Navy succeeded when they steamed the USS Nautilus (SSN-571) on nuclear power on 17 January 1955. The first commercial nuclear power station, Calder Hall in Sellafield, England, was opened in 1956 with an initial capacity of 50 MW (later 200 MW). The first portable nuclear reactor, the "Alco PM-2A", was used to generate electrical power (2 MW) for Camp Century from 1960 to 1963.
Reactor types Classifications By type of nuclear reaction All commercial power reactors are based on nuclear fission. They generally use uranium and its product plutonium as nuclear fuel, though a thorium fuel cycle is also possible. Fission reactors can be divided roughly into two classes, depending on the energy of the neutrons that sustain the fission chain reaction: Thermal-neutron reactors (the most common type of nuclear reactor) use slowed or thermal neutrons to keep up the fission of their fuel. Almost all current reactors are of this type. These contain neutron moderator materials that slow neutrons until their neutron temperature is thermalized, that is, until their kinetic energy approaches the average kinetic energy of the surrounding particles. Thermal neutrons have a far higher cross section (probability) of fissioning the fissile nuclei uranium-235, plutonium-239, and plutonium-241, and a relatively lower probability of neutron capture by uranium-238 (U-238) compared to the faster neutrons that originally result from fission, allowing use of low-enriched uranium or even natural uranium fuel. The moderator is often also the coolant, usually water under high pressure to increase the boiling point. These are surrounded by a reactor vessel, instrumentation to monitor and control the reactor, radiation shielding, and a containment building. Fast-neutron reactors use fast neutrons to cause fission in their fuel. They do not have a neutron moderator, and use less-moderating coolants. Maintaining a chain reaction requires the fuel to be more highly enriched in fissile material (about 20% or more) due to the relatively lower probability of fission versus capture by U-238. Fast reactors have the potential to produce less transuranic waste because all actinides are fissionable with fast neutrons, but they are more difficult to build and more expensive to operate. Overall, fast reactors are less common than thermal reactors in most applications. 
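The moderation process described above can be quantified with a standard reactor-physics estimate: the mean logarithmic energy decrement per elastic collision, ξ, determines roughly how many collisions a moderator nucleus needs to slow a fast fission neutron down to thermal energy. A minimal sketch in Python (the ξ formula is the textbook two-body elastic-scattering result; the 2 MeV and 0.025 eV endpoints are typical round values, not figures from this article):

```python
import math

def xi(A):
    """Mean logarithmic energy decrement per elastic collision
    for a target nucleus of mass number A (standard two-body result)."""
    if A == 1:
        return 1.0  # hydrogen: alpha = 0, xi = 1 exactly
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

def collisions_to_thermalize(A, e_fast=2.0e6, e_thermal=0.025):
    """Average number of elastic collisions needed to slow a fission
    neutron (~2 MeV) down to thermal energy (~0.025 eV)."""
    return math.log(e_fast / e_thermal) / xi(A)

for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12),
                ("uranium-238", 238)]:
    print(f"{name:22s} ~{collisions_to_thermalize(A):6.0f} collisions")
```

Hydrogen needs only about 18 collisions and graphite about 115, while uranium-238 itself would need thousands; this is why thermal designs pair the fuel with a light-nuclide moderator, and why light water (a mild neutron absorber) demands enriched fuel while heavy water and graphite can work with natural uranium.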
Some early power stations were fast reactors, as are some Russian naval propulsion units. Construction of prototypes is continuing (see fast breeder or generation IV reactors). In principle, fusion power could be produced by nuclear fusion of elements such as the deuterium isotope of hydrogen. While fusion has been a rich research topic since at least the 1940s, no self-sustaining fusion reactor for any purpose has ever been built. By moderator material Used by thermal reactors: Graphite-moderated reactors Water moderated reactors Heavy-water reactors (used in Canada, India, Argentina, China, Pakistan, Romania and South Korea). Light-water-moderated reactors (LWRs). Light-water reactors (the most common type of thermal reactor) use ordinary water to moderate and cool the reactors. Because the light hydrogen isotope is a slight neutron poison, these reactors need artificially enriched fuels. When at operating temperature, if the temperature of the water increases, its density drops, and fewer neutrons passing through it are slowed enough to trigger further reactions. That negative feedback stabilizes the reaction rate. Graphite and heavy-water reactors tend to be more thoroughly thermalized than light water reactors. Due to the extra thermalization and the absence of the light-hydrogen poisoning effect, these types can use natural uranium/unenriched fuel. Light-element-moderated reactors. Molten-salt reactors (MSRs) are moderated by light elements such as lithium or beryllium, which are constituents of the coolant/fuel matrix salts "LiF" and "BeF2"; "LiCl" and "BeCl2" and other salts containing light elements also have a moderating effect. Liquid metal cooled reactors, such as those whose coolant is a mixture of lead and bismuth, may use BeO as a moderator. Organically moderated reactors (OMR) use biphenyl and terphenyl as moderator and coolant. By coolant Water cooled reactor.
These constitute the great majority of operational nuclear reactors: as of 2014, 93% of the world's nuclear reactors are water cooled, providing about 95% of the world's total nuclear generation capacity. Pressurized water reactor (PWR) Pressurized water reactors constitute the large majority of all Western nuclear power plants. A primary characteristic of PWRs is a pressurizer, a specialized pressure vessel. Most commercial PWRs and naval reactors use pressurizers. During normal operation, a pressurizer is partially filled with water, and a steam bubble is maintained above it by heating the water with submerged heaters. The pressurizer is connected to the primary reactor pressure vessel (RPV), and the pressurizer "bubble" provides an expansion space for changes in water volume in the reactor. This arrangement also provides a means of pressure control for the reactor by increasing or decreasing the steam pressure in the pressurizer using the pressurizer heaters. Pressurized heavy water reactors are a subset of pressurized water reactors, sharing the use of a pressurized, isolated heat transport loop, but using heavy water as coolant and moderator for the greater neutron economy it offers. Boiling water reactor (BWR) BWRs are characterized by boiling water around the fuel rods in the lower portion of a primary reactor pressure vessel. A boiling water reactor uses uranium enriched in 235U, in the form of uranium dioxide, as its fuel. The fuel is assembled into rods housed in a steel vessel that is submerged in water. The nuclear fission causes the water to boil, generating steam. This steam flows through pipes into turbines. The turbines are driven by the steam, and this process generates electricity. During normal operation, pressure is controlled by the amount of steam flowing from the reactor pressure vessel to the turbine.
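The stabilizing negative temperature feedback described above for light-water designs (hotter coolant means lower density, less moderation, and lower reactivity) can be illustrated with a toy point model. This is a sketch only: the time constant, feedback coefficient, and heat-transfer numbers below are illustrative placeholders, not data for any real plant.

```python
def simulate(rho_inserted=0.002, alpha_t=-1e-4, dt=0.05, steps=40_000):
    """Euler-integrate a crude one-point reactor model in which
    reactivity falls linearly as coolant temperature rises (alpha_t < 0)."""
    power, temp = 1.0, 300.0     # relative power, coolant temperature (K)
    tau = 0.1                    # effective power-response time constant (s)
    for _ in range(steps):
        rho = rho_inserted + alpha_t * (temp - 300.0)      # net reactivity
        power += power * (rho / tau) * dt                  # power follows reactivity
        temp += (2.0 * power - 0.1 * (temp - 280.0)) * dt  # heating minus heat removal
    return power, temp

power, temp = simulate()
# Feedback cancels the inserted reactivity once the coolant has warmed by
# rho_inserted / |alpha_t| = 20 K, so temp settles near 320 K and power near 2.0.
```

Flipping the sign of alpha_t makes the same loop diverge, which is the hallmark of a positive feedback coefficient.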
Supercritical water reactor (SCWR) SCWRs are a Generation IV reactor concept where the reactor is operated at supercritical pressures and water is heated to a supercritical fluid, which never undergoes a transition to steam yet behaves like saturated steam, to power a steam generator. Reduced moderation water reactors (RMWR), which use more highly enriched fuel with the fuel elements set closer together to allow a faster neutron spectrum, sometimes called an epithermal neutron spectrum. Pool-type reactors, i.e. unpressurized, water-cooled open-pool reactors; these are not to be confused with pool-type LMFBRs, which are sodium cooled. Some reactors have been cooled by heavy water which also served as a moderator. Examples include: Early CANDU reactors (later ones use heavy water moderator but light water coolant) DIDO class research reactors Liquid metal cooled reactor. Since water is a moderator, it cannot be used as a coolant in a fast reactor. Liquid metal coolants have included sodium, NaK, lead, lead-bismuth eutectic, and in early reactors, mercury. Sodium-cooled fast reactor Lead-cooled fast reactor Gas cooled reactors are cooled by a circulating gas. In commercial nuclear power plants carbon dioxide has usually been used, for example in current British AGR nuclear power plants and formerly in a number of first generation British, French, Italian, and Japanese plants. Nitrogen and helium have also been used, helium being considered particularly suitable for high temperature designs. Utilization of the heat varies, depending on the reactor. Commercial nuclear power plants run the gas through a heat exchanger to make steam for a steam turbine. Some experimental designs run hot enough that the gas can directly power a gas turbine. Molten-salt reactors (MSRs) are cooled by circulating a molten salt, typically a eutectic mixture of fluoride salts, such as FLiBe. In a typical MSR, the coolant is also used as a matrix in which the fissile material is dissolved.
Other eutectic salt combinations used include "ZrF4" with "NaF" and "LiCl" with "BeCl2". Organic nuclear reactors use organic fluids such as biphenyl and terphenyl as coolant rather than water. By generation Generation I reactor (early prototypes such as Shippingport Atomic Power Station, research reactors, non-commercial power producing reactors) Generation II reactor (most current nuclear power plants, 1965–1996) Generation III reactor (evolutionary improvements of existing designs, 1996–2016) Generation III+ reactor (evolutionary development of Gen III reactors, offering improvements in safety over Gen III reactor designs, 2017–2021) Generation IV reactor (technologies still under development; unknown start date, possibly 2030) In 2003, the French Commissariat à l'Énergie Atomique (CEA) was the first to refer to "Gen II" types in Nucleonics Week. The first mention of "Gen III" was in 2000, in conjunction with the launch of the Generation IV International Forum (GIF) plans. "Gen IV" was named in 2000, by the United States Department of Energy (DOE), for developing new plant types. By phase of fuel Solid fueled Fluid fueled Aqueous homogeneous reactor Molten-salt reactor Gas fueled (theoretical) By shape of the core Cubical Cylindrical Octagonal Spherical Slab Annulus By use Electricity Nuclear power plants including small modular reactors Propulsion, see nuclear propulsion Nuclear marine propulsion Various proposed forms of rocket propulsion Other uses of heat Desalination Heat for domestic and industrial heating Hydrogen production for use in a hydrogen economy Production reactors for transmutation of elements Breeder reactors are capable of producing more fissile material than they consume during the fission chain reaction (by converting fertile U-238 to Pu-239, or Th-232 to U-233).
Thus, a uranium breeder reactor, once running, can be refueled with natural or even depleted uranium, and a thorium breeder reactor can be refueled with thorium; however, an initial stock of fissile material is required. Creating various radioactive isotopes, such as americium for use in smoke detectors, and cobalt-60, molybdenum-99 and others, used for imaging and medical treatment. Production of materials for nuclear weapons such as weapons-grade plutonium Providing a source of neutron radiation (for example with the pulsed Godiva device) and positron radiation (e.g. for neutron activation analysis and potassium-argon dating) Research reactor: Typically reactors used for research and training, materials testing, or the production of radioisotopes for medicine and industry. These are much smaller than power reactors or those propelling ships, and many are on university campuses. There are about 280 such reactors operating in 56 countries. Some operate with high-enriched uranium fuel, and international efforts are underway to substitute low-enriched fuel. Current technologies Pressurized water reactors (PWR) [moderator: high-pressure water; coolant: high-pressure water] These reactors use a pressure vessel to contain the nuclear fuel, control rods, moderator, and coolant. The hot radioactive water that leaves the pressure vessel is looped through a steam generator, which in turn heats a secondary (nonradioactive) loop of water to steam that can run turbines. They represent the majority (around 80%) of current reactors. This is a thermal neutron reactor design, the newest of which are the Russian VVER-1200, Japanese Advanced Pressurized Water Reactor, American AP1000, Chinese Hualong Pressurized Reactor and the Franco-German European Pressurized Reactor. All the United States Naval reactors are of this type. Boiling water reactors (BWR) [moderator: low-pressure water; coolant: low-pressure water] A BWR is like a PWR without the steam generator.
The lower pressure of its cooling water allows it to boil inside the pressure vessel, producing the steam that runs the turbines. Unlike a PWR, there are no separate primary and secondary loops. The thermal efficiency of these reactors can be higher, and they can be simpler and potentially more stable and safe. This is a thermal-neutron reactor design, the newest of which are the Advanced Boiling Water Reactor and the Economic Simplified Boiling Water Reactor. Pressurized Heavy Water Reactor (PHWR) [moderator: high-pressure heavy water; coolant: high-pressure heavy water] A Canadian design (known as CANDU), very similar to PWRs but using heavy water. While heavy water is significantly more expensive than ordinary water, it has greater neutron economy (creates a higher number of thermal neutrons), allowing the reactor to operate without fuel enrichment facilities. Instead of using a single large pressure vessel as in a PWR, the fuel is contained in hundreds of pressure tubes. These reactors are fueled with natural uranium and are thermal-neutron reactor designs. PHWRs can be refueled while at full power (online refueling), which makes them very efficient in their use of uranium and allows for precise flux control in the core. CANDU PHWRs have been built in Canada, Argentina, China, India, Pakistan, Romania, and South Korea. India also operates a number of PHWRs, often termed 'CANDU derivatives', built after the Government of Canada halted nuclear dealings with India following the 1974 Smiling Buddha nuclear weapon test. Reaktor Bolshoy Moschnosti Kanalniy (High Power Channel Reactor) (RBMK) [moderator: graphite; coolant: high-pressure water] A Soviet design, RBMKs are in some respects similar to CANDU in that they are refuelable during power operation and employ a pressure tube design instead of a PWR-style pressure vessel. However, unlike CANDU they are very unstable and large, making containment buildings for them expensive.
A series of critical safety flaws have also been identified with the RBMK design, though some of these were corrected following the Chernobyl disaster. Their main attraction is their use of light water and unenriched uranium. As of 2022, 8 remain open, mostly due to safety improvements and help from international safety agencies such as the DOE. Despite these safety improvements, RBMK reactors are still considered one of the most dangerous reactor designs in use. RBMK reactors were deployed only in the former Soviet Union. Gas-cooled reactor (GCR) and advanced gas-cooled reactor (AGR) [moderator: graphite; coolant: carbon dioxide] These designs have a high thermal efficiency compared with PWRs due to higher operating temperatures. There are a number of operating reactors of this design, mostly in the United Kingdom, where the concept was developed. Older designs (i.e. Magnox stations) are either shut down or will be in the near future. However, the AGRs have an anticipated life of a further 10 to 20 years. This is a thermal-neutron reactor design. Decommissioning costs can be high due to the large volume of the reactor core. Liquid metal fast-breeder reactor (LMFBR) [moderator: none; coolant: liquid metal] This totally unmoderated reactor design produces more fuel than it consumes. They are said to "breed" fuel because they produce fissionable fuel during operation through neutron capture. These reactors can function much like a PWR in terms of efficiency, and do not require much high-pressure containment, as the liquid metal does not need to be kept at high pressure, even at very high temperatures. These reactors are fast neutron, not thermal neutron designs. These reactors come in two types: Lead-cooled Using lead as the liquid metal provides excellent radiation shielding, and allows for operation at very high temperatures. Also, lead is (mostly) transparent to neutrons, so fewer neutrons are lost in the coolant, and the coolant does not become radioactive.
Unlike sodium, lead is mostly inert, so there is less risk of explosion or accident, but such large quantities of lead may be problematic from toxicology and disposal points of view. Often a reactor of this type would use a lead-bismuth eutectic mixture. In this case, the bismuth would present some minor radiation problems, as it is not quite as transparent to neutrons, and can be transmuted to a radioactive isotope more readily than lead. The Russian Alfa class submarine uses a lead-bismuth-cooled fast reactor as its main power plant. Sodium-cooled Most LMFBRs are of this type. The TOPAZ, BN-350 and BN-600 in the USSR; Superphénix in France; and Fermi-I in the United States were reactors of this type. The sodium is relatively easy to obtain and work with, and it also helps prevent corrosion of the various reactor parts immersed in it. However, sodium explodes violently when exposed to water, so care must be taken; even so, such explosions would not be more violent than (for example) a leak of superheated fluid from a pressurized-water reactor. The Monju reactor in Japan suffered a sodium leak in 1995 and could not be restarted until May 2010. The EBR-I, the first reactor to have a core meltdown (in 1955), was also a sodium-cooled reactor. Pebble-bed reactors (PBR) [moderator: graphite; coolant: helium] These use fuel molded into ceramic balls, and then circulate gas through the balls. The result is an efficient, low-maintenance, very safe reactor with inexpensive, standardized fuel. The prototype was the AVR and the HTR-10 is operating in China, where the HTR-PM is being developed. The HTR-PM is expected to be the first generation IV reactor to enter operation. Molten-salt reactors (MSR) [moderator: graphite, or none for fast spectrum MSRs; coolant: molten salt mixture] These dissolve the fuels in fluoride or chloride salts, or use such salts for coolant.
MSRs potentially have many safety features, including the absence of high pressures or highly flammable components in the core. They were initially designed for aircraft propulsion due to their high efficiency and high power density. One prototype, the Molten-Salt Reactor Experiment, was built to confirm the feasibility of the liquid fluoride thorium reactor, a thermal spectrum reactor which would breed fissile uranium-233 fuel from thorium. Aqueous homogeneous reactor (AHR) [moderator: high-pressure light or heavy water; coolant: high-pressure light or heavy water] These reactors use as fuel soluble nuclear salts (usually uranium sulfate or uranium nitrate) dissolved in water and mixed with the coolant and the moderator. As of April 2006, only five AHRs were in operation. Future and developing technologies Advanced reactors More than a dozen advanced reactor designs are in various stages of development. Some are evolutionary from the PWR, BWR and PHWR designs above; others are more radical departures. The former include the advanced boiling water reactor (ABWR), two of which are now operating with others under construction, and the planned passively safe Economic Simplified Boiling Water Reactor (ESBWR) and AP1000 units (see Nuclear Power 2010 Program). The Integral fast reactor (IFR) was built, tested and evaluated during the 1980s and then retired in the 1990s due to the Clinton administration's nuclear non-proliferation policies. Recycling spent fuel is the core of its design and it therefore produces only a fraction of the waste of current reactors. The pebble-bed reactor, a high-temperature gas-cooled reactor (HTGCR), is designed so that high temperatures reduce power output by Doppler broadening of the fuel's neutron cross-section. It uses ceramic fuels so its safe operating temperatures exceed the power-reduction temperature range. Most designs are cooled by inert helium.
Helium is not subject to steam explosions, resists neutron absorption leading to radioactivity, and does not dissolve contaminants that can become radioactive. Typical designs have more layers (up to 7) of passive containment than light water reactors (usually 3). A unique feature that may aid safety is that the fuel balls actually form the core's mechanism, and are replaced one by one as they age. The design of the fuel makes fuel reprocessing expensive. The Small, sealed, transportable, autonomous reactor (SSTAR) is being primarily researched and developed in the US, intended as a fast breeder reactor that is passively safe and could be remotely shut down in case the suspicion arises that it is being tampered with. The Clean and Environmentally Safe Advanced Reactor (CAESAR) is a nuclear reactor concept that uses steam as a moderator – this design is still in development. The Reduced moderation water reactor builds upon the advanced boiling water reactor (ABWR) that is presently in use; it is not a complete fast reactor, instead using mostly epithermal neutrons, which are between thermal and fast neutrons in speed. The hydrogen-moderated self-regulating nuclear power module (HPM) is a reactor design emanating from the Los Alamos National Laboratory that uses uranium hydride as fuel. Subcritical reactors are designed to be safer and more stable, but pose a number of engineering and economic difficulties. One example is the Energy amplifier. Thorium-based reactors. It is possible to convert thorium-232 into U-233 in reactors specially designed for the purpose. In this way, thorium, which is four times more abundant than uranium, can be used to breed U-233 nuclear fuel. U-233 is also believed to have favourable nuclear properties as compared to traditionally used U-235, including better neutron economy and lower production of long-lived transuranic waste.
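The Th-232 to U-233 conversion mentioned above is not instantaneous: a neutron capture produces Th-233, which beta decays (half-life about 22 minutes) to Pa-233, which in turn beta decays (half-life about 27 days) to U-233. A minimal sketch of this two-step chain, using approximate literature half-lives and the standard Bateman solution:

```python
import math

# Approximate half-lives for the Th-232 -> U-233 breeding chain
T_HALF_TH233 = 21.8 / (60 * 24)   # Th-233 beta decay, ~21.8 minutes, in days
T_HALF_PA233 = 27.0               # Pa-233 beta decay, ~27 days

def u233_fraction(t_days):
    """Fraction of freshly produced Th-233 that has become U-233 after t_days.
    Bateman solution for the sequential chain Th-233 -> Pa-233 -> U-233."""
    l1 = math.log(2) / T_HALF_TH233
    l2 = math.log(2) / T_HALF_PA233
    return 1.0 + (l1 * math.exp(-l2 * t_days) - l2 * math.exp(-l1 * t_days)) / (l2 - l1)

for t in (1, 30, 90, 180):
    print(f"after {t:3d} days: {u233_fraction(t):.1%} converted to U-233")
```

The ~27-day protactinium step dominates: roughly half the bred material is still Pa-233 a month after capture, which is one reason thorium fuel cycles must manage protactinium carefully.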
Advanced heavy-water reactor (AHWR) — A proposed heavy water moderated nuclear power reactor that will be the next generation design of the PHWR type. Under development at the Bhabha Atomic Research Centre (BARC), India. KAMINI – A unique reactor using the uranium-233 isotope for fuel. Built in India by BARC and the Indira Gandhi Centre for Atomic Research (IGCAR). India is also planning to build fast breeder reactors using the thorium–uranium-233 fuel cycle. The FBTR (Fast Breeder Test Reactor) in operation at Kalpakkam (India) uses plutonium as a fuel and liquid sodium as a coolant. China, which has control of the Cerro Impacto deposit, has a reactor and hopes to replace coal energy with nuclear energy. Rolls-Royce aims to sell nuclear reactors for the production of synfuel for aircraft. Generation IV reactors Generation IV reactors are a set of theoretical nuclear reactor designs currently being researched. These designs are generally not expected to be available for commercial construction before 2030. Current reactors in operation around the world are generally considered second- or third-generation systems, with the first-generation systems having been retired some time ago. Research into these reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals. The primary goals are to improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and decrease the cost to build and run such plants. Gas-cooled fast reactor Lead-cooled fast reactor Molten-salt reactor Sodium-cooled fast reactor Supercritical water reactor Very-high-temperature reactor Generation V+ reactors Generation V reactors are designs which are theoretically possible, but which are not being actively considered or researched at present.
Though some generation V reactors could potentially be built with current or near-term technology, they trigger little interest for reasons of economics, practicality, or safety. Liquid-core reactor. A closed loop liquid-core nuclear reactor, where the fissile material is molten uranium or uranium solution cooled by a working gas pumped in through holes in the base of the containment vessel. Gas-core reactor. A closed loop version of the nuclear lightbulb rocket, where the fissile material is gaseous uranium hexafluoride contained in a fused silica vessel. A working gas (such as hydrogen) would flow around this vessel and absorb the UV light produced by the reaction. This reactor design could also function as a rocket engine, as featured in Harry Harrison's 1976 science-fiction novel Skyfall. In theory, using UF6 as a working fuel directly (rather than as a stage to one, as is done now) would mean lower processing costs, and very small reactors. In practice, running a reactor at such high power densities would probably produce unmanageable neutron flux, weakening most reactor materials; as the flux would be similar to that expected in fusion reactors, it would require materials similar to those selected by the International Fusion Materials Irradiation Facility. Gas core EM reactor. As in the gas core reactor, but with photovoltaic arrays converting the UV light directly to electricity. This approach is similar to the experimentally proved photoelectric effect that would convert the X-rays generated from aneutronic fusion into electricity, by passing the high energy photons through an array of conducting foils to transfer some of their energy to electrons; the energy of the photon is then captured electrostatically, much as in a capacitor. Since X-rays can go through far greater
material thickness than electrons, many hundreds or thousands of layers are needed to absorb the X-rays. Fission fragment reactor. A fission fragment reactor is a nuclear reactor that generates electricity by decelerating an ion beam of fission byproducts instead of using nuclear reactions to generate heat. By doing so, it bypasses the Carnot cycle and can achieve efficiencies of up to 90% instead of the 40–45% attainable by efficient turbine-driven thermal reactors. The fission fragment ion beam would be passed through a magnetohydrodynamic generator to produce electricity. Hybrid nuclear fusion. Would use the neutrons emitted by fusion to fission a blanket of fertile material, like U-238 or Th-232, and transmute other reactors' spent nuclear fuel/nuclear waste into relatively more benign isotopes.
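The efficiency gap described above can be made concrete with a back-of-the-envelope comparison between a Carnot-limited heat engine and direct conversion. The steam-cycle temperatures below are assumed round numbers, not data for any specific plant:

```python
def carnot_limit(t_hot_k, t_cold_k):
    """Ideal Carnot efficiency of a heat engine between two temperatures (K)."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative steam-cycle temperatures (assumed values): ~300 C hot side,
# ~27 C heat sink. Real plants achieve well below this ideal ceiling.
eta_carnot = carnot_limit(t_hot_k=573.0, t_cold_k=300.0)
eta_direct = 0.90  # direct deceleration of fission fragments (figure from the text)

thermal_power_mw = 3000.0  # assumed thermal output of a large core
print(f"Carnot ceiling:    {eta_carnot:.1%} -> {eta_carnot * thermal_power_mw:.0f} MWe")
print(f"Direct conversion: {eta_direct:.1%} -> {eta_direct * thermal_power_mw:.0f} MWe")
```

Direct conversion sidesteps the hot-to-cold temperature ratio entirely, which is why the quoted ~90% figure can exceed any realistic heat-engine limit.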
Fusion reactors Controlled nuclear fusion could in principle be used in fusion power plants to produce power without the complexities of handling actinides, but significant scientific and technical obstacles remain. Several fusion reactors have been built, but none has yet released more energy than the amount of energy used in the process. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to harness fusion power. Nuclear fuel cycle Thermal reactors generally depend on refined and enriched uranium. Some nuclear reactors can operate with a mixture of plutonium and uranium (see MOX). The process by which uranium ore is mined, processed, enriched, used, possibly reprocessed and disposed of is known as the nuclear fuel cycle. Under 1% of the uranium found in nature is the easily fissionable U-235 isotope, and as a result most reactor designs require enriched fuel. Enrichment involves increasing the percentage of U-235 and is usually done by means of gaseous diffusion or gas centrifuge. The enriched result is then converted into uranium dioxide powder, which is pressed and fired into pellet form. These pellets are stacked into tubes which are then sealed and called fuel rods. Many of these fuel rods are used in each nuclear reactor. Most BWR and PWR commercial reactors use uranium enriched to about 4% U-235, and some commercial reactors with a high neutron economy do not require the fuel to be enriched at all (that is, they can use natural uranium). According to the International Atomic Energy Agency there are at least 100 research reactors in the world fueled by highly enriched (weapons-grade/90% enrichment) uranium. Theft risk of this fuel (potentially used in the production of a nuclear weapon) has led to campaigns advocating conversion of this type of reactor to low-enrichment uranium (which poses less threat of proliferation).
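The enrichment step described above conserves U-235 mass between the feed, the enriched product, and the depleted tails, which gives a quick way to estimate feed requirements. A minimal sketch; the 0.25% tails assay is an assumed typical value, not a figure from the text:

```python
def feed_required(product_kg, x_product, x_feed=0.00711, x_tails=0.0025):
    """Natural-uranium feed needed to produce `product_kg` of enriched uranium.

    From the two-equation mass balance:
        F = P + W                 (total mass)
        F*x_feed = P*x_product + W*x_tails   (U-235 mass)
    Defaults: natural uranium at 0.711% U-235; tails at 0.25% (assumed)."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# One tonne of ~4%-enriched LWR fuel (enrichment level taken from the text)
print(f"{feed_required(1000.0, 0.04):.0f} kg of natural uranium feed")
```

The roughly eight-to-one feed-to-product ratio is why enrichment dominates the front end of the fuel cycle; separative work (SWU) would be a separate calculation on top of this mass balance.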
Fissile U-235 and non-fissile but fissionable and fertile U-238 are both used in the fission process. U-235 is fissionable by thermal (i.e. slow-moving) neutrons. A thermal neutron is one which is moving about the same speed as the atoms around it. Since all atoms vibrate proportionally to their absolute temperature, a thermal neutron has the best opportunity to fission U-235 when it is moving at this same vibrational speed. On the other hand, U-238 is more likely to capture a neutron when the neutron is moving very fast, becoming U-239. The U-239 atom will soon decay (via neptunium-239) into plutonium-239, which is another fuel. Pu-239 is a viable fuel and must be accounted for even when a highly enriched uranium fuel is used. Plutonium fissions will dominate the U-235 fissions in some reactors, especially after the initial loading of U-235 is spent. Plutonium is fissionable with both fast and thermal neutrons, which makes it ideal for either nuclear reactors or nuclear bombs. Most reactor designs in existence are thermal reactors and typically use water as a neutron moderator (a moderator slows neutrons down to thermal speeds) and as a coolant. But in a fast breeder reactor, some other kind of coolant is used which will not moderate or slow the neutrons down much. This enables fast neutrons to dominate, which can effectively be used to constantly replenish the fuel supply. By merely placing cheap unenriched uranium into such a core, the non-fissionable U-238 will be turned into Pu-239, "breeding" fuel. In the thorium fuel cycle, thorium-232 absorbs a neutron in either a fast or thermal reactor. The resulting thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material.
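The statement that a thermal neutron moves at about the same speed as the surrounding atoms can be quantified with the most probable speed of a Maxwell-Boltzmann distribution, v = sqrt(2kT/m). At room temperature this reproduces the conventional 2200 m/s reference speed used for thermal cross-sections:

```python
import math

K_B = 1.380649e-23             # Boltzmann constant, J/K
M_NEUTRON = 1.67492749804e-27  # neutron mass, kg

def most_probable_speed(temp_k):
    """Most probable Maxwell-Boltzmann speed for a neutron: v = sqrt(2*k*T/m)."""
    return math.sqrt(2.0 * K_B * temp_k / M_NEUTRON)

# At 293.6 K (about 20 C) this gives roughly 2200 m/s, the standard
# reference speed for tabulated thermal neutron cross-sections.
print(f"{most_probable_speed(293.6):.0f} m/s")
```

The corresponding kinetic energy, kT at this temperature, is about 0.025 eV, compared with the ~2 MeV average energy of a neutron fresh from fission; the moderator's job is to bridge that eight-order-of-magnitude gap in a few dozen collisions.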
Fueling of nuclear reactors The amount of energy in the reservoir of nuclear fuel is frequently expressed in terms of "full-power days," which is the number of 24-hour periods (days) a reactor is scheduled for operation at full power output for the generation of heat energy. The number of full-power days in a reactor's operating cycle (between refueling outage times) is related to the amount of fissile uranium-235 (U-235) contained in the fuel assemblies at the beginning of the cycle. A higher percentage of U-235 in the core at the beginning of a cycle will permit the reactor to be run for a greater number of full-power days. At the end of the operating cycle, the fuel in some of the assemblies is "spent", having spent 4 to 6 years in the reactor producing power. This spent fuel is discharged and replaced with new (fresh) fuel assemblies. Though considered "spent," these fuel assemblies contain a large quantity of fuel. In practice it is economics that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the reactor is unable to maintain 100% full output power, and therefore the utility's income falls as plant output falls. Most nuclear plants operate at a very low profit margin due to operating overhead, mainly regulatory costs, so operating below 100% power is not economically viable for very long. The fraction of the reactor's fuel core replaced during refueling is typically one-third, but depends on how long the plant operates between refueling. Plants typically operate on 18-month or 24-month refueling cycles. This means that one refueling, replacing only one-third of the fuel, can keep a nuclear reactor at full power for nearly 2 years. The disposition and storage of this spent fuel is one of the most challenging aspects of the operation of a commercial nuclear power plant. This nuclear waste is highly radioactive and its toxicity presents a danger for thousands of years.
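The full-power-day bookkeeping above translates directly into an energy-per-fuel figure (burnup, in megawatt-days thermal per tonne of initial heavy metal): thermal power times full-power days, divided by the heavy-metal mass. A sketch with assumed round numbers for a large PWR:

```python
def burnup_mwd_per_t(thermal_power_mw, full_power_days, heavy_metal_tonnes):
    """Burnup in megawatt-days (thermal) per tonne of initial heavy metal."""
    return thermal_power_mw * full_power_days / heavy_metal_tonnes

# Illustrative large PWR: ~3000 MWth core, ~100 t of uranium, with each fuel
# batch resident for about 1500 effective full-power days -- assumed figures,
# in the typical range for modern light-water reactor fuel.
print(f"{burnup_mwd_per_t(3000.0, 1500.0, 100.0):.0f} MWd/tHM")
```

The resulting tens of thousands of MWd per tonne is the kind of discharge burnup that makes a fuel batch economically "spent" even though most of its heavy metal remains.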
After being discharged from the reactor, spent nuclear fuel is transferred to the on-site spent fuel pool. The spent fuel pool is a large pool of water that provides cooling and shielding of the spent nuclear fuel. Once the decay heat has subsided somewhat (after approximately 5 years), the fuel can be transferred from the fuel pool to dry shielded casks that can be safely stored for thousands of years. After loading into dry shielded casks, the casks are stored on-site in a specially guarded facility in impervious concrete bunkers. On-site fuel storage facilities are designed to withstand the impact of commercial airliners, with little to no damage to the spent fuel. An average on-site fuel storage facility can hold 30 years of spent fuel in a space smaller than a football field. Not all reactors need to be shut down for refueling; for example, pebble bed reactors, RBMK reactors, molten-salt reactors, Magnox, AGR and CANDU reactors allow fuel to be shifted through the reactor while it is running. In a CANDU reactor, this also allows individual fuel elements to be placed at positions within the reactor core best suited to the amount of U-235 in the element. The amount of energy extracted from nuclear fuel is called its burnup, which is expressed in terms of the heat energy produced per initial unit of fuel weight. Burnup is commonly expressed as megawatt days thermal per metric ton of initial heavy metal. Nuclear safety Nuclear safety covers the actions taken to prevent nuclear and radiation accidents and incidents or to limit their consequences. The nuclear power industry has improved the safety and performance of reactors, and has proposed new, safer (but generally untested) reactor designs, but there is no guarantee that the reactors will be designed, built and operated correctly.
Mistakes do occur, and the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake, despite multiple warnings by the NRG and the Japanese nuclear safety administration. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from MIT has estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period. Nuclear accidents Serious, though rare, nuclear and radiation accidents have occurred. These include the SL-1 accident (1961), the Three Mile Island accident (1979), the Chernobyl disaster (1986), and the Fukushima Daiichi nuclear disaster (2011). Nuclear-powered submarine mishaps include the K-19 reactor accident (1961), the K-27 reactor accident (1968), and the K-431 reactor accident (1985). Nuclear reactors have been launched into Earth orbit at least 34 times. A number of incidents connected with the unmanned nuclear-reactor-powered Soviet RORSAT radar satellite program resulted in spent nuclear fuel reentering the Earth's atmosphere from orbit. Natural nuclear reactors Almost two billion years ago a series of self-sustaining nuclear fission "reactors" self-assembled in the area now known as Oklo in Gabon, West Africa. The conditions at that place and time allowed natural nuclear fission to occur under circumstances similar to the conditions in a constructed nuclear reactor. Fifteen fossil natural fission reactors have so far been found in three separate ore deposits at the Oklo uranium mine in Gabon. First discovered in 1972 by French physicist Francis Perrin, they are collectively known as the Oklo Fossil Reactors.
Self-sustaining nuclear fission reactions took place in these reactors approximately 1.5 billion years ago, and ran for a few hundred thousand years, averaging 100 kW of power output during that time. The concept of a natural nuclear reactor was theorized as early as 1956 by Paul Kuroda at the University of Arkansas. Such reactors can no longer form on Earth in its present geologic period. Radioactive decay of formerly more abundant uranium-235 over the time span of hundreds of millions of years has reduced the proportion of this naturally occurring fissile isotope to below the amount required to sustain a chain reaction with only plain water as a moderator. The natural nuclear reactors formed when a uranium-rich mineral deposit became inundated with groundwater that acted as a neutron moderator, and a strong chain reaction took place. The water moderator would boil away as the reaction increased, slowing it back down again and preventing a meltdown. The fission reaction was sustained for hundreds of thousands of years, cycling on the order of hours to a few days. These natural reactors are extensively studied by scientists interested in geologic radioactive waste disposal. They offer a case study of how radioactive isotopes migrate through the Earth's crust. This is a significant area of controversy as opponents of geologic waste disposal fear that isotopes from stored waste could end up in water supplies or be carried into the environment. Emissions Nuclear reactors produce tritium as part of normal operations, which is eventually released into the environment in trace quantities. As an isotope of hydrogen, tritium (T) frequently binds to oxygen and forms T2O. This molecule is chemically identical to H2O and so is both colorless and odorless; however, the additional neutrons in the hydrogen nuclei cause the tritium to undergo beta decay with a half-life of 12.3 years. Despite being measurable, the tritium released by nuclear power plants is minimal.
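The 12.3-year half-life mentioned above means a tritium release is self-limiting on a scale of decades. A minimal decay sketch:

```python
import math

T_HALF_TRITIUM = 12.3  # years (half-life from the text)

def remaining_fraction(t_years):
    """Fraction of an initial tritium inventory left after t_years of beta decay."""
    return math.exp(-math.log(2) * t_years / T_HALF_TRITIUM)

for t in (12.3, 25.0, 50.0, 100.0):
    print(f"after {t:5.1f} years: {remaining_fraction(t):.3%} remains")
```

After one half-life exactly half remains; after a century, less than half a percent of the original inventory is left.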
The United States NRC estimates that a person drinking water for one year out of a well contaminated by what they would consider to be a significant tritiated water spill would receive a radiation dose of 0.3 millirem. For comparison, this is an order of magnitude less than the 4 millirem a person receives on a round trip flight from Washington, D.C. to Los Angeles, a consequence of less atmospheric protection against highly energetic cosmic rays at high altitudes. The amounts of strontium-90 released from nuclear power plants under normal operations are so low as to be undetectable above natural background radiation. Detectable strontium-90 in ground water and the general environment can be traced to weapons testing that occurred during the mid-20th century (accounting for 99% of the strontium-90 in the environment) and the Chernobyl accident (accounting for the remaining 1%).
Radioisotope thermoelectric generators use radioactive decay to generate power. These power generators are relatively small scale (a few kW), and they are mostly used to power space missions and experiments for long periods where solar power is not available in sufficient quantity, such as in the Voyager 2 space probe. A few space vehicles have been launched using nuclear reactors: 34 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A. Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. Safety Nuclear power plants have three unique characteristics that affect their safety, as compared to other power plants. Firstly, intensely radioactive materials are present in a nuclear reactor. Their release to the environment could be hazardous. Secondly, the fission products, which make up most of the intensely radioactive substances in the reactor, continue to generate a significant amount of decay heat even after the fission chain reaction has stopped. If the heat cannot be removed from the reactor, the fuel rods may overheat and release radioactive materials. Thirdly, a criticality accident (a rapid increase of the reactor power) is possible in certain reactor designs if the chain reaction cannot be controlled. These three characteristics have to be taken into account when designing nuclear reactors. All modern reactors are designed so that an uncontrolled increase of the reactor power is prevented by natural feedback mechanisms, a concept known as negative void coefficient of reactivity. If the temperature or the amount of steam in the reactor increases, the fission rate inherently decreases. The chain reaction can also be manually stopped by inserting control rods into the reactor core. Emergency core cooling systems (ECCS) can remove the decay heat from the reactor if normal cooling systems fail.
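The natural feedback mechanism described above can be illustrated with a toy point model: a reactivity insertion raises power, the core heats up, and a negative temperature coefficient cancels the inserted reactivity, so power settles at a new finite level instead of diverging. All coefficients here are invented illustrative values, not data for any real reactor:

```python
def simulate(rho_step=0.1, alpha=-0.5, dt=0.01, steps=5000):
    """Toy point model of negative temperature feedback.

    A reactivity step `rho_step` is inserted; power rises, the core heats,
    and the negative temperature coefficient `alpha` cancels the insertion.
    All quantities are normalized and illustrative only."""
    power, temp = 1.0, 0.0                   # normalized power; temperature rise
    for _ in range(steps):
        rho = rho_step + alpha * temp        # net reactivity including feedback
        power += power * rho * dt            # power grows while rho > 0
        temp += ((power - 1.0) - temp) * dt  # heating minus heat removal
    return power, temp

p, t = simulate()
# Power settles at a finite level; feedback has driven net reactivity to ~0,
# since alpha * t cancels rho_step at equilibrium.
print(f"power ~ {p:.2f}, temperature rise ~ {t:.2f}")
```

With a positive temperature coefficient the same model would diverge, which is the qualitative difference exploited in reactor safety analysis.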
If the ECCS fails, multiple physical barriers limit the release of radioactive materials to the environment even in the case of an accident. The last physical barrier is the large containment building. With a death rate of 0.07 per TWh, nuclear power is, on its historical track record, the safest energy source per unit of energy generated. Energy produced by coal, petroleum, natural gas and hydropower has caused more deaths per unit of energy generated due to air pollution and energy accidents. This is found when comparing the immediate deaths from other energy sources to both the immediate and the latent, or predicted, indirect cancer deaths from nuclear energy accidents. When the direct and indirect fatalities (including fatalities resulting from mining and air pollution) from nuclear power and fossil fuels are compared, the use of nuclear power has been calculated to have prevented about 1.8 million deaths between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels. Following the 2011 Fukushima nuclear disaster, it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life. Serious impacts of nuclear accidents are often not directly attributable to radiation exposure, but rather to social and psychological effects. Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients. Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and suicide. A comprehensive 2005 study on the aftermath of the Chernobyl disaster concluded that the mental health impact is the largest public health problem caused by the accident. Frank N. 
von Hippel, an American scientist, commented that a disproportionate fear of ionizing radiation (radiophobia) could have long-term psychological effects on the population of contaminated areas following the Fukushima disaster. In January 2015, the number of Fukushima evacuees was around 119,000, compared with a peak of around 164,000 in June 2012. Accidents and attacks Accidents Some serious nuclear and radiation accidents have occurred. The severity of nuclear accidents is generally classified using the International Nuclear Event Scale (INES) introduced by the International Atomic Energy Agency (IAEA). The scale ranks anomalous events or accidents on a scale from 0 (a deviation from normal operation that poses no safety risk) to 7 (a major accident with widespread effects). There have been 3 accidents of level 5 or higher in the civilian nuclear power industry, two of which, the Chernobyl accident and the Fukushima accident, are ranked at level 7. The Fukushima Daiichi nuclear accident was caused by the 2011 Tohoku earthquake and tsunami. The accident has not caused any radiation-related deaths but resulted in radioactive contamination of surrounding areas. The difficult cleanup operation is expected to cost tens of billions of dollars over 40 or more years. The Three Mile Island accident in 1979 was a smaller scale accident, rated at INES level 5. There were no direct or indirect deaths caused by the accident. The impact of nuclear accidents is controversial. According to Benjamin K. Sovacool, fission energy accidents ranked first among energy sources in terms of their total economic cost, accounting for 41 percent of all property damage attributed to energy accidents. Another analysis found that coal, oil, liquid petroleum gas and hydroelectric accidents (primarily due to the Banqiao Dam disaster) have resulted in greater economic impacts than nuclear power accidents. 
The study compares latent cancer deaths attributable to nuclear power with immediate deaths from other energy sources per unit of energy generated, and does not include fossil-fuel-related cancer and other indirect deaths caused by fossil fuel consumption in its "severe accident" (an accident with more than five fatalities) classification. The Chernobyl accident in 1986 caused approximately 50 deaths from direct and indirect effects, and some temporary serious injuries from acute radiation syndrome. The future predicted mortality from increases in cancer rates is estimated at about 4,000 in the decades to come. However, the costs have been large and are increasing. Extreme weather events, including events made more severe by climate change, decrease the reliability of nuclear energy. Novel reactor types and the weakening of safety standards to increase the competitiveness of nuclear energy may increase risks or introduce new risks of accidents. Nuclear power operates under an insurance framework that limits or structures accident liabilities in accordance with national and international conventions. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity. This cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a study by the Congressional Budget Office in the United States. These beyond-regular insurance costs for worst-case scenarios are not unique to nuclear power. Hydroelectric power plants are similarly not fully insured against a catastrophic event such as dam failure. For example, the failure of the Banqiao Dam caused the death of an estimated 30,000 to 200,000 people, and 11 million people lost their homes. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state. 
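The mortality comparisons in this section reduce to multiplying a per-TWh death rate by the energy generated. A minimal sketch: the 0.07 deaths/TWh figure for nuclear is quoted earlier in the text, while the rates for the other sources are rough illustrative assumptions of the kind such comparisons use, not sourced values.

```python
# Deaths per TWh of electricity generated. The nuclear figure (0.07) is
# quoted in the text; all other rates are illustrative assumptions.
DEATH_RATE_PER_TWH = {
    "nuclear": 0.07,
    "coal": 24.6,        # assumed; dominated by air pollution
    "oil": 18.4,         # assumed
    "natural gas": 2.8,  # assumed
    "hydro": 1.3,        # assumed; dominated by rare dam failures
}

def expected_deaths(source: str, twh: float) -> float:
    """Expected deaths attributable to generating `twh` TWh from `source`."""
    return DEATH_RATE_PER_TWH[source] * twh
```

On this accounting, generating 100 TWh from nuclear corresponds to about 7 expected deaths, while the same output from coal (under the assumed rate) corresponds to thousands, which is the shape of the comparison the text makes.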
Attacks and sabotage Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities. In the United States, the NRC carries out "Force on Force" (FOF) exercises at all nuclear power plant sites at least once every three years. In the United States, plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. Insider sabotage is also a threat, because insiders can observe and work around security measures. Successful insider crimes have depended on the perpetrators' observation and knowledge of security vulnerabilities. A fire caused $5–10 million worth of damage to New York's Indian Point Energy Center in 1971. The arsonist turned out to be a plant maintenance worker. Nuclear proliferation Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-related nuclear technology to states that do not already possess nuclear weapons. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can also be used to make nuclear weapons. For this reason, nuclear power presents proliferation risks. A nuclear power program can become a route to a nuclear weapon. An example of this is the concern over Iran's nuclear program. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Nuclear Non-Proliferation Treaty, to which 190 countries adhere. 
As of April 2012, there were thirty-one countries with civil nuclear power plants, of which nine had nuclear weapons. The vast majority of these nuclear weapons states produced weapons before building commercial nuclear power stations. A fundamental goal for global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power. The Global Nuclear Energy Partnership was an international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for agreeing to forgo their own indigenous development of a uranium enrichment program. The France-based Eurodif/European Gaseous Diffusion Uranium Enrichment Consortium is a program that successfully implemented this concept, with Spain and other countries without enrichment facilities buying a share of the fuel produced at the French-controlled enrichment facility, but without a transfer of technology. Iran was an early participant from 1974 and remains a shareholder of Eurodif via Sofidif. A 2009 United Nations report said that the revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation as these technologies can produce fissile materials that are directly usable in nuclear weapons. On the other hand, power reactors can also reduce nuclear weapons arsenals when military-grade nuclear materials are reprocessed to be used as fuel in nuclear power plants. The Megatons to Megawatts Program is considered the single most successful non-proliferation program to date. Up to 2005, the program had processed $8 billion of highly enriched, weapons-grade uranium into low enriched uranium suitable as nuclear fuel for commercial fission reactors by diluting it with natural uranium. This corresponds to the elimination of 10,000 nuclear weapons. 
For approximately two decades, this material generated nearly 10 percent of all the electricity consumed in the United States, or about half of all U.S. nuclear electricity, with a total of around 7,000 TWh of electricity produced. In total it is estimated to have cost $17 billion, a "bargain for US ratepayers", with Russia profiting $12 billion from the deal. This was much-needed profit for the Russian nuclear industry, which, after the collapse of the Soviet economy, had difficulty paying for the maintenance and security of the Russian Federation's highly enriched uranium and warheads. The Megatons to Megawatts Program was hailed as a major success by anti-nuclear weapon advocates, as it has largely been the driving force behind the sharp reduction in the number of nuclear weapons worldwide since the Cold War ended. However, without an increase in nuclear reactors and greater demand for fissile fuel, the cost of dismantling and down-blending has dissuaded Russia from continuing its disarmament. As of 2013, Russia appears not to be interested in extending the program. Environmental impact Being a low-carbon energy source with relatively small land-use requirements, nuclear energy can have a positive environmental impact. It also requires a constant supply of significant amounts of water and affects the environment through mining and milling. Its largest potential negative environmental impacts arise from transgenerational risks: nuclear weapons proliferation that may increase the chance of their future use, problems associated with the management of radioactive waste such as groundwater contamination, accidents, and various forms of attacks on waste storage sites, reprocessing plants, or power plants. However, these remain largely potential risks, as historically there have been few disasters at nuclear power plants with known, relatively substantial environmental impacts. 
Carbon emissions Nuclear power is one of the leading low-carbon methods of producing electricity, and in terms of total life-cycle greenhouse gas emissions per unit of energy generated, has emission values comparable to or lower than renewable energy. A 2014 analysis of the carbon footprint literature by the Intergovernmental Panel on Climate Change (IPCC) reported that the embodied total life-cycle emission intensity of nuclear power has a median value of 12 g CO2-eq/kWh, the lowest among all commercial baseload energy sources. This contrasts with coal and natural gas at 820 and 490 g CO2-eq/kWh. From the beginning of its commercialization in the 1970s, nuclear power has prevented the emission of about 64 billion tonnes of carbon dioxide equivalent that would otherwise have resulted from the burning of fossil fuels in thermal power stations. Radiation The average dose from natural background radiation is 2.4 millisievert per year (mSv/a) globally. It varies between 1 mSv/a and 13 mSv/a, depending mostly on the geology of the location. According to the United Nations (UNSCEAR), regular nuclear power plant operations, including the nuclear fuel cycle, increase this amount by 0.0002 mSv/a of public exposure as a global average. The average dose from operating nuclear power plants to the local populations around them is less than 0.0001 mSv/a. For comparison, the average dose to those living within 50 miles of a coal power plant is over three times this dose, at 0.0003 mSv/a. The most affected surrounding populations and male recovery personnel at Chernobyl received an average initial dose of 50 to 100 mSv over a few hours to weeks. The remaining global legacy of the worst nuclear power plant accident is an average exposure of 0.002 mSv/a, continuously dropping as the contamination decays, from an initial high of 0.04 mSv per person, averaged over the entire population of the Northern Hemisphere, in 1986, the year of the accident. 
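Using the IPCC median life-cycle intensities quoted above (12, 820 and 490 g CO2-eq/kWh for nuclear, coal and gas), the emissions avoided when nuclear displaces fossil generation can be sketched with simple unit conversion:

```python
# Life-cycle emission intensities in g CO2-eq/kWh (IPCC medians quoted in the text).
INTENSITY = {"nuclear": 12, "coal": 820, "gas": 490}

def avoided_tonnes_co2(displaced: str, twh: float) -> float:
    """Tonnes of CO2-eq avoided when `twh` TWh that would have come from
    `displaced` generation is produced by nuclear instead."""
    kwh = twh * 1e9                                        # 1 TWh = 1e9 kWh
    grams = (INTENSITY[displaced] - INTENSITY["nuclear"]) * kwh
    return grams / 1e6                                     # grams -> tonnes
```

Each TWh of coal generation displaced avoids roughly 0.8 million tonnes of CO2-eq (about 0.48 million tonnes for gas), which is consistent in order of magnitude with the ~64 billion tonnes quoted for cumulative nuclear generation since the 1970s.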
Examples of Environmental Benefits Proponents note that nuclear-generated electricity avoids 470 million metric tons of carbon dioxide emissions annually that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities, or it is repurposed/recycled for other energy uses. Proponents of nuclear energy also point to the opportunity cost of utilizing other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example of impact provided by proponents of nuclear energy is the 650,000-ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant. Debate on nuclear power The nuclear power debate concerns the controversy surrounding the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes. Proponents of nuclear energy regard it as a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on other energy sources that are often dependent on imports. M. King Hubbert, who popularized the concept of peak oil, saw oil as a resource that would run out and considered nuclear energy its replacement. Proponents also claim that the present quantity of nuclear waste is small and can be reduced through the latest technology of newer reactors, and that the operational safety record of fission electricity in terms of deaths is so far "unparalleled". 
Kharecha and Hansen estimated that "global nuclear power has prevented an average of 1.84 million air pollution-related deaths and 64 gigatonnes of CO2-equivalent (GtCO2-eq) greenhouse gas (GHG) emissions that would have resulted from fossil fuel burning" and, if continued, it could prevent up to 7 million deaths and 240 GtCO2-eq emissions by 2050. Opponents contend that nuclear power poses many threats to people's health and the environment, such as the risk of nuclear weapons proliferation, the challenge of long-term safe waste management, and terrorism. They also contend that nuclear power plants are complex systems where many things can and have gone wrong. Costs of the Chernobyl disaster amount to ~$68 billion as of 2019 and are increasing, the Fukushima disaster is estimated to cost taxpayers ~$187 billion, and radioactive waste management is estimated to cost the EU nuclear operators ~$250 billion by 2050. However, in countries that already use nuclear energy, when not considering reprocessing, intermediate nuclear waste disposal costs could be relatively fixed to certain but unknown degrees, "as the main part of these costs stems from the operation of the intermediate storage facility". Critics find that one of the largest drawbacks to building new nuclear fission power plants is the large construction and operating cost when compared to alternative sustainable energy sources. Further costs include those for ongoing research and development, expensive reprocessing where it is practiced, and decommissioning. Overall, many opponents find that nuclear energy cannot meaningfully contribute to climate change mitigation, finding it, in summary, too dangerous, too expensive, too slow to deploy, and an obstacle to achieving a transition towards sustainability and carbon neutrality, effectively being a distracting competition for resources (i.e. 
human, financial, time, infrastructure and expertise) for the deployment and development of alternative, sustainable energy system technologies (such as wind, ocean and solar – including e.g. floating solar – as well as ways to manage their intermittency other than nuclear baseload generation, such as dispatchable generation, renewables diversification, super grids, flexible energy demand and supply regulating smart grids, and energy storage technologies). Nevertheless, there is ongoing research and debate over the costs of new nuclear, especially in regions where, among other things, seasonal energy storage is difficult to provide and which aim to phase out fossil fuels in favor of low-carbon power faster than the global average. Some find that the financial transition costs of a 100% renewables-based European energy system that has completely phased out nuclear energy could be higher by 2050 based on current technologies (i.e. not considering potential advances in e.g. green hydrogen, transmission and flexibility capacities, ways to reduce energy needs, geothermal energy and fusion energy) when the grid only extends across Europe. Arguments of economics and safety are used by both sides of the debate. Comparison with renewable energy Slowing global warming requires a transition to a low-carbon economy, mainly by burning far less fossil fuel. Limiting global warming to 1.5 °C is technically possible if no new fossil fuel power plants are built from 2019. This has generated considerable interest and dispute in determining the best path forward to rapidly replace fossil fuels in the global energy mix, with intense academic debate. The IEA has at times said that countries without nuclear power should develop it alongside their renewable power. Several studies suggest that it might be theoretically possible to cover a majority of world energy generation with new renewable sources. 
The Intergovernmental Panel on Climate Change (IPCC) has said that if governments were supportive, renewable energy supply could account for close to 80% of the world's energy use by 2050. While in developed nations the economically feasible geography for new hydropower is lacking, with every geographically suitable area largely already exploited, some proponents of wind and solar energy claim these resources alone could eliminate the need for nuclear power. Nuclear power is comparable to, and in some cases lower than, many renewable energy sources in terms of lives lost in the past per unit of electricity delivered. Depending on the recycling of renewable energy technologies, nuclear reactors may produce a much smaller volume of waste, although it is much more toxic, expensive to manage, and longer-lived. A nuclear plant also needs to be disassembled and removed, and much of the disassembled plant needs to be stored as low-level nuclear waste for a few decades. The disposal and management of the wide variety of radioactive waste, of which there are over a quarter of a million tons as of 2018, can cause damage and costs across the world over hundreds of thousands of years – possibly over a million years – due to issues such as leakage, malign retrieval, vulnerability to attacks (including of reprocessing and power plants), groundwater contamination, radiation and leakage to above ground, brine leakage or bacterial corrosion. The European Commission Joint Research Centre found that as of 2021 the necessary technologies for geological disposal of nuclear waste are available and can be deployed. Corrosion experts noted in 2020 that putting the problem of storage off any longer "isn't good for anyone". Separated plutonium and enriched uranium could be used for nuclear weapons, which – even with the current centralized control (e.g. 
state-level) and level of prevalence – are considered to pose a difficult and substantial global risk of major future impacts on human health, lives, civilization and the environment. Speed of transition and investment needed Analysis in 2015 by professor Barry W. Brook and colleagues found that nuclear energy could displace or remove fossil fuels from the electric grid completely within 10 years. This finding was based on the historically modest and proven rate at which nuclear energy was added in France and Sweden during their building programs in the 1980s. In a similar analysis, Brook had earlier determined that 50% of all global energy, including transportation synthetic fuels etc., could be generated within approximately 30 years if the global nuclear fission build rate were identical to historical proven installation rates, calculated in GW per year per unit of global GDP (GW/year/$). This is in contrast to the conceptual studies for 100% renewable energy systems, which would require a global investment per year that is orders of magnitude more costly, and which has no historical precedent. These renewable scenarios would also need far greater land devoted to onshore wind and onshore solar projects. Brook notes that the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives." Contrary to his views, the construction and operating costs of nuclear power are very large compared with those of sustainable energy alternatives, whose costs are decreasing and which are the fastest-growing source of electricity generation, with ongoing research and development into options to move beyond current constraints in a highly decarbonized energy system without reliance on new nuclear. 
The costs of, and the increasing competition from, sustainable energy technologies may be the main drivers of the apparent decline of nuclear power. Some have argued that recent publicity of nuclear energy – including for novel reactor designs like "small modular reactors" – is driven in part or mostly by a "declining industry's desperation for capital and its related lobby depicting it as a solution for climate change". Scientific data indicates that – assuming 2021 emissions levels – humanity has a carbon budget equivalent to only 11 years of emissions left for limiting warming to 1.5 °C, while the construction of new nuclear reactors took a median of 7.2–10.9 years in 2018–2020, substantially longer than scaling up the deployment of wind and solar alongside other measures; construction is especially slow for novel reactor types, as well as more risky, often delayed and more dependent on state support. Researchers have cautioned that novel nuclear technologies – which have been in development for decades, are less tested, have higher proliferation risks, have more new safety problems, are often far from commercialization and are more expensive – may not be available in time. Critics of nuclear energy often oppose only nuclear fission energy, not nuclear fusion; however, fusion energy is unlikely to become commercially widespread before 2050. Land use Nuclear power stations require approximately one square kilometer of land per typical reactor. Environmentalists and conservationists have begun to question the global renewable energy expansion proposals, as they are opposed to the frequently controversial use of once-forested land to situate renewable energy systems. 
Seventy-five academic conservationists signed a letter suggesting a more effective policy to mitigate climate change: reforesting the land proposed for renewable energy production to its prior natural landscape, by means of the native trees that previously inhabited it, in tandem with the lower land-use footprint of nuclear energy, as the path to assure both the commitment to carbon emission reductions and success in landscape rewilding programs that are part of the global native species protection and re-introduction initiatives. These scientists argue that government commitments to increase renewable energy usage while simultaneously making commitments to expand areas of biological conservation are two competing land-use outcomes, in opposition to one another, that are increasingly coming into conflict. With the existing protected areas for conservation at present regarded as insufficient to safeguard biodiversity, "the conflict for space between energy production and habitat will remain one of the key future conservation issues to resolve." Research Advanced fission reactor designs Current fission reactors in operation around the world are second or third generation systems, with most of the first-generation systems having already been retired. Research into advanced generation IV reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals, including improving economics, safety, proliferation resistance, natural resource utilization and the ability to consume existing nuclear waste in the production of electricity. Most of these reactors differ significantly from current operating light water reactors, and are expected to be available for commercial construction after 2030. Hybrid nuclear fusion-fission Hybrid nuclear power is a proposed means of generating power by the use of a combination of nuclear fusion and fission processes. 
The concept dates to the 1950s and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it could be capable of extracting all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude and, more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns. Nuclear fusion Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult, and have yet to be achieved on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s. Several experimental nuclear fusion reactors and facilities exist. The largest and most ambitious international nuclear fusion project currently in progress is ITER, a large tokamak under construction in France. ITER is planned to pave the way for commercial fusion power by
transforms the heat into mechanical energy; an electric generator, which transforms the mechanical energy into electrical energy. When a neutron hits the nucleus of a uranium-235 or plutonium atom, it can split the nucleus into two smaller nuclei. The reaction is called nuclear fission. The fission reaction releases energy and neutrons. The released neutrons can hit other uranium or plutonium nuclei, causing new fission reactions, which release more energy and more neutrons. This is called a chain reaction. In most commercial reactors, the reaction rate is controlled by control rods that absorb excess neutrons. The controllability of nuclear reactors depends on the fact that a small fraction of neutrons resulting from fission are delayed. The time delay between the fission and the release of the neutrons slows down changes in reaction rates and gives time for moving the control rods to adjust the reaction rate. Life cycle of nuclear fuel The life cycle of nuclear fuel starts with uranium mining. The uranium ore is then converted into a compact ore concentrate form, known as yellowcake (U3O8), to facilitate transport. Fission reactors generally need uranium-235, a fissile isotope of uranium. The concentration of uranium-235 in natural uranium is very low (about 0.7%). Some reactors can use this natural uranium as fuel, depending on their neutron economy. These reactors generally have graphite or heavy water moderators. For light water reactors, the most common type of reactor, this concentration is too low, and it must be increased by a process called uranium enrichment. In civilian light water reactors, uranium is typically enriched to 3.5-5% uranium-235. The uranium is then generally converted into uranium oxide (UO2), a ceramic, that is then compressively sintered into fuel pellets, a stack of which forms fuel rods of the proper composition and geometry for the particular reactor. 
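The enrichment step described above obeys a simple U-235 mass balance: the natural-uranium feed splits into an enriched product stream and a depleted tails stream, and conservation of both total mass and U-235 mass fixes how much feed each kilogram of fuel requires. A minimal sketch (the 0.25% tails assay is an assumed typical value, not from the text):

```python
# Uranium enrichment mass balance: feed = product + tails, in both total
# mass and U-235 mass. Solving the two balance equations gives the feed
# needed per unit of enriched product.

def feed_per_kg_product(x_p=0.045, x_f=0.00711, x_w=0.0025):
    """kg of natural-uranium feed per kg of enriched product.

    x_p: product assay (4.5% U-235, within the 3.5-5% range in the text)
    x_f: feed assay of natural uranium (~0.711% U-235)
    x_w: tails (depleted uranium) assay - an assumed value
    """
    return (x_p - x_w) / (x_f - x_w)
```

Under these assumptions, each kilogram of 4.5%-enriched fuel consumes roughly nine kilograms of natural uranium, with the balance leaving the plant as depleted tails.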
After some time in the reactor, the fuel will contain less fissile material and more fission products, until its continued use becomes impractical. At this point, the spent fuel will be moved to a spent fuel pool, which provides cooling for the decay heat and shielding from ionizing radiation. After several months or years, the spent fuel is radioactively and thermally cool enough to be moved to dry storage casks or reprocessed. Uranium resources Uranium is a fairly common element in the Earth's crust: it is approximately as common as tin or germanium, and is about 40 times more common than silver. Uranium is present in trace concentrations in most rocks, soil, and ocean water, but is generally economically extracted only where it is present in high concentrations. Uranium mining can be underground, open-pit, or in-situ leach mining. An increasing number of the highest output mines are remote underground operations, such as McArthur River uranium mine in Canada, which by itself accounts for 13% of global production. As of 2011, the world's known resources of uranium, economically recoverable at the arbitrary price ceiling of US$130/kg, were enough to last for between 70 and 100 years. In 2007, the OECD estimated 670 years of economically recoverable uranium in total conventional resources and phosphate ores, assuming the then-current use rate. Light water reactors make relatively inefficient use of nuclear fuel, mostly using only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable, and newer reactors also achieve a more efficient use of the available resources than older ones. With a pure fast reactor fuel cycle with a burn-up of all the uranium and actinides (which presently make up the most hazardous substances in nuclear waste), there is an estimated 160,000 years' worth of uranium in total conventional resources and phosphate ore at the price of 60–100 US$/kg. 
However, reprocessing is expensive, possibly dangerous, and can be used to manufacture nuclear weapons. One analysis found that uranium prices could increase by two orders of magnitude between 2035 and 2100, and that there could be a shortage near the end of the century. A 2017 study by researchers from MIT and WHOI found that "at the current consumption rate, global conventional reserves of terrestrial uranium (approximately 7.6 million tonnes) could be depleted in a little over a century". Limited uranium-235 supply may inhibit substantial expansion with the current nuclear technology. While various ways to reduce dependence on such resources are being explored, new nuclear technologies are considered unlikely to be available in time for climate change mitigation or for competition with renewable alternatives, in addition to being more expensive and requiring costly research and development. A study found it to be uncertain whether identified resources will be developed quickly enough to provide uninterrupted fuel supply to expanded nuclear facilities, and various forms of mining may be challenged by ecological barriers, costs, and land requirements. Researchers also report considerable import dependence of nuclear energy. Unconventional uranium resources also exist. Uranium is naturally present in seawater at a concentration of about 3 micrograms per liter, with 4.4 billion tons of uranium considered present in seawater at any time. In 2014 it was suggested that it would be economically competitive to produce nuclear fuel from seawater if the process were implemented at large scale. Like fossil fuels, over geological timescales, uranium extracted on an industrial scale from seawater would be replenished by both river erosion of rocks and the natural process of uranium dissolving from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level. 
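The supply-horizon figures above follow from dividing a recoverable resource by an annual consumption rate. A sketch, where the ~62,000 t/yr world consumption figure is an assumption (roughly the recent world rate, not stated in the text) while the 7.6 million tonne reserve figure is the MIT/WHOI number quoted above:

```python
# Years of uranium supply = recoverable resource / annual consumption.
WORLD_USE_TONNES_PER_YEAR = 62_000  # assumed world consumption rate

def supply_years(resource_tonnes, annual_use=WORLD_USE_TONNES_PER_YEAR):
    """Supply horizon in years at a constant consumption rate."""
    return resource_tonnes / annual_use
```

With these inputs, `supply_years(7.6e6)` gives roughly 120 years, in line with the quoted conclusion that conventional terrestrial reserves "could be depleted in a little over a century" at current consumption.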
Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy. Nuclear waste The normal operation of nuclear power plants and facilities produces radioactive waste, or nuclear waste. This type of waste is also produced during plant decommissioning. There are two broad categories of nuclear waste: low-level waste and high-level waste. The first has low radioactivity and includes contaminated items such as clothing, which pose a limited threat. High-level waste is mainly the spent fuel from nuclear reactors, which is very radioactive and must be cooled and then safely disposed of or reprocessed. High-level waste The most important waste stream from nuclear power reactors is spent nuclear fuel, which is considered high-level waste. For LWRs, spent fuel is typically composed of 95% uranium, 4% fission products, and about 1% transuranic actinides (mostly plutonium, neptunium and americium). The plutonium and other transuranics are responsible for the bulk of the long-term radioactivity, whereas the fission products are responsible for the bulk of the short-term radioactivity. High-level waste requires treatment, management, and isolation from the environment. These operations present considerable challenges due to the extremely long periods these materials remain potentially hazardous to living organisms. This is due to long-lived fission products (LLFP), such as technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years). LLFP dominate the waste stream in terms of radioactivity after the more intensely radioactive short-lived fission products (SLFPs) have decayed into stable elements, which takes approximately 300 years. Due to the exponential decrease of radioactivity with time, spent nuclear fuel activity decreases by 99.5% after 100 years. After about 100,000 years, the spent fuel becomes less radioactive than natural uranium ore. 
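The decay arithmetic above follows the standard half-life relation N/N0 = 2^(-t/T_half). A minimal sketch, using the half-lives quoted here for the long-lived fission products; the 30.2-year caesium-137 half-life is an assumed illustrative example of a short-lived fission product, not a figure from the text:

```python
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of a radionuclide left after t years: N/N0 = 2**(-t / half_life)."""
    return 2.0 ** (-t_years / half_life_years)

# Cs-137 (assumed half-life 30.2 y) is nearly gone after ~300 years ...
print(fraction_remaining(300, 30.2))        # ~0.001
# ... while the long-lived fission products are essentially unchanged.
print(fraction_remaining(300, 220_000))     # Tc-99, ~0.999
print(fraction_remaining(300, 15_700_000))  # I-129, ~1.0
```

This is why the short-lived products dominate for roughly the first three centuries, after which the slowly decaying LLFPs are left to dominate the remaining activity.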
Commonly suggested methods to isolate LLFP waste from the biosphere include separation and transmutation, synroc treatments, or deep geological storage. Thermal-neutron reactors, which presently constitute the majority of the world fleet, cannot burn up the reactor-grade plutonium that is generated during reactor operation. This limits the life of nuclear fuel to a few years. In some countries, such as the United States, spent fuel is classified in its entirety as nuclear waste. In other countries, such as France, it is largely reprocessed to produce a partially recycled fuel, known as mixed oxide fuel or MOX. For spent fuel that does not undergo reprocessing, the most concerning isotopes are the medium-lived transuranic elements, which are led by reactor-grade plutonium (half-life 24,000 years). Some proposed reactor designs, such as the Integral Fast Reactor and molten salt reactors, can use as fuel the plutonium and other actinides in spent fuel from light water reactors, thanks to their fast fission spectrum. This offers a potentially more attractive alternative to deep geological disposal. The thorium fuel cycle results in similar fission products, though it creates a much smaller proportion of transuranic elements from neutron capture events within a reactor. Spent thorium fuel, although more difficult to handle than spent uranium fuel, may present somewhat lower proliferation risks. Low-level waste The nuclear industry also produces a large volume of low-level waste, with low radioactivity, in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. Low-level waste can be stored on-site until radiation levels are low enough for it to be disposed of as ordinary waste, or it can be sent to a low-level waste disposal site. 
Waste relative to other types In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants, in particular, produce large amounts of toxic and mildly radioactive ash resulting from the concentration of naturally occurring radioactive materials in coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent from radiation from coal plants is 100 times that from the operation of nuclear plants. Although coal ash is much less radioactive than spent nuclear fuel by weight, coal ash is produced in much higher quantities per unit of energy generated. It is also released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials. Nuclear waste volume is small compared to the energy produced. For example, the Yankee Rowe Nuclear Power Station, which generated 44 billion kilowatt-hours of electricity while in service, stores its complete spent fuel inventory in sixteen casks. It is estimated that producing a lifetime supply of energy for a person at a western standard of living (approximately 3 GWh) would require on the order of the volume of a soda can of low enriched uranium, resulting in a similar volume of spent fuel generated. Waste disposal Following interim storage in a spent fuel pool, the bundles of used fuel rod assemblies of a typical nuclear power station are often stored on site in dry cask storage vessels. Presently, waste is mainly stored at individual reactor sites, and there are over 430 locations around the world where radioactive material continues to accumulate. 
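The soda-can estimate can be sanity-checked with back-of-envelope arithmetic. The burnup (~45 GWd of heat per tonne of fuel), the ~33% thermal-to-electric efficiency, and the uranium metal density below are assumed typical LWR values, not figures from the text:

```python
# Assumed typical LWR values (illustrative, not from the text)
BURNUP_GWD_PER_TONNE = 45.0   # thermal energy released per tonne of fuel
THERMAL_EFFICIENCY = 0.33     # heat-to-electricity conversion
URANIUM_DENSITY_G_CM3 = 19.1  # uranium metal

# Electric output per tonne of fuel: GWd -> GWh, then apply efficiency
electric_gwh_per_tonne = BURNUP_GWD_PER_TONNE * 24 * THERMAL_EFFICIENCY
mass_kg = 3.0 / electric_gwh_per_tonne * 1000        # fuel for a 3 GWh lifetime supply
volume_cm3 = mass_kg * 1000 / URANIUM_DENSITY_G_CM3  # a soda can is ~330 cm^3
print(f"{mass_kg:.1f} kg of uranium, about {volume_cm3:.0f} cm^3")
```

Under these assumptions the answer comes out to roughly 8 kg and a bit over 400 cm^3, i.e. on the order of one standard can, as the text claims.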
Disposal of nuclear waste is often considered the most politically divisive aspect in the lifecycle of a nuclear power facility. The lack of movement of nuclear waste in the 2-billion-year-old natural nuclear fission reactors in Oklo, Gabon has been cited as "a source of essential information today." Experts suggest that centralized underground repositories which are well-managed, guarded, and monitored would be a vast improvement. There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories". With the advent of new technologies, other methods including horizontal drillhole disposal into geologically inactive areas have been proposed. There are no commercial scale purpose-built underground high-level waste repositories in operation. However, in Finland the Onkalo spent nuclear fuel repository of the Olkiluoto Nuclear Power Plant is under construction as of 2015. Reprocessing Most thermal-neutron reactors run on a once-through nuclear fuel cycle, mainly due to the low price of fresh uranium. However, many reactors are also fueled with recycled fissionable materials that remain in spent nuclear fuel. The most common fissionable material that is recycled is the reactor-grade plutonium (RGPu) that is extracted from spent fuel; it is mixed with uranium oxide and fabricated into mixed-oxide or MOX fuel. Because thermal LWRs remain the most common reactor worldwide, this type of recycling is the most common. It is considered to increase the sustainability of the nuclear fuel cycle, reduce the attractiveness of spent fuel to theft, and lower the volume of high-level nuclear waste. Spent MOX fuel cannot generally be recycled for use in thermal-neutron reactors. This issue does not affect fast-neutron reactors, which are therefore preferred in order to achieve the full energy potential of the original uranium. The main constituent of spent fuel from LWRs is slightly enriched uranium. 
This can be recycled into reprocessed uranium (RepU), which can be used in a fast reactor, used directly as fuel in CANDU reactors, or re-enriched for another cycle through an LWR. Re-enriching of reprocessed uranium is common in France and Russia. Reprocessed uranium is also safer in terms of nuclear proliferation potential. Reprocessing has the potential to recover up to 95% of the uranium and plutonium fuel in spent nuclear fuel, as well as reduce long-term radioactivity within the remaining waste. However, reprocessing has been politically controversial because of the potential for nuclear proliferation and varied perceptions of increasing the vulnerability to nuclear terrorism. Reprocessing also leads to higher fuel cost compared to the once-through fuel cycle. While reprocessing reduces the volume of high-level waste, it does not reduce the fission products that are the primary causes of residual heat generation and radioactivity for the first few centuries outside the reactor. Thus, reprocessed waste still requires an almost identical treatment for the first few hundred years. Reprocessing of civilian fuel from power reactors is currently done in France, the United Kingdom, Russia, Japan, and India. In the United States, spent nuclear fuel is currently not reprocessed. The La Hague reprocessing facility in France has operated commercially since 1976 and is responsible for half the world's reprocessing as of 2010. It produces MOX fuel from spent fuel derived from several countries. More than 32,000 tonnes of spent fuel had been reprocessed as of 2015, with the majority from France, 17% from Germany, and 9% from Japan. Breeding Breeding is the process of converting non-fissile material into fissile material that can be used as nuclear fuel. The non-fissile material that can be used for this process is called fertile material, and it constitutes the vast majority of current nuclear waste. This breeding process takes place in breeder reactors. 
As opposed to light water thermal-neutron reactors, which use uranium-235 (0.7% of all natural uranium), fast-neutron breeder reactors use uranium-238 (99.3% of all natural uranium) or thorium. A number of fuel cycles and breeder reactor combinations are considered to be sustainable or renewable sources of energy. In 2006 it was estimated that with seawater extraction, there was likely five billion years' worth of uranium resources for use in breeder reactors. Breeder technology has been used in several reactors, but as of 2006, the high cost of safely reprocessing fuel means that uranium prices of more than US$200/kg are needed before it becomes economically justified. Breeder reactors are however being developed for their potential to burn up all of the actinides (the most active and dangerous components) in the present inventory of nuclear waste, while also producing power and creating additional quantities of fuel for more reactors via the breeding process. As of 2017, there are two breeders producing commercial power, the BN-600 reactor and the BN-800 reactor, both in Russia. The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation. Both China and India are building breeder reactors. The Indian 500 MWe Prototype Fast Breeder Reactor is in the commissioning phase, with plans to build more. Another alternative to fast-neutron breeders is thermal-neutron breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics. India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium. 
Nuclear decommissioning Nuclear decommissioning is the process of dismantling a nuclear facility to the point that it no longer requires measures for radiation protection, returning the facility and its parts to a safe enough level to be entrusted for other uses. Due to the presence of radioactive materials, nuclear decommissioning presents technical and economic challenges. The costs of decommissioning are generally spread over the lifetime of a facility and saved in a decommissioning fund. Installed capacity and electricity production Civilian nuclear power supplied 2,586 terawatt hours (TWh) of electricity in 2019, equivalent to about 10% of global electricity generation, and was the second largest low-carbon power source after hydroelectricity. Since electricity accounts for about 25% of world energy consumption, nuclear power's contribution to global energy was about 2.5% in 2011. This is a little more than the combined global electricity production from wind, solar, biomass and geothermal power, which together provided 2% of global final energy consumption in 2014. Nuclear power's share of global electricity production has fallen from 16.5% in 1997, in large part because the economics of nuclear power have become more difficult. There are 442 civilian fission reactors in the world, with a combined electrical capacity of 392 gigawatts (GW). There are also 53 nuclear power reactors under construction and 98 reactors planned, with a combined capacity of 60 GW and 103 GW, respectively. The United States has the largest fleet of nuclear reactors, generating over 800 TWh per year with an average capacity factor of 92%. Most reactors under construction are generation III reactors in Asia. Regional differences in the use of nuclear power are large. 
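The generation and capacity figures above are linked by the capacity-factor relation: annual generation = installed capacity × 8,760 hours × capacity factor. A quick consistency check on the US figures, where the 800 TWh and 92% values come from the text and the rest is arithmetic:

```python
HOURS_PER_YEAR = 8760

def implied_capacity_gw(annual_twh: float, capacity_factor: float) -> float:
    """Installed capacity implied by a year's generation at a given capacity factor."""
    return annual_twh * 1000 / (HOURS_PER_YEAR * capacity_factor)

# 800 TWh/year at a 92% capacity factor implies roughly 99 GW of installed capacity.
print(implied_capacity_gw(800, 0.92))
```

The same relation explains why a high-capacity-factor source like nuclear can supply 10% of global electricity with a much smaller share of installed capacity.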
The United States produces the most nuclear energy in the world, with nuclear power providing 20% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors – 71% in 2019. In the European Union, nuclear power provides 26% of the electricity as of 2018. Nuclear power is the single largest low-carbon electricity source in the United States, and accounts for two-thirds of the European Union's low-carbon electricity. Nuclear energy policy differs among European Union countries, and some, such as Austria, Estonia, Ireland and Italy, have no active nuclear power stations. In addition, approximately 140 naval vessels using nuclear propulsion are in operation, powered by about 180 reactors. These include military and some civilian ships, such as nuclear-powered icebreakers. International research is continuing into additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems. Economics The cost of a nuclear power plant is typically billions of dollars, a similar cost to other large infrastructure projects around the world. Although this is an obstacle for nuclear advocates working to convince governments and companies to build nuclear reactors, plants are relatively cheap to run once operational. The cost of building a reactor depends on the nation it is built in, the type of design it uses, and the time it takes to finish the project. The only two nations for which data is available that saw cost decreases in the 2000s were India and South Korea. Analysis of the economics of nuclear power must also take into account who bears the risks of future uncertainties. As of 2010, all operating nuclear power plants have been developed by state-owned or regulated electric utility monopolies. 
Many countries have since liberalized the electricity market, where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants. The levelized cost of electricity (LCOE) from a new nuclear power plant is estimated to be 69 USD/MWh, according to an analysis by the International Energy Agency and the OECD Nuclear Energy Agency. This represents the median cost estimate for an nth-of-a-kind nuclear power plant to be completed in 2025, at a discount rate of 7%. Nuclear power was found to be the least-cost option among dispatchable technologies. Variable renewables can generate cheaper electricity: the median cost of onshore wind power was estimated to be 50 USD/MWh, and utility-scale solar power 56 USD/MWh. At the assumed CO2 emission cost of USD 30 per ton, power from coal (88 USD/MWh) and gas (71 USD/MWh) is more expensive than low-carbon technologies. Electricity from long-term operation of nuclear power plants by lifetime extension was found to be the least-cost option, at 32 USD/MWh. Measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power. New small modular reactors, such as those developed by NuScale Power, are aimed at reducing the investment costs for new construction by making the reactors smaller and modular, so that they can be built in a factory. Certain designs had considerable early positive economics, such as the CANDU, which realized a much higher capacity factor and reliability than generation II light water reactors up to the 1990s. Nuclear power plants, though capable of some grid-load following, are typically run as much as possible to keep the cost of the generated electrical energy as low as possible, supplying mostly base-load electricity. 
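An LCOE figure of this kind is a discounted lifetime cost divided by discounted lifetime generation. The sketch below uses made-up illustrative inputs (a 1 GW plant, a $6bn overnight cost spread over a 5-year build, $120m/year for operations and fuel, a 90% capacity factor, and a 60-year life); only the 7% discount rate comes from the text:

```python
def lcoe_usd_per_mwh(capex_usd, annual_opex_usd, annual_mwh,
                     lifetime_years, build_years=5, rate=0.07):
    """Levelized cost: discounted total cost / discounted total generation."""
    # Year-by-year cash flows: capex during construction, opex during operation
    costs = [capex_usd / build_years] * build_years + [annual_opex_usd] * lifetime_years
    energy = [0.0] * build_years + [annual_mwh] * lifetime_years
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy))
    return disc_cost / disc_energy

# 1 GW at a 90% capacity factor -> 1000 MW * 8760 h * 0.9 MWh per year
print(lcoe_usd_per_mwh(6e9, 1.2e8, 1000 * 8760 * 0.9, 60))
```

With these assumed inputs the result lands in the 70–80 USD/MWh range, the same ballpark as the 69 USD/MWh median cited above; it also makes visible why a high discount rate penalizes capital-heavy, long-lived plants.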
Due to their on-line refueling design, PHWRs (of which the CANDU design is a part) continue to hold many world record positions for longest continual electricity generation, often over 800 days. The specific record as of 2019 is held by a PHWR at Kaiga Atomic Power Station, generating electricity continuously for 962 days. Costs not considered in LCOE calculations include funds for research and development, and disasters (the Fukushima disaster is estimated to cost taxpayers ~$187 billion). Governments were in some cases found to force "consumers to pay upfront for potential cost overruns", to subsidize uneconomic nuclear energy, or to be required to do so. Nuclear operators are liable to pay for the waste management in the EU. In the U.S. the Congress reportedly decided 40 years ago that the nation, and not private companies, would be responsible for storing radioactive waste, with taxpayers paying the costs. The World Nuclear Waste Report 2019 found that "even in countries in which the polluter-pays-principle is a legal requirement, it is applied incompletely" and notes the case of the German Asse II deep geological disposal facility, where the retrieval of large amounts of waste has to be paid for by taxpayers. Similarly, other forms of energy, including fossil fuels and renewables, have a portion of their costs covered by governments. Use in space The most common use of nuclear power in space is the use of radioisotope thermoelectric generators, which use radioactive decay to generate power. These power generators are relatively small scale (a few kW), and they are mostly used to power space missions and experiments for long periods where solar power is not available in sufficient quantity, such as in the Voyager 2 space probe. A few space vehicles have been launched using nuclear reactors: 34 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A. 
Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. Safety Nuclear power plants have three unique characteristics that affect their safety, as compared to other power plants. Firstly, intensely radioactive materials are present in a nuclear reactor. Their release to the environment could be hazardous. Secondly, the fission products, which make up most of the intensely radioactive substances in the reactor, continue to generate a significant amount of decay heat even after the fission chain reaction has stopped. If the heat cannot be removed from the reactor, the fuel rods may overheat and release radioactive materials. Thirdly, a criticality accident (a rapid increase of the reactor power) is possible in certain reactor designs if the chain reaction cannot be controlled. These three characteristics have to be taken into account when designing nuclear reactors. All modern reactors are designed so that an uncontrolled increase of the reactor power is prevented by natural feedback mechanisms, a concept known as negative void coefficient of reactivity. If the temperature or the amount of steam in the reactor increases, the fission rate inherently decreases. The chain reaction can also be manually stopped by inserting control rods into the reactor core. Emergency core cooling systems (ECCS) can remove the decay heat from the reactor if normal cooling systems fail. If the ECCS fails, multiple physical barriers limit the release of radioactive materials to the environment even in the case of an accident. The last physical barrier is the large containment building. With a death rate of 0.07 per TWh, nuclear power is the safest energy source per unit of energy generated in terms of mortality when the historical track-record is considered. 
Energy produced by coal, petroleum, natural gas and hydropower has caused more deaths per unit of energy generated due to air pollution and energy accidents. This is found when comparing the immediate deaths from other energy sources to both the immediate and the latent, or predicted, indirect cancer deaths from nuclear energy accidents. When the direct and indirect fatalities (including fatalities resulting from the mining and air pollution) from nuclear power and fossil fuels are compared, the use of nuclear power has been calculated to have prevented about 1.8 million deaths between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels. Following the 2011 Fukushima nuclear disaster, it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life. Serious impacts of nuclear accidents are often not directly attributable to radiation exposure, but rather social and psychological effects. Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients. Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and suicide. A comprehensive 2005 study on the aftermath of the Chernobyl disaster concluded that the mental health impact is the largest public health problem caused by the accident. Frank N. von Hippel, an American scientist, commented that a disproportionate fear of ionizing radiation (radiophobia) could have long-term psychological effects on the population of contaminated areas following the Fukushima disaster. In January 2015, the number of Fukushima evacuees was around 119,000, compared with a peak of around 164,000 in June 2012. Accidents and attacks Accidents Some serious nuclear and radiation accidents have occurred. 
The severity of nuclear accidents is generally classified using the International Nuclear Event Scale (INES) introduced by the International Atomic Energy Agency (IAEA). The scale ranks anomalous events or accidents on a scale from 0 (a deviation from normal operation that poses no safety risk) to 7 (a major accident with widespread effects). There have been 3 accidents of level 5 or higher in the civilian nuclear power industry, two of which, the Chernobyl accident and the Fukushima accident, are ranked at level 7. The Fukushima Daiichi nuclear accident was caused by the 2011 Tohoku earthquake and tsunami. The accident has not caused any radiation-related deaths but resulted in radioactive contamination of surrounding areas. The difficult cleanup operation is expected to cost tens of billions of dollars over 40 or more years. The Three Mile Island accident in 1979 was a smaller scale accident, rated at INES level 5. There were no direct or indirect deaths caused by the accident. The impact of nuclear accidents is controversial. According to Benjamin K. Sovacool, fission energy accidents ranked first among energy sources in terms of their total economic cost, accounting for 41 percent of all property damage attributed to energy accidents. Another analysis found that coal, oil, liquid petroleum gas and hydroelectric accidents (primarily due to the Banqiao Dam disaster) have resulted in greater economic impacts than nuclear power accidents. The study compares latent cancer deaths attributable to nuclear with immediate deaths from other energy sources per unit of energy generated, and does not include fossil fuel related cancer and other indirect deaths created by fossil fuel consumption in its "severe accident" (an accident with more than 5 fatalities) classification. The Chernobyl accident in 1986 caused approximately 50 deaths from direct and indirect effects, and some temporary serious injuries from acute radiation syndrome. 
The future predicted mortality from increases in cancer rates is estimated at about 4000 in the decades to come. However, the costs have been large and are increasing. Extreme weather events, including events made more severe by climate change, decrease the reliability of nuclear energy. Novel reactor types, and the weakening of safety standards to increase the competitiveness of nuclear energy, may increase accident risks or introduce new ones. Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with national and international conventions. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity. This cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a study by the Congressional Budget Office in the United States. These beyond-regular insurance costs for worst-case scenarios are not unique to nuclear power. Hydroelectric power plants are similarly not fully insured against a catastrophic event such as dam failures. For example, the failure of the Banqiao Dam caused the death of an estimated 30,000 to 200,000 people, and 11 million people lost their homes. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state. Attacks and sabotage Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities. 
In the United States, the NRC carries out "Force on Force" (FOF) exercises at all nuclear power plant sites at least once every three years. In the United States, plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. Insider sabotage is also a threat because insiders can observe and work around security measures. Successful insider crimes have depended on the perpetrators' observation and knowledge of security vulnerabilities. A fire caused 5–10 million dollars worth of damage to New York's Indian Point Energy Center in 1971. The arsonist turned out to be a plant maintenance worker. Nuclear proliferation Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-related nuclear technology to states that do not already possess nuclear weapons. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can also be used to make nuclear weapons. For this reason, nuclear power presents proliferation risks. A nuclear power program can become a route to a nuclear weapon. An example of this is the concern over Iran's nuclear program. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-Proliferation Treaty, to which 190 countries adhere. As of April 2012, there are thirty-one countries that have civil nuclear power plants, of which nine have nuclear weapons. The vast majority of these nuclear weapons states produced weapons before building commercial nuclear power stations. A fundamental goal for global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power. 
The Global Nuclear Energy Partnership was an international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for that nation agreeing to forgo their own indigenous development of a uranium enrichment program. The France-based Eurodif/European Gaseous Diffusion Uranium Enrichment Consortium is a program that successfully implemented this concept, with Spain and other countries without enrichment facilities buying a share of the fuel produced at
(BBA and graduate programs) and Norwegian (majority of undergraduate programs and custom programs for local businesses). The school currently participates in exchange programs with 200 foreign institutions in 45 countries. The internationally award-winning main campus in Nydalen (Oslo) was designed by Niels Torp, who also designed Gardermoen Airport. Norsk Kundebarometer Norsk Kundebarometer (NKB) () is a research program run by BI, with a focus on relations between customers and businesses. Based on an annual survey of Norwegian households, it collects data that may be used for comparison between businesses, comparisons between various industries, and comparisons over time. Campuses in Norway Campus Oslo: the main campus is located in Nydalen, Oslo, the capital of Norway. Campus Oslo has more than 14,000 students. Campus Bergen: the second-largest campus is located in Bergen, the second-largest city in Norway. Campus Bergen has more than 3,000 students. Campus Stavanger: located in Stavanger, the oil capital of Norway. Campus Stavanger has approximately 1,200 students. Campus Trondheim: located in Trondheim, the third-largest city in Norway. Campus Trondheim has approximately 1,900 students. Activities abroad BI has educated roughly 1700 students in China through its close relationship with Fudan University in Shanghai, and is also the majority shareholder of the ISM University of Management and Economics (previously known as International School of Management) with around 1800 students located in Vilnius and Kaunas in Lithuania. 
Degree programs

Undergraduate (all taught in Norwegian except Business Administration and Business Analytics):
Accounting & Auditing
Finance
Economics & Management
Business Administration (taught in English)
Business Administration (taught in Norwegian)
Business Analytics (taught in English)
Retail Management
Creative Industries Management
Real Estate
Business & Entrepreneurship
International Management
Marketing
PR & Market Communication
Economics and Business Law

Graduate (all taught in English except the MSc in Accounting & Auditing and the MSc in Law & Business):
MSc in Applied Economics
MSc in Business Analytics
MSc in Business (majors: Accounting & Business Control; Economics; Finance; Marketing; Leadership & Change; Logistics, Operations & Supply Chain Management; Strategy)
MSc in Entrepreneurship and Innovation
MSc in Finance
MSc in Leadership & Organisational Psychology
MSc in Quantitative Finance
QTEM Masters Network programmes: MSc in Applied Economics; MSc in Business Analytics; MSc in Business, major in Economics; MSc in Business, major in Finance
MSc in Strategic Marketing Management
MSc in Accounting & Auditing (taught in Norwegian)
MSc in Law & Business (taught in Norwegian)

Executive:
Executive MBA (general management, in cooperation with Nanyang Business School (Nanyang Technological University), Singapore; Instituto de Empresa, Madrid, Spain; and the Haas School of Business (University of California at Berkeley), USA)
EMBA in Energy Management (in cooperation with Nanyang Business School (Nanyang Technological University), Singapore, and IFP, Paris)
EMBA in Shipping, Offshore, and Finance (in cooperation with Nanyang Business School (Nanyang Technological University), Singapore)
Executive Master in Energy Management (in cooperation with ESCP-EAP in Paris and IFP)
Part-time MBA in China (in cooperation with Fudan University in Shanghai)
Master of Management programs in International Management, Project Commercial Management and Project Leadership
Several Executive Master of Management programs, including a relatively new specialization in security and cultural understanding created in cooperation with
the Norwegian Armed Forces.

Doctoral Programs (PhD)

Student organizations

The school has two student organizations, one for the main campus in Oslo and one for the other campuses. The Oslo student organization is called SBIO. This union was formed in 2005 after the relocation of the three locations in Oslo into
the U.S. in the 1950s–1960s was based on peaceful research and development and the economic prosperity of the country. Although civil-sector nuclear power was established in the 1950s, the country has an active nuclear weapons program which was started in the 1970s. The bomb program has its roots in the aftermath of the Bangladesh Liberation War of 1971, when East Pakistan, following India's successful military intervention and decisive victory over Pakistan, gained independence as the new nation of Bangladesh. This large-scale but clandestine atomic bomb project was directed towards the indigenous development of reactors and military-grade plutonium. In 1974, when India surprised the world with the successful detonation of its own bomb, codenamed Smiling Buddha, it became "imperative for Pakistan" to pursue weapons research. According to a leading scientist in the program, it became clear that once India had detonated its bomb, "Newton's Third Law" came into "operation"; from then on it was a classic case of "action and reaction". Earlier efforts were directed towards mastering plutonium technology from France, but that route was slowed when the plan failed after U.S. intervention to cancel the project. Contrary to popular perception, Pakistan did not forgo the plutonium route: it covertly continued its indigenous research under Munir Ahmad Khan and succeeded with that route in the early 1980s. Reacting to India's first nuclear weapon test, Prime Minister Zulfikar Ali Bhutto and the country's political and military circles saw the test as a final and dangerous threat to Pakistan's "moral and physical existence". With diplomat Aziz Ahmed at his side, Prime Minister Bhutto launched a serious diplomatic offensive and aggressively pressed Pakistan's case at the session of the United Nations Security Council. After 1974, Bhutto's government redoubled its effort, this time focused equally on uranium and plutonium.
Pakistan had established science directorates in almost all of its embassies in the important countries of the world, with theoretical physicist S.A. Butt as director. Abdul Qadeer Khan then established a network through Dubai to smuggle URENCO technology to the Engineering Research Laboratories. Earlier, he had worked with the Physics Dynamics Research Laboratories (FDO), a subsidiary of the Dutch firm VMF-Stork based in Amsterdam. Later, after joining Urenco, he had access through photographs and documents to the technology. Contrary to popular perception, the technology that Khan brought from Urenco was based on first-generation civil reactor technology and was filled with many serious technical errors, though it was an authentic and vital link for the country's gas centrifuge project. After the British government stopped the British subsidiary of the American Emerson Electric Co. from shipping components to Pakistan, he described his frustration with a supplier from Germany: "That man from the German team was unethical. When he did not get the order from us, he wrote a letter to a Labour Party member and questions were asked in [British] Parliament." By 1978 his efforts had paid off and made him a national hero. In early 1996, the next Prime Minister of Pakistan, Benazir Bhutto, made it clear that if India conducted a nuclear test, Pakistan could be forced to "follow suit". In 1997 her statement was echoed by Prime Minister Nawaz Sharif, who maintained that "since 1972, [P]akistan had progressed significantly, and we have left that stage (developmental) far behind. Pakistan will not be made a 'hostage' to India by signing the CTBT, before (India)." In May 1998, within weeks of India's nuclear tests, Pakistan announced that it had conducted six underground tests in the Chagai Hills, five on 28 May and one on 30 May. Seismic events consistent with these claims were recorded.
In 2004, the revelation of Khan's efforts led to the exposure of many defunct European consortiums which had defied export restrictions in the 1970s, and of many defunct Dutch companies that had exported thousands of centrifuges to Pakistan as early as 1976. Many centrifuge components were apparently manufactured by the Malaysian company Scomi Precision Engineering with the assistance of South Asian and German companies, which used a UAE-based computer company as a false front. The network was widely believed to have had the direct involvement of the Government of Pakistan. This claim could not be verified because that government refused to allow the IAEA to interview the alleged head of the nuclear black market, who was none other than Abdul Qadeer Khan. Confessing his crimes a month later on national television, Khan bailed out the government by taking full responsibility. An independent investigation conducted by the International Institute for Strategic Studies (IISS) confirmed that he had control over the import–export deals and that his acquisition activities were largely unsupervised by Pakistani governmental authorities. All of his activities went undetected for several years. He duly confessed to running the atomic proliferation ring from Pakistan to Iran and North Korea, and was immediately given presidential immunity. The exact nature of involvement at the governmental level is still unclear, but the manner in which the government acted cast doubt on the sincerity of Pakistan.

North Korea

The Democratic People's Republic of Korea (better known as North Korea) joined the NPT in 1985 and subsequently signed a safeguards agreement with the IAEA. However, it was believed that North Korea was diverting plutonium extracted from the fuel of its reactor at Yongbyon for use in nuclear weapons. The subsequent confrontation with the IAEA over inspections and suspected violations resulted in North Korea threatening to withdraw from the NPT in 1993.
This eventually led to negotiations with the United States resulting in the Agreed Framework of 1994, which provided for IAEA safeguards to be applied to its reactors and spent fuel rods. These spent fuel rods were sealed in canisters by the United States to prevent North Korea from extracting plutonium from them. North Korea therefore had to freeze its plutonium programme. During this period, Pakistan–North Korea cooperation in missile technology transfer was being established. A high-level delegation of the Pakistan military visited North Korea in August–September 1992, reportedly to discuss the supply of missile technology to Pakistan. In 1993, Prime Minister Benazir Bhutto repeatedly traveled to China and paid a state visit to North Korea. The visits are believed to be related to Pakistan's subsequent acquisition of the technology to develop its Ghauri missile system. During the period 1992–1994, A.Q. Khan was reported to have visited North Korea thirteen times. The missile cooperation program with North Korea was run by the Dr. A. Q. Khan Research Laboratories. At this time China was under U.S. pressure not to supply its M-series Dongfeng missiles to Pakistan. Experts believe that, possibly with Chinese connivance and facilitation, Pakistan was forced to approach North Korea for missile transfers. Reports indicate that North Korea was willing to supply missile sub-systems, including rocket motors, inertial guidance systems, and control and testing equipment, for US$50 million. It is not clear what North Korea got in return. Joseph S. Bermudez Jr. in Jane's Defence Weekly (27 November 2002) reports that Western analysts had begun to question what North Korea received in payment for the missiles; many suspected it was nuclear technology. The KRL was in charge of both the uranium program and the missile program with North Korea. It is therefore likely that cooperation in nuclear technology between Pakistan and North Korea was initiated during this period.
Western intelligence agencies began to notice the exchange of personnel, technology and components between KRL and entities of the North Korean 2nd Economic Committee (responsible for weapons production). A New York Times report on 18 October 2002 quoted U.S. intelligence officials as stating that Pakistan was a major supplier of critical equipment to North Korea. The report added that equipment such as gas centrifuges appeared to have been "part of a barter deal" in which North Korea supplied Pakistan with missiles. Separate reports indicate (The Washington Times, 22 November 2002) that U.S. intelligence had as early as 1999 picked up signs that North Korea was continuing to develop nuclear arms. Other reports also indicate that North Korea had been working covertly to develop an enrichment capability for nuclear weapons for at least five years and had used technology obtained from Pakistan (The Washington Times, 18 October 2002).

Israel

Israel is also thought to possess an arsenal of potentially up to several hundred nuclear warheads, based on estimates of the amount of fissile material it has produced. This has never been openly confirmed or denied, however, owing to Israel's policy of deliberate ambiguity. An Israeli nuclear installation, the Negev Nuclear Research Center, is located about ten kilometers south of Dimona. Its construction commenced in 1958, with French assistance. The official reason given by the Israeli and French governments was to build a nuclear reactor to power a "desalination plant", in order to "green the Negev". The purpose of the Dimona plant is widely assumed to be the manufacturing of nuclear weapons, and the majority of defense experts have concluded that it does in fact do that. However, the Israeli government refuses to confirm or deny this publicly, a policy it refers to as "ambiguity". Norway sold 20 tonnes of heavy water needed for the reactor to Israel in 1959 and 1960 in a secret deal.
There were no "safeguards" required in this deal to prevent the use of the heavy water for non-peaceful purposes. The British newspaper Daily Express accused Israel of working on a bomb in 1960. When the United States intelligence community discovered the purpose of the Dimona plant in the early 1960s, it demanded that Israel agree to international inspections. Israel agreed, but on the condition that U.S. rather than IAEA inspectors be used, and that Israel would receive advance notice of all inspections. Some claim that because Israel knew the schedule of the inspectors' visits, it was able to hide the alleged purpose of the site from the inspectors by installing temporary false walls and other devices before each inspection. The inspectors eventually informed the U.S. government that their inspections were useless due to Israeli restrictions on what areas of the facility they could inspect. In 1969, the United States terminated the inspections. In 1986, Mordechai Vanunu, a former technician at the Dimona plant, revealed to the media some evidence of Israel's nuclear program. Israeli agents arrested him in Italy, drugged him and transported him to Israel. An Israeli court then tried him in secret on charges of treason and espionage, and sentenced him to eighteen years' imprisonment. He was freed on 21 April 2004, but his activities were severely restricted by the Israeli government. He was arrested again on 11 November 2004, though formal charges were not immediately filed. Comments on photographs taken by Vanunu inside the Negev Nuclear Research Center have been made by prominent scientists. British nuclear weapons scientist Frank Barnaby, who questioned Vanunu over several days, estimated Israel had enough plutonium for about 150 weapons. According to Lieutenant Colonel Warner D.
Farr in a report to the USAF Counterproliferation Center, while France was previously a leader in nuclear research, "Israel and France were at a similar level of expertise after WWII, and Israeli scientists could make significant contributions to the French effort." In 1986, Francis Perrin, French high-commissioner for atomic energy from 1951 to 1970, stated that in 1949 Israeli scientists were invited to the Saclay nuclear research facility, and that this cooperation led to a joint effort, including the sharing of knowledge between French and Israeli scientists, especially those with knowledge from the Manhattan Project.

Nuclear arms control in South Asia

The public stances of India and Pakistan on non-proliferation differ markedly. Pakistan has initiated a series of regional security proposals. It has repeatedly proposed a nuclear-free zone in South Asia, and has proclaimed its willingness to engage in nuclear disarmament and to sign the Non-Proliferation Treaty if India would do so. It has endorsed a United States proposal for a regional five-power conference to consider non-proliferation in South Asia. India has taken the view that solutions to regional security issues should be found at the international rather than the regional level, since its chief concern is with China. It therefore rejects Pakistan's proposals. Instead, the 'Gandhi Plan', put forward in 1988, proposed the revision of the Non-Proliferation Treaty, which India regards as inherently discriminatory in favor of the nuclear-weapon states, and a timetable for complete nuclear weapons disarmament. India endorsed early proposals for a Comprehensive Test Ban Treaty and for an international convention to ban the production of highly enriched uranium and plutonium for weapons purposes, known as the 'cut-off' convention.
The United States for some years, especially under the Clinton administration, pursued a variety of initiatives to persuade India and Pakistan to abandon their nuclear weapons programs and to accept comprehensive international safeguards on all their nuclear activities. To this end, the Clinton administration proposed a conference of the five nuclear-weapon states, Japan, Germany, India and Pakistan. India refused this and similar previous proposals, and countered with demands that other potential weapons states, such as Iran and North Korea, should be invited, and that regional limitations would only be acceptable if they were accepted equally by China. The United States would not accept the participation of Iran and North Korea, and these initiatives have lapsed. Another, more recent approach centers on 'capping' the production of fissile material for weapons purposes, in the hope that this would be followed by 'roll back'. To this end, India and the United States jointly sponsored a UN General Assembly resolution in 1993 calling for negotiations for a 'cut-off' convention. Should India and Pakistan join such a convention, they would have to agree to halt the production of fissile materials for weapons and to accept international verification of their relevant nuclear facilities (enrichment and reprocessing plants). It appears that India is now prepared to join negotiations regarding such a Cut-off Treaty, under the UN Conference on Disarmament. Bilateral confidence-building measures between India and Pakistan to reduce the prospects of confrontation have been limited. In 1990 each side ratified a treaty not to attack the other's nuclear installations, and at the end of 1991 they provided one another with a list showing the location of all their nuclear plants, even though the respective lists were regarded as not being wholly accurate.
Early in 1994 India proposed a bilateral agreement for a 'no first use' of nuclear weapons and an extension of the 'no attack' treaty to cover civilian and industrial targets as well as nuclear installations. Having promoted the Comprehensive Test Ban Treaty since 1954, India dropped its support in 1995 and in 1996 attempted to block the Treaty. Following the 1998 tests the question has been reopened, and both Pakistan and India have indicated their intention to sign the CTBT. Indian ratification may be conditional upon the five weapons states agreeing to specific reductions in their nuclear arsenals. The UN Conference on Disarmament has also called upon both countries "to accede without delay to the Non-Proliferation Treaty", presumably as non-weapons states.

NPT signatories

Egypt

In 2004 and 2005, Egypt disclosed past undeclared nuclear activities and material to the IAEA. In 2007 and 2008, high-enriched and low-enriched uranium particles were found in environmental samples taken in Egypt. In 2008, the IAEA stated that Egypt's declarations were consistent with its own findings. In May 2009, Reuters reported that the IAEA was conducting a further investigation in Egypt.

Iran

In 2003, the IAEA reported that Iran had been in breach of its obligations to comply with provisions of its safeguards agreement. In 2005, the IAEA Board of Governors voted in a rare non-consensus decision to find Iran in non-compliance with its NPT Safeguards Agreement and to report that non-compliance to the UN Security Council. In response, the UN Security Council passed a series of resolutions citing concerns about the program. Iran's representative to the UN argued that the sanctions compelled Iran to abandon its rights under the Nuclear Non-Proliferation Treaty to peaceful nuclear technology.
Iran says its uranium enrichment program is exclusively for peaceful purposes and has enriched uranium to "less than 5 percent", consistent with fuel for a nuclear power plant and significantly below the purity of weapons-grade enriched uranium (around 90%) typically used in a weapons program. The director general of the International Atomic Energy Agency, Yukiya Amano, said in 2009 that he had not seen any evidence in IAEA official documents that Iran was developing nuclear weapons.

Iraq

Up to the late 1980s it was generally assumed that any undeclared nuclear activities would have to be based on the diversion of nuclear material from safeguards. States acknowledged the possibility of nuclear activities entirely separate from those covered by safeguards, but it was assumed these would be detected by national intelligence activities. There was no particular effort by the IAEA to attempt to detect them. Iraq had been making efforts to secure a nuclear potential since the 1960s. In the late 1970s a specialised plant, Osiraq, was constructed near Baghdad. The plant was attacked during the Iran–Iraq War and was destroyed by Israeli bombers in June 1981. Not until the 1990 NPT Review Conference did some states raise the possibility of making more use of (for example) provisions for "special inspections" in existing NPT Safeguards Agreements. Special inspections can be undertaken at locations other than those where safeguards routinely apply, if there is reason to believe there may be undeclared material or activities. After inspections in Iraq following the UN Gulf War cease-fire resolution showed the extent of Iraq's clandestine nuclear weapons program, it became clear that the IAEA would have to broaden the scope of its activities. Iraq was an NPT party, and had thus agreed to place all its nuclear material under IAEA safeguards. But the inspections revealed that it had been pursuing an extensive clandestine uranium enrichment programme, as well as a nuclear weapons design programme.
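The enrichment levels at issue in these disputes (reactor fuel below 5 percent U-235 versus roughly 90 percent for weapons) can be put on a common scale using the standard separative work unit (SWU) value function. A minimal sketch in Python, assuming natural-uranium feed at 0.711% U-235 and an illustrative tails assay of 0.3% (both assumed, typical values):

```python
import math

def value_function(x):
    """Separative potential V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(x_product, x_feed=0.00711, x_tails=0.003):
    """SWU needed per kilogram of enriched product.

    x_feed defaults to natural uranium (0.711% U-235); the 0.3%
    tails assay is a typical but assumed value.
    """
    feed = (x_product - x_tails) / (x_feed - x_tails)   # kg feed per kg product
    tails = feed - 1.0                                  # kg tails per kg product
    return (value_function(x_product)
            + tails * value_function(x_tails)
            - feed * value_function(x_feed))

leu = swu_per_kg_product(0.05)   # ~7.2 SWU per kg of 5% reactor fuel
heu = swu_per_kg_product(0.90)   # ~193 SWU per kg of 90% weapons-grade uranium
```

The roughly 27-fold difference per kilogram is one reason enrichment capacity itself, rather than reactor fuel, is the proliferation-sensitive commodity.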
The main thrust of Iraq's uranium enrichment program was the development of technology for electromagnetic isotope separation (EMIS) of indigenous uranium. This uses the same principles as a mass spectrometer (albeit on a much larger scale). Ions of uranium-238 and uranium-235 are separated because they describe arcs of different radii when they move through a magnetic field. This process was used in the Manhattan Project to make the highly enriched uranium used in the Hiroshima bomb, but was abandoned soon afterwards. The Iraqis did the basic research work at their nuclear research establishment at Tuwaitha, near Baghdad, and were building two full-scale facilities at Tarmiya and Ash Sharqat, north of Baghdad. However, when the Gulf War broke out, only a few separators had been installed at Tarmiya, and none at Ash Sharqat. The Iraqis were also very interested in centrifuge enrichment, and had been able to acquire some components, including some carbon-fibre rotors, which they were at an early stage of testing. In May 1998, Newsweek reported that Abdul Qadeer Khan had sent Iraq centrifuge designs, which were apparently confiscated by UNMOVIC officials. Iraqi officials said the documents were authentic but that they had not agreed to work with A. Q. Khan, fearing an ISI sting operation, due to strained relations between the two countries. The Government of Pakistan and A. Q. Khan strongly denied the allegation, with the government declaring the evidence to be "fraudulent". Iraq was clearly in violation of its NPT and safeguards obligations, and the IAEA Board of Governors ruled to that effect. The UN Security Council then ordered the IAEA to remove, destroy or render harmless Iraq's nuclear weapons capability. This was done by mid-1998, but Iraq then ceased all cooperation with the UN, so the IAEA withdrew from this work. The revelations from Iraq provided the impetus for a very far-reaching reconsideration of what safeguards are intended to achieve.
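The arcs-of-different-radii principle behind EMIS can be made concrete: a singly charged ion accelerated through a potential V and bent by a field B follows a radius r = sqrt(2mV/q)/B, so r scales with the square root of the ion mass. The accelerating voltage and field strength below are illustrative assumptions, not parameters of the Iraqi machines:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053907e-27         # atomic mass unit, kg

def bend_radius(mass_amu, volts, b_field):
    """Radius of a singly-ionized ion's arc: r = sqrt(2 m V / q) / B."""
    mass = mass_amu * AMU
    return math.sqrt(2 * mass * volts / E_CHARGE) / b_field

# Illustrative accelerating voltage and field strength (assumed values).
r235 = bend_radius(235.0439, volts=35e3, b_field=0.34)
r238 = bend_radius(238.0508, volts=35e3, b_field=0.34)
separation = r238 - r235   # under a centimetre at a ~1.2 m orbit radius
```

Since r238/r235 = sqrt(238.0508/235.0439) ≈ 1.006, the two beams separate by well under one percent of the orbit radius, which is why EMIS demands enormous magnets and collector slits yet delivers only tiny throughput.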
Libya

Libya possesses ballistic missiles and previously pursued nuclear weapons under the leadership of Muammar Gaddafi. On 19 December 2003, Gaddafi announced that Libya would voluntarily eliminate all materials, equipment and programs that could lead to internationally proscribed weapons, including weapons of mass destruction and long-range ballistic missiles. Libya signed the Nuclear Non-Proliferation Treaty (NPT) in 1968 and ratified it in 1975, and concluded a safeguards agreement with the International Atomic Energy Agency (IAEA) in 1980. In March 2004, the IAEA Board of Governors welcomed Libya's decision to eliminate its formerly undeclared nuclear program, which it found had violated Libya's safeguards agreement, and approved Libya's Additional Protocol. The United States and the United Kingdom assisted Libya in removing equipment and material from its nuclear weapons program, with independent verification by the IAEA.

Myanmar

Reports in the Sydney Morning Herald and Searchina, a Japanese newspaper, cited two Myanmar defectors as saying that the Myanmar junta was secretly building a nuclear reactor and plutonium extraction facility with North Korea's help, with the aim of acquiring its first nuclear bomb within five years. According to the report, "The secret complex, much of it in caves tunnelled into a mountain at Naung Laing in northern Burma, runs parallel to a civilian reactor being built at another site by Russia that both the Russians and Burmese say will be put under international safeguards." In 2002, Myanmar had notified the IAEA of its intention to pursue a civilian nuclear programme. Later, Russia announced that it would build a nuclear reactor in Myanmar. There have also been reports that two Pakistani scientists from the A.Q. Khan stable had been dispatched to Myanmar, where they had settled down to help with Myanmar's project.
Recently, the David Albright-led Institute for Science and International Security (ISIS) rang alarm bells about Myanmar attempting a nuclear project with North Korean help. If true, the full weight of international pressure would be brought against Myanmar, said officials familiar with developments. But equally, the information peddled by the defectors is "preliminary", and could be used by the West to turn the screws on Myanmar, on democracy and human rights issues, in the run-up to the country's elections in 2010. During an ASEAN meeting in Thailand in July 2009, US Secretary of State Hillary Clinton highlighted concerns about the North Korean link. "We know there are also growing concerns about military cooperation between North Korea and Burma which we take very seriously," Clinton said. However, in 2012, after contact with the American president, Barack Obama, the Burmese leader, Thein Sein, renounced military ties with North Korea.

North Korea

The Democratic People's Republic of Korea (DPRK) acceded to the NPT in 1985 as a condition for the supply of a nuclear power station by the USSR. However, it delayed concluding its NPT Safeguards Agreement with the IAEA, a process which should take only 18 months, until April 1992. During that period, it brought into operation a small gas-cooled, graphite-moderated, natural-uranium (metal) fuelled "Experimental Power Reactor" of about 25 MWt (5 MWe), based on the UK Magnox design. While this design was well suited to starting a wholly indigenous nuclear reactor development, it also exhibited all the features of a small plutonium production reactor for weapons purposes. North Korea also made substantial progress in the construction of two larger reactors designed on the same principles, a prototype of about 200 MWt (50 MWe), and a full-scale version of about 800 MWt (200 MWe). Only slow progress was made; construction halted on both in 1994 and has not resumed.
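The thermal and electric ratings quoted above (25 MWt/5 MWe, 200 MWt/50 MWe, 800 MWt/200 MWe) imply modest conversion efficiencies, and a rough rule of thumb for natural-uranium graphite reactors, on the order of 0.9 grams of plutonium per thermal megawatt-day (an assumed, order-of-magnitude figure, as is the 300-day operating year), suggests why even the smallest unit was proliferation-relevant. A sketch:

```python
# Reactor figures quoted in the text: (thermal MW, electric MW).
reactors = {"experimental": (25, 5), "prototype": (200, 50), "full-scale": (800, 200)}

# Electrical conversion efficiency implied by the quoted ratings.
efficiency = {name: mwe / mwt for name, (mwt, mwe) in reactors.items()}
# experimental: 0.20, prototype: 0.25, full-scale: 0.25

# Rough plutonium production, using an assumed ~0.9 g Pu per MWt-day
# and ~300 effective full-power days per year (both illustrative).
GRAMS_PU_PER_MWT_DAY = 0.9
DAYS_PER_YEAR = 300

def pu_kg_per_year(mwt):
    """Order-of-magnitude annual plutonium output for a given thermal power."""
    return mwt * GRAMS_PU_PER_MWT_DAY * DAYS_PER_YEAR / 1000.0

small_reactor_pu = pu_kg_per_year(25)   # ~6.75 kg/year
```

At a commonly assumed 4 to 8 kg of plutonium per weapon, even the 25 MWt unit could support roughly one device per year, consistent with the observation that it "exhibited all the features of a small plutonium production reactor".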
Both reactors have degraded considerably since that time and would take significant efforts to refurbish. In addition, North Korea completed and commissioned a reprocessing plant to recover uranium and plutonium from spent Magnox fuel. That plutonium, if the fuel was only irradiated to a very low burn-up, would have been in a form very suitable for weapons. Although all these facilities at the Yongbyon Nuclear Scientific Research Center were to be under safeguards, there was always the risk that at some stage the DPRK would withdraw from the NPT and use the plutonium for weapons. One of the first steps in applying NPT safeguards is for the IAEA to verify the initial stocks of uranium and plutonium to ensure that all the nuclear materials in the country have been declared for safeguards purposes. While undertaking this work in 1992, IAEA inspectors found discrepancies indicating that the reprocessing plant had been used more often than the DPRK had declared, which suggested that the DPRK could have weapons-grade plutonium which it had not declared to the IAEA. Information passed to the IAEA by a Member State supported that suggestion by indicating that the DPRK had two undeclared waste or other storage sites. In February 1993 the IAEA called on the DPRK to allow special inspections of the two sites so that the initial stocks of nuclear material could be verified. The DPRK refused, and on 12 March announced its intention to withdraw from the NPT (three months' notice is required). In April 1993 the IAEA Board concluded that the DPRK was in non-compliance with its safeguards obligations and reported the matter to the UN Security Council. In June 1993 the DPRK announced that it had "suspended" its withdrawal from the NPT, but subsequently claimed a "special status" with respect to its safeguards obligations. This was rejected by the IAEA.
Once the DPRK's non-compliance had been reported to the UN Security Council, the essential part of the IAEA's mission had been completed. Inspections in the DPRK continued, although inspectors were increasingly hampered in what they were permitted to do by the DPRK's claim of a "special status". However, some 8,000 corroding fuel rods associated with the experimental reactor have remained under close surveillance. Following bilateral negotiations between the United States and the DPRK, and the conclusion of the Agreed Framework in October 1994, the IAEA was given additional responsibilities. The agreement requires a freeze on the operation and construction of the DPRK's plutonium production reactors and their related facilities, and the IAEA is responsible for monitoring the freeze until the facilities are eventually dismantled. The DPRK remains uncooperative with the IAEA verification work and has yet to comply with its safeguards agreement. While Iraq was defeated in a war, allowing the UN the opportunity to seek out and destroy its nuclear weapons programme as part of the cease-fire conditions, the DPRK was not defeated, nor was it vulnerable to other measures, such as trade sanctions. It can scarcely afford to import anything, and sanctions on vital commodities, such as oil, would either be ineffective or risk provoking war. Ultimately, the DPRK was persuaded to stop what appeared to be its nuclear weapons programme in exchange, under the Agreed Framework, for about US$5 billion in energy-related assistance. This included two 1000 MWe light-water nuclear power reactors based on an advanced U.S. System-80 design. In January 2003 the DPRK withdrew from the NPT. In response, discussions among the DPRK, the United States, and China led to a series of six-party talks (the parties being the DPRK, the ROK, China, Japan, the United States and Russia) held in Beijing; the first round began in April 2004 and concerned North Korea's weapons program.
On 10 January 2005, North Korea declared that it was in possession of nuclear weapons. On 19 September 2005, the fourth round of the Six-Party Talks ended with a joint statement in which North Korea agreed to end its nuclear programs and return to the NPT in exchange for diplomatic, energy and economic assistance. However, by the end of 2005 the DPRK had halted all six-party talks because the United States froze certain DPRK international financial assets, such as those in a bank in Macau. On 9 October 2006, North Korea announced that it had performed its first-ever nuclear weapon test. On 18 December 2006, the six-party talks finally resumed. On 13 February 2007, the parties announced "Initial Actions" to implement the 2005 joint statement, including the shutdown and disablement of North Korean nuclear facilities in exchange for energy assistance. Reacting to UN sanctions imposed after missile tests in April 2009, North Korea withdrew from the six-party talks, restarted its nuclear facilities and conducted a second nuclear test on 25 May 2009. On 12 February 2013, North Korea conducted an underground nuclear explosion with an estimated yield of 6 to 7 kilotonnes. The detonation registered as a magnitude 4.9 disturbance in the area around the epicenter.

Russia

Security of nuclear weapons
assistance with Pakistan's nuclear power programme and, reportedly, with missile technology, which exacerbate Indian concerns. In particular, as viewed by Indian strategists, Pakistan is aided by China's People's Liberation Army. India Nuclear power for civil use is well established in India. Its civil nuclear strategy has been directed towards complete independence in the nuclear fuel cycle, necessary because of its outspoken rejection of the NPT. Owing to India's economic and technological isolation after its nuclear test in 1974, the country has largely focused on developing and perfecting fast breeder technology through intensive materials and fuel cycle research at the dedicated centre established for research into fast reactor technology, the Indira Gandhi Centre for Atomic Research (IGCAR) at Kalpakkam, in the southern part of the country. At the moment, India has a small fast breeder reactor and is planning a much larger one (the Prototype Fast Breeder Reactor). This self-sufficiency extends from uranium exploration and mining through fuel fabrication, heavy water production, reactor design and construction, to reprocessing and waste management. It is also developing technology to utilise its abundant resources of thorium as a nuclear fuel. India has 14 small nuclear power reactors in commercial operation, two larger ones under construction, and ten more planned. The 14 operating ones (2548 MWe total) comprise: two 150 MWe BWRs from the United States, which started up in 1969, now use locally enriched uranium and are under safeguards; two small Canadian PHWRs (1972 and 1980), also under safeguards; and ten local PHWRs based on Canadian designs, two of 150 MWe and eight of 200 MWe. Two new 540 MWe and two 700 MWe plants are at Tarapur (known as TAPP: Tarapur Atomic Power Station). The two under construction and two of the planned ones are 450 MWe versions of these 200 MWe domestic products. Construction has been seriously delayed by financial and technical problems. 
In 2001 a final agreement was signed with Russia for the country's first large nuclear power plant, comprising two VVER-1000 reactors, under a Russian-financed US$3 billion contract. The first unit is due to be commissioned in 2007. A further two Russian units are under consideration for the site. Nuclear power supplied 3.1% of India's electricity in 2000. India's weapons material appears to come from a Canadian-designed 40 MW "research" reactor which started up in 1960, well before the NPT, and a 100 MW indigenous unit in operation since 1985. Both use local uranium, as India does not import any nuclear fuel. It is estimated that India may have built up enough weapons-grade plutonium for a hundred nuclear warheads. It is widely believed that the nuclear programs of India and Pakistan used Canadian CANDU reactors to produce fissionable materials for their weapons; however, this is not accurate. Both Canada (by supplying the 40 MW research reactor) and the United States (by supplying 21 tons of heavy water) supplied India with the technology necessary to create a nuclear weapons program, dubbed CIRUS (Canada-India Reactor, United States). Canada sold India the reactor on the condition that the reactor and any by-products would be "employed for peaceful purposes only". Similarly, the United States sold India heavy water for use in the reactor "only... in connection with research into and the use of atomic energy for peaceful purposes". India, in violation of these agreements, used the Canadian-supplied reactor and American-supplied heavy water to produce plutonium for its first nuclear explosion, Smiling Buddha. The Indian government controversially justified this, however, by claiming that Smiling Buddha was a "peaceful nuclear explosion". The country has at least three other research reactors, including a tiny one which is exploring the use of thorium as a nuclear fuel by breeding fissile U-233. 
In addition, an advanced heavy-water thorium cycle is under development. India exploded a nuclear device in 1974, the so-called Smiling Buddha test, which it has consistently claimed was for peaceful purposes. Others saw it as a response to China's nuclear weapons capability. India was then universally perceived, notwithstanding official denials, to possess, or to be able to quickly assemble, nuclear weapons. In 1999 it deployed its own medium-range missile and has developed an intermediate-range missile capable of reaching targets in China's industrial heartland. In 1995 the United States quietly intervened to head off a proposed nuclear test. However, in 1998 there were five more tests in Operation Shakti. These were unambiguously military, including one claimed to be of a sophisticated thermonuclear device, and their declared purpose was "to help in the design of nuclear weapons of different yields and different delivery systems". Indian security policies are driven by its determination to be recognized as a dominant power in the region, its increasing concern with China's expanding nuclear weapons and missile delivery programmes, and its concern with Pakistan's capability to deliver nuclear weapons deep into India. It perceives nuclear weapons as a cost-effective political counter to China's nuclear and conventional weaponry, and the effect of its nuclear weapons policy in provoking Pakistan is, by some accounts, considered incidental. India has had an unhappy relationship with China. After an uneasy ceasefire ended the 1962 war, relations between the two nations were frozen until 1998. Since then a degree of high-level contact has been established and a few elementary confidence-building measures put in place. China still occupies some territory claimed by India, which it captured during that war, and India still occupies some territory claimed by China. China's nuclear weapon and missile support for Pakistan is a major bone of contention. 
American President George W. Bush met with Indian Prime Minister Manmohan Singh to discuss India's involvement with nuclear weapons. The two countries agreed that the United States would give nuclear power assistance to India. Pakistan Over the years, Pakistan has built up a well-established nuclear power infrastructure, dedicated to the industrial and economic development of the country. Its current nuclear policy aims to promote the socio-economic development of its people as a "foremost priority" and to fulfill energy, economic, and industrial needs from nuclear sources. There were three operational commercial nuclear power plants, while three larger ones were under construction. The nuclear power plants supplied 787 megawatts (MW) (roughly 3.6%) of electricity, and the country has projected production of 8800 MW by 2030. Infrastructure established by the IAEA and the U.S. in the 1950s–1960s was based on peaceful research and development and the economic prosperity of the country. Although civil-sector nuclear power was established in the 1950s, the country has an active nuclear weapons program which was started in the 1970s. The bomb program has its roots in the Bangladesh Liberation War of 1971, when India's successful intervention led to a decisive victory over Pakistan and East Pakistan gained independence as the new nation of Bangladesh. This large-scale but clandestine atomic bomb project was directed towards the indigenous development of reactors and military-grade plutonium. In 1974, when India surprised the world with the successful detonation of its own bomb, codenamed Smiling Buddha, it became "imperative for Pakistan" to pursue weapons research. According to a leading scientist in the program, it became clear that once India detonated its bomb, "Newton's Third Law" came into "operation"; from then on it was a classic case of "action and reaction". 
Earlier efforts were directed towards mastering plutonium technology from France, but that route was slowed when the plan failed after U.S. intervention to cancel the project. Contrary to popular perception, Pakistan did not forego the "plutonium" route; it covertly continued its indigenous research under Munir Ahmad Khan and succeeded with that route in the early 1980s. Reacting to India's first nuclear weapon test, Prime Minister Zulfikar Ali Bhutto and the country's political and military science circles saw the test as a final and dangerous threat to Pakistan's "moral and physical existence". With diplomat Aziz Ahmed at his side, Prime Minister Bhutto launched a serious diplomatic offensive and aggressively pressed Pakistan's case at the session of the United Nations Security Council. After 1974, Bhutto's government redoubled its effort, this time equally focused on uranium and plutonium. Pakistan had established science directorates in almost all of its embassies in the important countries of the world, with theoretical physicist S.A. Butt as director. Abdul Qadeer Khan then established a network through Dubai to smuggle URENCO technology to the Engineering Research Laboratories. Earlier, he had worked with the Physics Dynamics Research Laboratories (FDO), a subsidiary of the Dutch firm VMF-Stork based in Amsterdam. Later, after joining Urenco, he had access through photographs and documents to the technology. Contrary to popular perception, the technology that Khan had brought from Urenco was based on first-generation civil reactor technology, filled with many serious technical errors, though it was an authentic and vital link for the country's gas centrifuge project. After the British Government stopped the British subsidiary of the American Emerson Electric Co. from shipping components to Pakistan, he described his frustration with a supplier from Germany: "That man from the German team was unethical. 
When he did not get the order from us, he wrote a letter to a Labour Party member and questions were asked in [British] Parliament." By 1978, his efforts had paid off and made him a national hero. In early 1996, Prime Minister Benazir Bhutto made it clear that if India conducted a nuclear test, Pakistan could be forced to "follow suit". In 1997, her statement was echoed by Prime Minister Nawaz Sharif, who maintained that "since 1972, [P]akistan had progressed significantly, and we have left that stage (developmental) far behind. Pakistan will not be made a 'hostage' to India by signing the CTBT, before (India)." In May 1998, within weeks of India's nuclear tests, Pakistan announced that it had conducted six underground tests in the Chagai Hills, five on 28 May and one on 30 May. Seismic events consistent with these claims were recorded. In 2004, the revelation of Khan's efforts led to the exposure of many defunct European consortiums which had defied export restrictions in the 1970s, and of many defunct Dutch companies that had exported thousands of centrifuges to Pakistan as early as 1976. Many centrifuge components were apparently manufactured by the Malaysian firm Scomi Precision Engineering with the assistance of South Asian and German companies, which used a UAE-based computer company as a false front. The network was widely believed to have had the direct involvement of the Government of Pakistan. This claim could not be verified due to the refusal of that Government to allow the IAEA to interview the alleged head of the nuclear black market, who happened to be none other than Abdul Qadeer Khan. Confessing his crimes a month later on national television, Khan bailed out the Government by taking full responsibility. An independent investigation conducted by the International Institute for Strategic Studies (IISS) confirmed that he had control over the import-export deals, and that his acquisition activities were largely unsupervised by Pakistani governmental authorities. 
All of his activities went undetected for several years. He duly confessed to running the atomic proliferation ring from Pakistan to Iran and North Korea. He was immediately given presidential immunity. The exact nature of involvement at the governmental level is still unclear, but the manner in which the government acted cast doubt on the sincerity of Pakistan. North Korea The Democratic People's Republic of Korea (better known as North Korea) joined the NPT in 1985 and subsequently signed a safeguards agreement with the IAEA. However, it was believed that North Korea was diverting plutonium extracted from the fuel of its reactor at Yongbyon for use in nuclear weapons. The subsequent confrontation with the IAEA on the issue of inspections and suspected violations resulted in North Korea threatening to withdraw from the NPT in 1993. This eventually led to negotiations with the United States resulting in the Agreed Framework of 1994, which provided for IAEA safeguards being applied to its reactors and spent fuel rods. These spent fuel rods were sealed in canisters by the United States to prevent North Korea from extracting plutonium from them. North Korea therefore had to freeze its plutonium programme. During this period, Pakistan–North Korea cooperation in missile technology transfer was being established. A high-level delegation of the Pakistani military visited North Korea in August–September 1992, reportedly to discuss the supply of missile technology to Pakistan. In 1993, Prime Minister Benazir Bhutto traveled repeatedly to China and paid a state visit to North Korea. The visits are believed to be related to Pakistan's subsequent acquisition of the technology to develop its Ghauri missile system. During the period 1992–1994, A.Q. Khan was reported to have visited North Korea thirteen times. The missile cooperation program with North Korea was under the Dr. A. Q. Khan Research Laboratories. At this time China was under U.S. 
pressure not to supply its M-series of Dongfeng missiles to Pakistan. Experts believe that, possibly with Chinese connivance and facilitation, Pakistan was forced to approach North Korea for missile transfers. Reports indicate that North Korea was willing to supply missile sub-systems, including rocket motors, inertial guidance systems, and control and testing equipment, for US$50 million. It is not clear what North Korea got in return. Joseph S. Bermudez Jr. in Jane's Defence Weekly (27 November 2002) reports that Western analysts had begun to question what North Korea received in payment for the missiles; many suspected it was nuclear technology. The KRL was in charge of both the uranium program and the missile program with North Korea. It is therefore likely that cooperation in nuclear technology between Pakistan and North Korea was initiated during this period. Western intelligence agencies began to notice the exchange of personnel, technology and components between KRL and entities of the North Korean 2nd Economic Committee (responsible for weapons production). A New York Times report on 18 October 2002 quoted U.S. intelligence officials as stating that Pakistan was a major supplier of critical equipment to North Korea. The report added that equipment such as gas centrifuges appeared to have been "part of a barter deal" in which North Korea supplied Pakistan with missiles. Separate reports indicate (The Washington Times, 22 November 2002) that U.S. intelligence had as early as 1999 picked up signs that North Korea was continuing to develop nuclear arms. Other reports also indicate that North Korea had been working covertly to develop an enrichment capability for nuclear weapons for at least five years and had used technology obtained from Pakistan (The Washington Times, 18 October 2002). 
Israel Israel is also thought to possess an arsenal of potentially up to several hundred nuclear warheads, based on estimates of the amount of fissile material it has produced. This has never been openly confirmed or denied, however, owing to Israel's policy of deliberate ambiguity. An Israeli nuclear installation, the Negev Nuclear Research Center, is located about ten kilometers to the south of Dimona. Its construction commenced in 1958, with French assistance. The official reason given by the Israeli and French governments was to build a nuclear reactor to power a "desalination plant", in order to "green the Negev". The purpose of the Dimona plant is widely assumed to be the manufacturing of nuclear weapons, and the majority of defense experts have concluded that it does in fact do that. However, the Israeli government refuses to confirm or deny this publicly, a policy it refers to as "ambiguity". Norway sold Israel the 20 tonnes of heavy water needed for the reactor in 1959 and 1960 in a secret deal. There were no "safeguards" required in this deal to prevent the use of the heavy water for non-peaceful purposes. The British newspaper Daily Express accused Israel of working on a bomb in 1960. When the United States intelligence community discovered the purpose of the Dimona plant in the early 1960s, it demanded that Israel agree to international inspections. Israel agreed, but on the condition that U.S., rather than IAEA, inspectors were used, and that Israel would receive advance notice of all inspections. Some claim that because Israel knew the schedule of the inspectors' visits, it was able to hide the alleged purpose of the site from the inspectors by installing temporary false walls and other devices before each inspection. The inspectors eventually informed the U.S. government that their inspections were useless due to Israeli restrictions on what areas of the facility they could inspect. In 1969, the United States terminated the inspections. 
In 1986, Mordechai Vanunu, a former technician at the Dimona plant, revealed to the media some evidence of Israel's nuclear program. Israeli agents arrested him in Italy, drugged him and transported him to Israel. An Israeli court then tried him in secret on charges of treason and espionage, and sentenced him to eighteen years' imprisonment. He was freed on 21 April 2004, but was placed under severe restrictions by the Israeli government. He was arrested again on 11 November 2004, though formal charges were not immediately filed. Prominent scientists have commented on photographs taken by Vanunu inside the Negev Nuclear Research Center. British nuclear weapons scientist Frank Barnaby, who questioned Vanunu over several days, estimated Israel had enough plutonium for about 150 weapons. According to Lieutenant Colonel Warner D. Farr in a report to the USAF Counterproliferation Center, while France was previously a leader in nuclear research, "Israel and France were at a similar level of expertise after WWII, and Israeli scientists could make significant contributions to the French effort." In 1986 Francis Perrin, French high-commissioner for atomic energy from 1951 to 1970, stated that in 1949 Israeli scientists were invited to the Saclay nuclear research facility; this cooperation led to a joint effort that included the sharing of knowledge between French and Israeli scientists, especially those with knowledge from the Manhattan Project. Nuclear arms control in South Asia The public stances of India and Pakistan on non-proliferation differ markedly. Pakistan has initiated a series of regional security proposals. It has repeatedly proposed a nuclear-free zone in South Asia, and has proclaimed its willingness to engage in nuclear disarmament and to sign the Non-Proliferation Treaty if India would do so. It has endorsed a United States proposal for a regional five-power conference to consider non-proliferation in South Asia. 
India has taken the view that solutions to regional security issues should be found at the international rather than the regional level, since its chief concern is with China. It therefore rejects Pakistan's proposals. Instead, the 'Gandhi Plan', put forward in 1988, proposed the revision of the Non-Proliferation Treaty, which India regards as inherently discriminatory in favor of the nuclear-weapon states, and a timetable for complete nuclear weapons disarmament. It endorsed early proposals for a Comprehensive Test Ban Treaty and for an international convention to ban the production of highly enriched uranium and plutonium for weapons purposes, known as the 'cut-off' convention. For some years, especially under the Clinton administration, the United States pursued a variety of initiatives to persuade India and Pakistan to abandon their nuclear weapons programs and to accept comprehensive international safeguards on all their nuclear activities. To this end, the Clinton administration proposed a conference of the five nuclear-weapon states, Japan, Germany, India and Pakistan. India refused this and similar previous proposals, and countered with demands that other potential weapons states, such as Iran and North Korea, should be invited, and that regional limitations would only be acceptable if they were accepted equally by China. The United States would not accept the participation of Iran and North Korea, and these initiatives have lapsed. Another, more recent approach centers on 'capping' the production of fissile material for weapons purposes, to be followed, it is hoped, by 'roll back'. To this end, India and the United States jointly sponsored a UN General Assembly resolution in 1993 calling for negotiations for a 'cut-off' convention. 
Should India and Pakistan join such a convention, they would have to agree to halt the production of fissile materials for weapons and to accept international verification of their relevant nuclear facilities (enrichment and reprocessing plants). It appears that India is now prepared to join negotiations regarding such a cut-off treaty, under the UN Conference on Disarmament. Bilateral confidence-building measures between India and Pakistan to reduce the prospects of confrontation have been limited. In 1990 each side ratified a treaty not to attack the other's nuclear installations, and at the end of 1991 they provided one another with a list showing the location of all their nuclear plants, even though the respective lists were regarded as not being wholly accurate. Early in 1994 India proposed a bilateral agreement on 'no first use' of nuclear weapons and an extension of the 'no attack' treaty to cover civilian and industrial targets as well as nuclear installations. Having promoted the Comprehensive Test Ban Treaty since 1954, India dropped its support in 1995 and in 1996 attempted to block the Treaty. Following the 1998 tests the question has been reopened, and both Pakistan and India have indicated their intention to sign the CTBT. Indian ratification may be conditional upon the five weapons states agreeing to specific reductions in their nuclear arsenals. The UN Conference on Disarmament has also called upon both countries "to accede without delay to the Non-Proliferation Treaty", presumably as non-weapons states. NPT signatories Egypt In 2004 and 2005, Egypt disclosed past undeclared nuclear activities and material to the IAEA. In 2007 and 2008, high-enriched and low-enriched uranium particles were found in environmental samples taken in Egypt. In 2008, the IAEA stated that Egypt's declarations were consistent with its own findings. In May 2009, Reuters reported that the IAEA was conducting a further investigation in Egypt. 
Iran In 2003, the IAEA reported that Iran had been in breach of its obligations to comply with provisions of its safeguards agreement. In 2005, the IAEA Board of Governors voted in a rare non-consensus decision to find Iran in non-compliance with its NPT Safeguards Agreement and to report that non-compliance to the UN Security Council. In response, the UN Security Council passed a series of resolutions citing concerns about the program. Iran's representative to the UN argues that sanctions compel Iran to abandon its rights under the Nuclear Nonproliferation Treaty to peaceful nuclear technology. Iran says its uranium enrichment program is exclusively for peaceful purposes and has enriched uranium to "less than 5 percent", consistent with fuel for a nuclear power plant and significantly below the purity of weapons-grade uranium (around 90%) typically used in a weapons program. The director general of the International Atomic Energy Agency, Yukiya Amano, said in 2009 he had not seen any evidence in IAEA official documents that Iran was developing nuclear weapons. Iraq Up to the late 1980s it was generally assumed that any undeclared nuclear activities would have to be based on the diversion of nuclear material from safeguards. States acknowledged the possibility of nuclear activities entirely separate from those covered by safeguards, but it was assumed they would be detected by national intelligence activities. There was no particular effort by the IAEA to attempt to detect them. Iraq had been making efforts to secure a nuclear potential since the 1960s. In the late 1970s a specialised plant, Osiraq, was constructed near Baghdad. The plant was attacked during the Iran–Iraq War and was destroyed by Israeli bombers in June 1981. Not until the 1990 NPT Review Conference did some states raise the possibility of making more use of (for example) provisions for "special inspections" in existing NPT Safeguards Agreements. 
Special inspections can be undertaken at locations other than those where safeguards routinely apply, if there is reason to believe there may be undeclared material or activities. After inspections in Iraq following the UN Gulf War cease-fire resolution showed the extent of Iraq's clandestine nuclear weapons program, it became clear that the IAEA would have to broaden the scope of its activities. Iraq was an NPT party, and had thus agreed to place all its nuclear material under IAEA safeguards. But the inspections revealed that it had been pursuing an extensive clandestine uranium enrichment programme, as well as a nuclear weapons design programme. The main thrust of Iraq's uranium enrichment program was the development of technology for electromagnetic isotope separation (EMIS) of indigenous uranium. This uses the same principles as a mass spectrometer (albeit on a much larger scale): ions of uranium-238 and uranium-235 are separated because they describe arcs of different radii when they move through a magnetic field. This process was used in the Manhattan Project to make the highly enriched uranium used in the Hiroshima bomb, but was abandoned soon afterwards. The Iraqis did the basic research work at their nuclear research establishment at Tuwaitha, near Baghdad, and were building two full-scale facilities at Tarmiya and Ash Sharqat, north of Baghdad. However, when the war broke out, only a few separators had been installed at Tarmiya, and none at Ash Sharqat. The Iraqis were also very interested in centrifuge enrichment, and had been able to acquire some components, including some carbon-fibre rotors, which they were at an early stage of testing. In May 1998, Newsweek reported that Abdul Qadeer Khan had sent Iraq centrifuge designs, which were apparently confiscated by UNMOVIC officials. Iraqi officials said the documents were authentic but that they had not agreed to work with A. Q. 
Khan, fearing an ISI sting operation, due to the strained relations between the two countries. The Government of Pakistan and A. Q. Khan strongly denied the allegation, declaring the evidence to be "fraudulent". Iraq was clearly in violation of its NPT and safeguards obligations, and the IAEA Board of Governors ruled to that effect. The UN Security Council then ordered the IAEA to remove, destroy or render harmless Iraq's nuclear weapons capability. This was done by mid-1998, but Iraq then ceased all cooperation with the UN, so the IAEA withdrew from this work. The revelations from Iraq provided the impetus for a very far-reaching reconsideration of what safeguards are intended to achieve. Libya Libya possesses ballistic missiles and previously pursued nuclear weapons under the leadership of Muammar Gaddafi.
Law Nuclear Non-Proliferation Treaty, since 1970 Neighbourhood Policing Team, UK Organizations National Philanthropic Trust, offering donor-advised funds Places Northville-Placid Trail, New York, US Technology National pipe thread, U.S. standards Nested Page Tables, later Rapid Virtualization Indexing, an AMD technology Nissan NPT-90, a racing car Non-pneumatic tire or airless tire IPv6-to-IPv6 Network Prefix Translation (NPTv6) Miscellaneous Nepal
Nuclear potential energy, the potential energy of the particles inside an atomic nucleus Nuclear Energy (sculpture), a
A "folded" hierarchy allows a single definition to be represented several times by instances. An "unfolded" hierarchy does not allow a definition to be used more than once in the hierarchy. Folded hierarchies can be extremely compact. A small netlist of just a few instances can describe designs with a very large number of instances. For example, suppose definition A is a simple primitive, like a memory cell. Then suppose definition B contains 32 instances of A; C contains 32 instances of B; D contains 32 instances of C; and E contains 32 instances of D. The design now contains 5 definitions (A through E) and 128 instances. Yet, E describes a circuit that contains over a million memory cells. Unfolding In a "flat" design, only primitives are instanced. Hierarchical designs can be recursively "exploded" ("flattened") by creating a new copy (with a new name) of each definition each time it is used. If the design is highly folded, expanding it like this will result in a much larger netlist database, but preserves the hierarchy dependencies. Given a hierarchical netlist, the list of instance names in a path from the root definition to a primitive instance specifies the single unique path to that primitive. The paths to every primitive, taken together, comprise a large but flat netlist that is exactly equivalent to the compact hierarchical version. Backannotation Backannotation is data that can be added to a hierarchical netlist. Such data are usually kept separate from the netlist, because several alternate sets of data can be applied to a single netlist. These data may have been extracted from a physical design, and might provide extra information for more accurate simulations. Usually the data consist of a hierarchical path and a piece of data for that primitive, such as RC delay values due to interconnect. Inheritance Another concept often used in netlists is that of inheritance. 
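The 32-per-level example and the path-based unfolding described above can be sketched in a few lines of Python. This is a hypothetical toy representation (definition names, instance names, and the path separator are all illustrative, not any real EDA format):

```python
# Folded netlist: each definition maps to a list of
# (child_definition, instance_name) pairs; primitives have no children.
defs = {
    "A": [],                                  # primitive (a memory cell)
    "B": [("A", f"a{i}") for i in range(32)], # 32 instances of A
    "C": [("B", f"b{i}") for i in range(32)],
    "D": [("C", f"c{i}") for i in range(32)],
    "E": [("D", f"d{i}") for i in range(32)],
}

def count_primitives(name):
    """Number of primitive instances reachable from a definition."""
    children = defs[name]
    if not children:
        return 1
    return sum(count_primitives(child) for child, _ in children)

def flatten(name, path=""):
    """Yield the unique hierarchical path of every primitive instance."""
    children = defs[name]
    if not children:
        yield path
        return
    for child, inst in children:
        yield from flatten(child, f"{path}/{inst}" if path else inst)

print(count_primitives("E"))  # 32**4 = 1,048,576 memory cells
print(next(flatten("E")))     # first path: "d0/c0/b0/a0"
```

Note that `flatten` is a generator: the million paths of the flat view exist only on demand, which mirrors why tools prefer to keep the folded form and unfold lazily.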
Suppose a definition of a capacitor has an associated attribute called "Capacitance", corresponding to the physical property of the same name, with a default value of "100 pF" (100 picofarads). Each instance of this capacitor might also have such an attribute, only with a different value of capacitance. Other instances might not associate any capacitance at all. In the case where no capacitance is specified for an instance, the instance will "inherit" the 100 pF value from its definition. A value that is specified will "override" the value on the definition. If a great number of attributes end up being the same as on the definition, a great amount of information can be "inherited" and need not be redundantly specified in the netlist, saving space and making the design easier to read by both machines and people. 
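The inherit-or-override lookup described above amounts to falling back from an instance's own attributes to its definition's defaults. A minimal sketch (class and attribute names are illustrative):

```python
# Hypothetical sketch of netlist attribute inheritance.
class Definition:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes  # default attribute values

class Instance:
    def __init__(self, definition, name, **overrides):
        self.definition = definition
        self.name = name
        self.overrides = overrides    # instance-level attribute values

    def get(self, attr):
        # An instance-level value overrides the definition's default;
        # otherwise the value is "inherited" from the definition.
        if attr in self.overrides:
            return self.overrides[attr]
        return self.definition.attributes[attr]

cap = Definition("capacitor", capacitance="100 pF")
c1 = Instance(cap, "c1")                        # inherits the default
c2 = Instance(cap, "c2", capacitance="220 pF")  # overrides it

print(c1.get("capacitance"))  # "100 pF" (inherited)
print(c2.get("capacitance"))  # "220 pF" (overridden)
```

The space saving follows directly: only `c2` stores a capacitance value; every conforming instance stores nothing.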
refer to descriptions of the parts or devices used. Each time a part is used in a netlist, this is called an "instance". These descriptions will usually list the connections that are made to that kind of device, and some basic properties of the device. These connection points are called "terminals" or "pins", among several other names. An "instance" could be anything from a MOSFET transistor or a bipolar junction transistor, to a resistor, a capacitor, or an integrated circuit chip. Instances have "terminals". In the case of a vacuum cleaner, these terminals would be the three metal prongs in the plug. Each terminal has a name; continuing the vacuum cleaner example, they might be "Neutral", "Live" and "Ground". Usually, each instance will have a unique name, so that if you have two instances of vacuum cleaners, one might be "vac1" and the other "vac2". Besides their names, they might otherwise be identical. Networks (nets) are the "wires" that connect things together in the circuit. There may or may not be any special attributes associated with the nets in a design, depending on the particular language the netlist is written in and that language's features. Instance-based netlists usually provide a list of the instances used in a design. Along with each instance, either an ordered list of net names is provided, or a list of pairs is provided, each giving an instance port name along with the net name to which that port is connected. In this kind of description, the list of nets can be gathered from the connection lists, and there is no place to associate particular attributes with the nets themselves. SPICE is an example of an instance-based netlist format. Net-based netlists usually describe all the instances and their attributes, then describe each net and say which port on each instance it is connected to. This allows attributes to be associated with nets. EDIF is probably the most famous of the net-based netlist formats. 
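The contrast between the two styles can be made concrete with a toy circuit: a resistor R1 and a capacitor C1 sharing node "n1", with "0" as ground. This is a hypothetical Python sketch (the port names "p" and "n" are assumptions for illustration, not part of SPICE or EDIF):

```python
# Instance-based (SPICE-like): each instance carries an ordered list of
# net names; the set of nets is only implicit in the connection lists.
instance_based = [
    ("R1", "resistor",  ["n1", "0"]),
    ("C1", "capacitor", ["n1", "0"]),
]

# Net-based (EDIF-like): nets are first-class objects listing the
# (instance, port) pairs they join, so attributes can hang off a net.
net_based = {
    "n1": [("R1", "p"), ("C1", "p")],
    "0":  [("R1", "n"), ("C1", "n")],
}

# The net-based view can be derived from the instance-based one,
# assuming port name "p" for the first terminal and "n" for the second.
derived = {}
for inst, _kind, nets in instance_based:
    for port, net in zip(("p", "n"), nets):
        derived.setdefault(net, []).append((inst, port))

print(derived == net_based)  # the two views describe the same circuit
```

Going the other way is equally mechanical, which is why converters between instance-based and net-based formats are routine; the practical difference is only where attributes can be attached.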
Hierarchy In large designs, it is common practice to split the design into pieces, each piece becoming a "definition" which can be used as an instance in the design.
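The inherit/override behaviour of attributes such as the capacitor's default "100 pF" capacitance can be sketched in a few lines of Python (an illustrative model, not any particular EDA tool's API):

```python
class Definition:
    """A part definition, carrying default attribute values."""
    def __init__(self, name, **defaults):
        self.name = name
        self.defaults = defaults  # attributes every instance inherits

class Instance:
    """A use of a definition, optionally overriding attributes."""
    def __init__(self, name, definition, **overrides):
        self.name = name
        self.definition = definition
        self.overrides = overrides  # attributes specified on this instance

    def attribute(self, key):
        # A value on the instance overrides the definition's default;
        # otherwise the instance inherits the default.
        return self.overrides.get(key, self.definition.defaults.get(key))

cap = Definition("capacitor", Capacitance="100 pF")
c1 = Instance("c1", cap)                        # specifies nothing: inherits
c2 = Instance("c2", cap, Capacitance="220 pF")  # overrides the default

print(c1.attribute("Capacitance"))  # 100 pF (inherited)
print(c2.attribute("Capacitance"))  # 220 pF (overridden)
```

Only `c2`'s override needs to appear in the netlist; `c1` contributes no attribute text at all, which is the space saving the text describes.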
U.S. programs to reduce risk of nuclear terrorism The United States has taken the lead in ensuring that nuclear materials globally are properly safeguarded. A popular program that has received bipartisan domestic support for over a decade is the Cooperative Threat Reduction Program (CTR). While this program has been deemed a success, many believe that its funding levels need to be increased to ensure that all dangerous nuclear materials are secured as expeditiously as possible. The CTR program has led to several other innovative and important nonproliferation programs that need to remain a budget priority in order to ensure that nuclear weapons do not spread to actors hostile to the United States. Key programs: Cooperative Threat Reduction (CTR): The CTR program provides funding to help Russia secure materials that might be used in nuclear or chemical weapons, as well as to dismantle weapons of mass destruction and their associated infrastructure in Russia. Global Threat Reduction Initiative (GTRI): Building on the success of the CTR, the GTRI extends the securing and dismantlement of nuclear weapons and materials to states outside the former Soviet Union. Other states While the vast majority of states have adhered to the stipulations of the Nuclear Nonproliferation Treaty, a few states have either refused to sign the treaty or have pursued nuclear weapons programs while not being members of the treaty. Many view the pursuit of nuclear weapons by these states as a threat to nonproliferation and world peace.
Declared nuclear weapon states not party to the NPT: Indian nuclear weapons: 80–100 active warheads Pakistani nuclear weapons: 90–110 active warheads North Korean nuclear weapons: <10 active warheads Undeclared nuclear weapon states not party to the NPT: Israeli nuclear weapons: 75–200 active warheads Nuclear weapon states not party to the NPT that disarmed and joined the NPT as non-nuclear weapons states: South African nuclear weapons: disarmed between 1989 and 1993 Former Soviet states that disarmed and joined the NPT as non-nuclear weapons states: Belarus Kazakhstan Ukraine Non-nuclear weapon states party to the NPT currently accused of seeking nuclear weapons: Iran Non-nuclear weapon states party to the NPT who acknowledged and eliminated past nuclear weapons programs: Libya Semiotics The precise use of terminology in the context of disarmament may have important implications for political signaling. In the case of North Korea, "denuclearization" has historically been interpreted as different from "disarmament" in that it includes the withdrawal of American nuclear capabilities from the region. More recently, the term has become provocative because of comparisons to the collapse of the Gaddafi regime after disarmament. The Biden Administration has been criticized for reaffirming a strategy of denuclearization with Korea and Japan, as opposed to a "freeze" or "pause" on new nuclear developments. Similarly, the term "irreversible" has been argued to set an impossible standard for states to disarm. Recent developments Eliminating nuclear weapons has long been an aim of the pacifist left. But now many mainstream politicians, academic analysts, and retired military leaders also advocate nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have called upon governments to embrace the vision of a world free of nuclear weapons, and in three Wall Street Journal op-eds proposed an ambitious program of urgent steps to that end.
The four have created the Nuclear Security Project to advance this agenda. Nunn reinforced that agenda during a speech at the Harvard Kennedy School on October 21, 2008, saying, "I’m much more concerned about a terrorist without a return address that cannot be deterred than I am about deliberate war between nuclear powers. You can’t deter a group who is willing to commit suicide. We are in a different era. You have to understand the world has changed." In 2010, the four were featured in a documentary film entitled Nuclear Tipping Point. The film is a visual and historical depiction of the ideas laid out in the Wall Street Journal op-eds and reinforces their commitment to a world without nuclear weapons and the steps that can be taken to reach that goal. Global Zero is an international non-partisan group of 300 world leaders dedicated to achieving nuclear disarmament. The initiative, launched in December 2008, promotes a phased withdrawal and verified destruction of all devices held by official and unofficial members of the nuclear club. The Global Zero campaign works toward building an international consensus and a sustained global movement of leaders and citizens for the elimination of nuclear weapons. Goals include the initiation of United States–Russia bilateral negotiations for reductions to 1,000 total warheads each and commitments from the other key nuclear weapons countries to participate in multilateral negotiations for phased reductions of nuclear arsenals. Global Zero works to expand the diplomatic dialogue with key governments and continues to develop policy proposals on the critical issues related to the elimination of nuclear weapons. The International Conference on Nuclear Disarmament took place in Oslo in February 2008, organized by the Government of Norway, the Nuclear Threat Initiative, and the Hoover Institution.
The Conference was entitled Achieving the Vision of a World Free of Nuclear Weapons and had the purpose of building consensus between nuclear weapon states and non-nuclear weapon states in relation to the Nuclear Non-Proliferation Treaty. The Tehran International Conference on Disarmament and Non-Proliferation took place in Tehran in April 2010. The conference was held shortly after the signing of New START, and resulted in a call to action toward eliminating all nuclear weapons. Representatives from 60 countries were invited to the conference. Non-governmental organizations were also present. Among the prominent figures who have called for the abolition of nuclear weapons are "the philosopher Bertrand Russell, the entertainer Steve Allen, CNN’s Ted Turner, former Senator Claiborne Pell, Notre Dame president Theodore Hesburgh, South African Bishop Desmond Tutu and the Dalai Lama". Others have argued that nuclear weapons have made the world relatively safer, with peace through deterrence and through the stability–instability paradox, including in South Asia. Kenneth Waltz has argued that nuclear weapons have created a nuclear peace, and that further nuclear weapon proliferation might even help avoid the large-scale conventional wars that were so common prior to their invention at the end of World War II. In the July 2012 issue of Foreign Affairs, Waltz took issue with the view of most U.S., European, and Israeli commentators and policymakers that a nuclear-armed Iran would be unacceptable. Instead, Waltz argued that it would probably be the best possible outcome, as it would restore stability to the Middle East by balancing Israel's regional monopoly on nuclear weapons. Professor John Mueller of Ohio State University, the author of Atomic Obsession, has also dismissed the need to interfere with Iran's nuclear program and expressed the view that arms control measures are counterproductive. During a 2010 lecture at the University of Missouri, which was broadcast by C-SPAN, Dr.
Mueller argued that the threat from nuclear weapons, especially nuclear terrorism, has been exaggerated, both in the popular media and by officials. Former Secretary Kissinger says there is a new danger, which cannot be addressed by deterrence: "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. In a world of suicide bombers, that calculation doesn’t operate in any comparable way". George Shultz has said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable". Andrew Bacevich wrote that there is no feasible scenario under which the US could sensibly use nuclear weapons: "For the United States, they are becoming unnecessary, even as a deterrent. Certainly, they are unlikely to dissuade the adversaries most likely to employ such weapons against us -- Islamic extremists intent on acquiring their own nuclear capability. If anything, the opposite is true. By retaining a strategic arsenal in readiness (and by insisting without qualification that the dropping of atomic bombs on two Japanese cities in 1945 was justified), the United States continues tacitly to sustain the view that nuclear weapons play a legitimate role in international politics ..." In The Limits of Safety, Scott Sagan documented numerous incidents in US military history that could have produced a nuclear war by accident. He concluded: "while the military organizations controlling U.S. nuclear forces during the Cold War performed this task with less success than we knew, they performed with more success than we should have reasonably predicted. The problems identified in this book were not the product of incompetent organizations. They reflect the inherent limits of organizational safety. Recognizing that simple truth is the first and most important step toward a safer future."
On 3 January 2022, the permanent members of the United Nations Security Council, China, France, Russia, Britain, and the United States issued a statement on prevention of nuclear war, affirming that "a nuclear war cannot be won and must never be fought." See also Anti-nuclear organizations Baruch Plan Comprehensive Nuclear-Test-Ban Treaty Countdown to Zero International Atomic Energy Agency List of anti-war organizations List of peace activists Megatons to Megawatts Program Nuclear-free zone Nuclear proliferation Nuclear warfare Nuclear weapons and the United States Nuclear weapons convention Nuclear-Weapon-Ban treaty Nuclear-Weapon-Free Zone Prevention of nuclear catastrophe Pacem in terris Seabed Arms Control Treaty Strategic Offensive Reductions Treaty (SORT) Tehran International Conference on Disarmament and Non-Proliferation, 2010 References External links New Video: A World Without Nuclear Weapons Nuclear Files.org—Arms Control and Disarmament Annotated bibliography for nuclear arms control from the Alsos Digital Library for Nuclear Issues The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources. Council for a Livable World Center for Arms Control and Non-Proliferation People v The Bomb: Showdown at the UN, TV documentary report on 2005 NPT Review crisis William Walker, "President-elect
60 cities in the United States to demonstrate against nuclear weapons. It was the largest national women's peace protest of the 20th century. In 1958, Linus Pauling and his wife presented the United Nations with a petition signed by more than 11,000 scientists calling for an end to nuclear-weapon testing. The "Baby Tooth Survey," headed by Dr. Louise Reiss, demonstrated conclusively in 1961 that above-ground nuclear testing posed significant public health risks in the form of radioactive fallout, spread primarily via milk from cows that had ingested contaminated grass. Public pressure and the research results subsequently led to a moratorium on above-ground nuclear weapons testing, followed by the Partial Test Ban Treaty, signed in 1963 by John F. Kennedy and Nikita Khrushchev. On the day that the treaty went into force, the Nobel Prize Committee awarded Pauling the Nobel Peace Prize, describing him as "Linus Carl Pauling, who ever since 1946 has campaigned ceaselessly, not only against nuclear weapons tests, not only against the spread of these armaments, not only against their very use, but against all warfare as a means of solving international conflicts." Pauling started the International League of Humanists in 1974. He was president of the scientific advisory board of the World Union for Protection of Life and also one of the signatories of the Dubrovnik-Philadelphia Statement. In the 1980s, a movement for nuclear disarmament again gained strength in the light of the weapons build-up and the statements of US President Ronald Reagan. Reagan had "a world free of nuclear weapons" as his personal mission, and was largely scorned for this in Europe. Reagan was able to start discussions on nuclear disarmament with the Soviet Union. He changed the name "SALT" (Strategic Arms Limitation Talks) to "START" (Strategic Arms Reduction Talks). On June 3, 1981, William Thomas launched the White House Peace Vigil in Washington, D.C.
He was later joined on the vigil by anti-nuclear activists Concepcion Picciotto and Ellen Benjamin. On June 12, 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the Cold War arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history. International Day of Nuclear Disarmament protests were held on June 20, 1983, at 50 sites across the United States. In 1986, hundreds of people walked from Los Angeles to Washington, D.C. in the Great Peace March for Global Nuclear Disarmament. There were many Nevada Desert Experience protests and peace camps at the Nevada Test Site during the 1980s and 1990s. On May 1, 2005, 40,000 anti-nuclear/anti-war protesters marched past the United Nations in New York, 60 years after the atomic bombings of Hiroshima and Nagasaki. In 2008, 2009, and 2010, there were protests about, and campaigns against, several new nuclear reactor proposals in the United States. There is an annual protest against U.S. nuclear weapons research at Lawrence Livermore National Laboratory in California; in the 2007 protest, 64 people were arrested. There has been a series of protests at the Nevada Test Site; in the April 2007 Nevada Desert Experience protest, 39 people were cited by police. There have been anti-nuclear protests at Naval Base Kitsap for many years, including several in 2008. In 2017, the International Campaign to Abolish Nuclear Weapons was awarded the Nobel Peace Prize "for its work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its ground-breaking efforts to achieve a treaty-based prohibition of such weapons". World Peace Council One of the earliest peace organisations to emerge after the Second World War was the World Peace Council, which was directed by the Communist Party of the Soviet Union through the Soviet Peace Committee.
Its origins lay in the Communist Information Bureau's (Cominform) doctrine, put forward in 1947, that the world was divided between peace-loving progressive forces led by the Soviet Union and warmongering capitalist countries led by the United States. In 1949, Cominform directed that peace "should now become the pivot of the entire activity of the Communist Parties", and most western Communist parties followed this policy. Lawrence Wittner, a historian of the post-war peace movement, argues that the Soviet Union devoted great efforts to the promotion of the WPC in the early post-war years because it feared an American attack and American superiority of arms at a time when the USA possessed the atom bomb but the Soviet Union had not yet developed it. In 1950, the WPC launched its Stockholm Appeal calling for the absolute prohibition of nuclear weapons. The campaign won support, collecting, it is said, 560 million signatures in Europe, most from socialist countries, including 10 million in France (including that of the young Jacques Chirac), and 155 million signatures in the Soviet Union – the entire adult population. Several non-aligned peace groups who had distanced themselves from the WPC advised their supporters not to sign the Appeal. The WPC had uneasy relations with the non-aligned peace movement and has been described as being caught in contradictions as "it sought to become a broad world movement while being instrumentalized increasingly to serve foreign policy in the Soviet Union and nominally socialist countries." From the 1950s until the late 1980s it tried to use non-aligned peace organizations to spread the Soviet point of view. At first there was limited co-operation between such groups and the WPC, but western delegates who tried to criticize the Soviet Union or the WPC's silence about Russian armaments were often shouted down at WPC conferences, and by the early 1960s they had dissociated themselves from the WPC.
Arms reduction treaties After the 1986 Reykjavik Summit between U.S. President Ronald Reagan and the new Soviet General Secretary Mikhail Gorbachev, the United States and the Soviet Union concluded two important nuclear arms reduction treaties: the INF Treaty (1987) and START I (1991). After the end of the Cold War, the United States and the Russian Federation concluded the Strategic Offensive Reductions Treaty (2003) and the New START Treaty (2010). The US withdrew from the INF Treaty in 2019 under President Donald Trump, and launched the United States–Russia Strategic Stability Dialogue (SSD) in 2021 under President Joe Biden. When the extreme danger intrinsic to nuclear war and the possession of nuclear weapons became apparent to all sides during the Cold War, a series of disarmament and nonproliferation treaties were agreed upon between the United States, the Soviet Union, and several other states throughout the world. Many of these treaties involved years of negotiations, and seemed to result in important steps in arms reductions and reducing the risk of nuclear war. Key treaties Partial Test Ban Treaty (PTBT) 1963: Prohibited all testing of nuclear weapons except underground. Nuclear Non-Proliferation Treaty (NPT)—signed 1968, came into force 1970: An international treaty (currently with 189 member states) to limit the spread of nuclear weapons. The treaty has three main pillars: nonproliferation, disarmament, and the right to peacefully use nuclear technology. Interim Agreement on Offensive Arms (SALT I) 1972: The Soviet Union and the United States agreed to a freeze in the number of intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs) that they would deploy. Anti-Ballistic Missile Treaty (ABM) 1972: The United States and Soviet Union could deploy ABM interceptors at two sites, each with up to 100 ground-based launchers for ABM interceptor missiles.
In a 1974 Protocol, the US and Soviet Union agreed to deploy an ABM system at only one site. Strategic Arms Limitation Treaty (SALT II) 1979: Replacing SALT I, SALT II limited both the Soviet Union and the United States to an equal number of ICBM launchers, SLBM launchers, and heavy bombers. It also placed limits on multiple independently targetable reentry vehicles (MIRVs). Intermediate-Range Nuclear Forces Treaty (INF) 1987: Banned US and Soviet Union land-based ballistic missiles,
be written (using the equals sign $=$) despite being false.

Bases and subbases Given a subbase $\mathcal{B}$ for the topology on $X$ (where note that every base for a topology is also a subbase) and given a point $x \in X,$ a net $x_\bullet$ in $X$ converges to $x$ if and only if it is eventually in every neighborhood $U \in \mathcal{B}$ of $x.$ This characterization extends to neighborhood subbases (and so also neighborhood bases) of the given point $x.$

Convergence in metric spaces Suppose $(M, d)$ is a metric space (or a pseudometric space) and $M$ is endowed with the metric topology. If $m \in M$ is a point and $m_\bullet = \left(m_i\right)_{i \in I}$ is a net, then $m_\bullet \to m$ in $M$ if and only if $d\left(m, m_\bullet\right) \to 0$ in $\mathbb{R},$ where $d\left(m, m_\bullet\right) := \left(d\left(m, m_i\right)\right)_{i \in I}$ is a net of real numbers. In plain English, this characterization says that a net converges to a point in a metric space if and only if the distance between the net and the point converges to zero. If $(X, \|\cdot\|)$ is a normed space (or a seminormed space) then $x_\bullet \to x$ in $X$ if and only if $\left\|x - x_\bullet\right\| \to 0$ in $\mathbb{R},$ where $\left\|x - x_\bullet\right\| := \left(\left\|x - x_i\right\|\right)_{i \in I}.$

Convergence in topological subspaces If the set $S := \{x\} \cup \left\{x_a : a \in A\right\}$ is endowed with the subspace topology induced on it by $X,$ then $x_\bullet \to x$ in $X$ if and only if $x_\bullet \to x$ in $S.$ In this way, the question of whether or not the net $x_\bullet$ converges to the given point $x$ depends solely on this topological subspace $S$ consisting of $x$ and the image of (that is, the points of) the net $x_\bullet.$

Limits in a Cartesian product A net in the product space has a limit if and only if each projection has a limit.
Symbolically, suppose that the Cartesian product $X := \prod_{i \in I} X_i$ of the spaces $X_i$ is endowed with the product topology and that for every index $i \in I,$ the canonical projection to $X_i$ is denoted by $\pi_i : X \to X_i$ and defined by $\pi_i\left(\left(x_j\right)_{j \in I}\right) := x_i.$ Let $f = \left(f_a\right)_{a \in A}$ be a net in $X$ directed by $A$ and for every index $i \in I,$ let $\pi_i(f) := \left(\pi_i\left(f_a\right)\right)_{a \in A}$ denote the result of "plugging $f$ into $\pi_i$", which results in the net $\pi_i(f) : A \to X_i.$ It is sometimes useful to think of this definition in terms of function composition: the net $\pi_i(f)$ is equal to the composition of the net $f$ with the projection $\pi_i$; that is, $\pi_i(f) := \pi_i \circ f.$ If given $L \in X,$ then $f \to L$ in $X$ if and only if $\pi_i(f) \to \pi_i(L)$ in $X_i$ for every $i \in I.$

Tychonoff's theorem and relation to the axiom of choice If no $L \in X$ is given but for every $i \in I$ there exists some $L_i \in X_i$ such that $\pi_i(f) \to L_i$ in $X_i,$ then the tuple defined by $L := \left(L_i\right)_{i \in I}$ will be a limit of $f$ in $X.$ However, the axiom of choice might need to be assumed in order to conclude that this tuple $L$ exists; the axiom of choice is not needed in some situations, such as when $I$ is finite or when every $L_i \in X_i$ is the unique limit of the net $\pi_i(f)$ (because then there is nothing to choose between), which happens for example when every $X_i$ is a Hausdorff space. If $I$ is infinite and $X = \prod_{j \in I} X_j$ is not empty, then the axiom of choice would (in general) still be needed to conclude that the projections $\pi_i : X \to X_i$ are surjective maps. The axiom of choice is equivalent to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact. But if every compact space is also Hausdorff, then the so-called "Tychonoff's theorem for compact Hausdorff spaces" can be used instead, which is equivalent to the ultrafilter lemma and so strictly weaker than the axiom of choice. Nets can be used to give short proofs of both versions of Tychonoff's theorem by using the characterization of net convergence given above together with the fact that a space is compact if and only if every net has a convergent subnet.
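For the special case of a finite product of metric spaces with a sequence as the net, the projection criterion can be checked numerically. A sketch (illustrative only, with arbitrary example coordinates and a crude tail-based convergence test):

```python
def converges_to(seq, limit, eps=1e-3):
    # crude numeric check: the tail of a truncated sequence sits within eps
    return all(abs(x - limit) < eps for x in seq[-10:])

# A net (here: a sequence) in the product X1 x X2 = R x R.
f = [(1 + 1 / n, 2 - 1 / n**2) for n in range(1, 10_000)]

proj1 = [p[0] for p in f]   # the net pi_1(f) = pi_1 composed with f
proj2 = [p[1] for p in f]   # the net pi_2(f)

# f -> (1, 2) in the product iff pi_1(f) -> 1 and pi_2(f) -> 2.
assert converges_to(proj1, 1) and converges_to(proj2, 2)
```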
Cluster points of a net A net $x_\bullet = \left(x_a\right)_{a \in A}$ in $X$ is said to be frequently in or cofinally in a given subset $S \subseteq X$ if for every $a \in A$ there exists some $b \in A$ such that $b \geq a$ and $x_b \in S.$ A point $x \in X$ is said to be an accumulation point or cluster point of a net if for every neighborhood $U$ of $x,$ the net is frequently in $U.$ A point $x \in X$ is a cluster point of a given net if and only if it has a subnet that converges to $x.$ If $x_\bullet = \left(x_a\right)_{a \in A}$ is a net in $X$ then the set of all cluster points of $x_\bullet$ in $X$ is equal to $\bigcap_{a \in A} \operatorname{cl}_X\left(x_{\geq a}\right)$ where $x_{\geq a} := \left\{x_b : b \geq a, b \in A\right\}$ for each $a \in A.$ If $x$ is a cluster point of some subnet of $x_\bullet$ then $x$ is also a cluster point of $x_\bullet.$

Ultranets A net $x_\bullet$ in set $X$ is called a universal net or an ultranet if for every subset $S \subseteq X,$ $x_\bullet$ is eventually in $S$ or $x_\bullet$ is eventually in the complement $X \setminus S.$ Every constant net is an ultranet. Every subnet of an ultranet is an ultranet and every net has some subnet that is an ultranet. If $x_\bullet$ is an ultranet in $X$ and $f : X \to Y$ is a function then $f \circ x_\bullet$ is an ultranet in $Y.$ Ultranets are closely related to ultrafilters. Given $x \in X,$ an ultranet clusters at $x$ if and only if it converges to $x.$

Examples of limits of nets Limit of a sequence and limit of a function: see below. Limits of nets of Riemann sums, in the definition of the Riemann integral. In this example, the directed set is the set of partitions of the interval of integration, partially ordered by inclusion.

Examples Sequence in a topological space A sequence $a_1, a_2, \ldots$ in a topological space $X$ can be considered a net in $X$ defined on $\mathbb{N}.$ The net is eventually in a subset $S$ of $X$ if there exists an $N \in \mathbb{N}$ such that for every integer $n \geq N,$ the point $a_n$ is in $S.$ So $\lim a_\bullet \to L$ if and only if for every neighborhood $V$ of $L,$ the net is eventually in $V.$ The net is frequently in a subset $S$ of $X$ if and only if for every $N \in \mathbb{N}$ there exists some integer $n \geq N$ such that $a_n \in S,$ that is, if and only if infinitely many elements of the sequence are in $S.$ Thus a point $y \in X$ is a cluster point of the net if and only if every neighborhood $V$ of $y$ contains infinitely many elements of the sequence.
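The Riemann-sum example can be made concrete. The sketch below evaluates left-endpoint Riemann sums of $f(x) = x^2$ along one increasing chain of refined partitions of $[0, 1]$; the sums along this chain approach the integral $\int_0^1 x^2\,dx = 1/3$ (an illustration of the limiting behaviour, not a proof about the full directed set of partitions):

```python
def riemann_sum(f, partition):
    # left-endpoint Riemann sum over a partition given as sorted points
    return sum(f(partition[i]) * (partition[i + 1] - partition[i])
               for i in range(len(partition) - 1))

f = lambda x: x * x

# Each dyadic partition refines the previous one, giving an increasing
# chain in the directed set of partitions ordered by refinement.
sums = []
for k in range(1, 15):
    n = 2 ** k
    partition = [i / n for i in range(n + 1)]
    sums.append(riemann_sum(f, partition))

# The sums approach the integral 1/3 as the partitions refine.
assert abs(sums[-1] - 1 / 3) < 1e-3
```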
Function from a metric space to a topological space Consider a function from a metric space $M$ to a topological space $X,$ and a point $c \in M.$ We direct the set $M \setminus \{c\}$ reversely according to distance from $c,$ that is, the relation is "has at least the same distance to $c$ as", so that "large enough" with respect to the relation means "close enough to $c$". The function $f : M \setminus \{c\} \to X$ is a net in $X$ defined on $M \setminus \{c\}.$ The net $f$ is eventually in a subset $S$ of $X$ if there exists some $y \in M \setminus \{c\}$ such that for every $x \in M \setminus \{c\}$ with $d(x, c) \leq d(y, c),$ the point $f(x)$ is in $S.$ So $\lim_{x \to c} f(x) \to L$ if and only if for every neighborhood $V$ of $L,$ $f$ is eventually in $V.$ The net $f$ is frequently in a subset $S$ of $X$ if and only if for every $y \in M \setminus \{c\}$ there exists some $x \in M \setminus \{c\}$ with $d(x, c) \leq d(y, c)$ such that $f(x)$ is in $S.$ A point $L \in X$ is a cluster point of the net $f$ if and only if for every neighborhood $V$ of $L,$ the net is frequently in $V.$

Function from a well-ordered set to a topological space Consider a well-ordered set $[0, c]$ with limit point $t$ and a function $f$ from $[0, t)$ to a topological space $X.$ This function is a net on $[0, t).$ It is eventually in a subset $V$ of $X$ if there exists an $r \in [0, t)$ such that for every $s \in [r, t)$ the point $f(s)$ is in $V.$ So $\lim_{x \to t} f(x) \to L$ if and only if for every neighborhood $V$ of $L,$ $f$ is eventually in $V.$ The net $f$ is frequently in a subset $V$ of $X$ if and only if for every $r \in [0, t)$ there exists some $s \in [r, t)$ such that $f(s) \in V.$ A point $y \in X$ is a cluster point of the net $f$ if and only if for every neighborhood $V$ of $y,$ the net is frequently in $V.$ The first example is a special case of this with $c = \omega.$ See also ordinal-indexed sequence.

Properties Virtually all concepts of topology can be rephrased in the language of nets and limits. This may be useful to guide the intuition, since the notion of limit of a net is very similar to that of limit of a sequence. The following set of theorems and lemmas helps cement that similarity: Characterizations of topological properties Open sets and characterizations of topologies A subset $S \subseteq X$ is open if and only if no net in $X \setminus S$ converges to a point of $S.$ Also, a subset $S \subseteq X$ is open if and only if every net converging to an element of $S$ is eventually contained in $S.$ These characterizations of "open subset" allow nets to characterize topologies.
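The reverse-distance direction can be tested numerically for the familiar limit $\lim_{x \to 0} \sin(x)/x = 1$: once an index $y$ is close enough to $0,$ every point at least as close to $0$ (i.e. every "larger" index) lands in a prescribed neighborhood of $1.$ A sketch (the threshold and sample points are arbitrary choices):

```python
import math

f = lambda x: math.sin(x) / x   # a net on R \ {0}, directed toward c = 0

eps = 1e-3   # the neighborhood (1 - eps, 1 + eps) of the limit L = 1
y = 0.05     # a threshold index: any x with |x| <= |y| is "beyond" y
             # in the reverse-distance order

# f is eventually in the neighborhood: every sampled index beyond y lands inside.
samples = [y * (k / 1000) for k in range(1, 1001)]  # points with 0 < x <= y
assert all(abs(f(x) - 1) < eps for x in samples)
```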
Closed sets Topologies can also be characterized by closed subsets. A subset $S \subseteq X$ is closed in $X$ if and only if every limit point of every convergent net valued in $S$ necessarily belongs to $S.$ Explicitly, a subset $S \subseteq X$ is closed if and only if whenever $x \in X$ and $s_\bullet = \left(s_a\right)_{a \in A}$ is a net valued in $S$ (meaning that $s_a \in S$ for all $a \in A$) such that $\lim s_\bullet \to x$ in $X,$ then necessarily $x \in S.$ More generally, if $S \subseteq X$ is any subset then a point $x \in X$ is in the closure of $S$ if and only if there exists a net $\left(s_a\right)_{a \in A}$ in $S$ with limit $x$ and such that $s_a \in S$ for every index $a \in A.$

Continuity A function $f : X \to Y$ between topological spaces is continuous at the point $x$ if and only if for every net $x_\bullet$ in the domain, $\lim x_\bullet \to x$ in $X$ implies $\lim f\left(x_\bullet\right) \to f(x)$ in $Y.$ This theorem is in general not true if "net" is replaced by "sequence"; it is necessary to allow for directed sets other than just the natural numbers if $X$ is not a first-countable space (or not a sequential space).

Proof. One direction: Let $f$ be continuous at the point $x,$ and let $x_\bullet = \left(x_a\right)_{a \in A}$ be a net such that $\lim x_\bullet \to x.$ Then for every open neighborhood $U$ of $f(x),$ its preimage under $f,$ $f^{-1}(U),$ is a neighborhood of $x$ (by the continuity of $f$ at $x$). Thus the interior of $f^{-1}(U),$ which is denoted by $\operatorname{int} f^{-1}(U),$ is an open neighborhood of $x,$ and consequently $x_\bullet$ is eventually in $\operatorname{int} f^{-1}(U).$ Therefore $\left(f\left(x_a\right)\right)_{a \in A}$ is eventually in $f\left(\operatorname{int} f^{-1}(U)\right)$ and thus also eventually in $f\left(f^{-1}(U)\right),$ which is a subset of $U.$ Thus $\lim f\left(x_\bullet\right) \to f(x),$ and this direction is proven. The other direction: Let $x$ be a point such that for every net $x_\bullet$ with $\lim x_\bullet \to x,$ $\lim f\left(x_\bullet\right) \to f(x).$ Now suppose that $f$ is not continuous at $x.$ Then there is a neighborhood $U$ of $f(x)$ whose preimage under $f,$ $V := f^{-1}(U),$ is not a neighborhood of $x.$ Because $f(x) \in U,$ necessarily $x \in V.$ Now the set of open neighborhoods of $x$ with the containment preorder is a directed set (since the intersection of every two such neighborhoods is an open neighborhood of $x$ as well). We construct a net $x_\bullet = \left(x_a\right)_{a \in A}$ such that for every open neighborhood of $x$ whose index is $a,$ the point $x_a$ is a point in this neighborhood that is not in $V$; that there is always such a point follows from the fact that no open neighborhood of $x$ is included in $V$ (because by assumption, $V$ is not a neighborhood of $x$).
It follows that $f\left(x_a\right)$ is not in $U.$ Now, for every open neighborhood $W$ of $x,$ this neighborhood is a member of the directed set, whose index we denote $a_0.$ For every $b \geq a_0,$ the member of the directed set whose index is $b$ is contained within $W$; therefore $x_b \in W.$ Thus $\lim x_\bullet \to x,$ and by our assumption $\lim f\left(x_\bullet\right) \to f(x).$ But $\operatorname{int} U$ is an open neighborhood of $f(x),$ and thus $f\left(x_a\right)$ is eventually in $\operatorname{int} U$ and therefore also in $U,$ in contradiction to $f\left(x_a\right)$ not being in $U$ for every $a.$ This is a contradiction, so $f$ must be continuous at $x.$ This completes the proof.

Compactness A space $X$ is compact if and only if every net $x_\bullet = \left(x_a\right)_{a \in A}$ in $X$ has a subnet with a limit in $X.$ This can be seen as a generalization of the Bolzano–Weierstrass theorem and the Heine–Borel theorem.

Proof. First, suppose that $X$ is compact. We will need the following observation (see finite intersection property). Let $I$ be any non-empty set and $\left\{C_i\right\}_{i \in I}$ be a collection of closed subsets of $X$ such that $\bigcap_{i \in J} C_i \neq \varnothing$ for each finite $J \subseteq I.$ Then $\bigcap_{i \in I} C_i \neq \varnothing$ as well. Otherwise, $\left\{C_i^c\right\}_{i \in I}$ would be an open cover for $X$ with no finite subcover, contrary to the compactness of $X.$ Let $x_\bullet = \left(x_a\right)_{a \in A}$ be a net in $X$ directed by $A.$ For every $a \in A$ define $E_a := \left\{x_b : b \geq a\right\}.$ The collection $\left\{\operatorname{cl}\left(E_a\right) : a \in A\right\}$ has the property that every finite subcollection has non-empty intersection. Thus, by the remark above, we have that $\bigcap_{a \in A} \operatorname{cl}\left(E_a\right) \neq \varnothing,$ and this is precisely the set of cluster points of $x_\bullet.$ By the proof given in the next section, it is equal to the set of limits of convergent subnets of $x_\bullet.$ Thus $x_\bullet$ has a convergent subnet. Conversely, suppose that every net in $X$ has a convergent subnet. For the sake of contradiction, let $\left\{U_i : i \in I\right\}$ be an open cover of $X$ with no finite subcover. Consider $D := \{J \subseteq I : J \text{ is finite}\}.$ Observe that $D$ is a directed set under inclusion and for each $C \in D,$ there exists an $x_C \in X$ such that $x_C \notin U_a$ for all $a \in C.$ Consider the net $\left(x_C\right)_{C \in D}.$ This net cannot have a convergent subnet, because for each $x \in X$ there exists $c \in I$ such that $U_c$ is a neighbourhood of $x$; however, for all $B \supseteq \{c\},$ we have that $x_B \notin U_c.$ This is a contradiction and completes the proof.

Cluster and limit points The set of cluster points of a net is equal to the set of limits of its convergent subnets.
Proof. Let $x_\bullet = \left(x_a\right)_{a \in A}$ be a net in a topological space $X$ (where as usual $A$ is automatically assumed to be a directed set) and also let $y \in X.$ If $y$ is a limit of a subnet of $x_\bullet$ then $y$ is a cluster point of $x_\bullet.$ Conversely, assume that $y$ is a cluster point of $x_\bullet.$ Let $B$ be the set of pairs $(U, a)$ where $U$ is an open neighborhood of $y$ in $X$ and $a \in A$ is such that $x_a \in U.$ The map $h : B \to A$ mapping $(U, a)$ to $a$ is then cofinal. Moreover, giving $B$ the product order (the neighborhoods of $y$ are ordered by
in nets being encountered much less often than filters outside of the fields of analysis and topology. A subnet is not merely the restriction of a net $f$ to a directed subset of its domain $A$; see the linked page for a definition.

Examples of nets Every non-empty totally ordered set is directed. Therefore, every function on such a set is a net. In particular, the natural numbers $\mathbb{N}$ with the usual order form such a set, and a sequence is a function on the natural numbers, so every sequence is a net. Another important example is as follows. Given a point $x$ in a topological space, let $N_x$ denote the set of all neighbourhoods containing $x.$ Then $N_x$ is a directed set, where the direction is given by reverse inclusion, so that $S \geq T$ if and only if $S$ is contained in $T.$ For $S \in N_x,$ let $x_S$ be a point in $S.$ Then $\left(x_S\right)$ is a net. As $S$ increases with respect to $\geq,$ the points $x_S$ in the net are constrained to lie in decreasing neighbourhoods of $x,$ so intuitively speaking, we are led to the idea that $x_S$ must tend towards $x$ in some sense. We can make this limiting concept precise.

A subnet of a sequence is not necessarily a sequence. For an example, let $x_i = 0$ for every $i \in \mathbb{N},$ so that $x_\bullet = (0)_{i \in \mathbb{N}}$ is the constant zero sequence. Let $I = \{r \in \mathbb{R} : r > 0\}$ be directed by the usual order $\leq$ and let $s_r = 0$ for each $r \in I.$ Define $\varphi : I \to \mathbb{N}$ by letting $\varphi(r) = \lceil r \rceil$ be the ceiling of $r.$ The map $\varphi : I \to \mathbb{N}$ is an order morphism whose image is cofinal in its codomain, and $\left(x_\bullet \circ \varphi\right)(r) = x_{\varphi(r)} = 0 = s_r$ holds for every $r \in I.$ This shows that $\left(s_r\right)_{r \in I} = x_\bullet \circ \varphi$ is a subnet of the sequence $x_\bullet$ (where this subnet is not a subsequence of $x_\bullet$ because it is not even a sequence, since its domain is an uncountable set).

Limits of nets If $f$ is a net from a directed set $A$ into $X,$ and if $S$ is a subset of $X,$ then $f$ is said to be eventually in $S$ (or residually in $S$) if there exists some $a \in A$ such that for every $b \in A$ with $b \geq a,$ the point $f(b) \in S.$ A point $x \in X$ is called a limit point or limit of the net $f$ in $X$ if (and only if) for every open neighborhood $U$ of $x,$ the net $f$ is eventually in $U,$ in which case this net is then also said to converge to $x$ and to have $x$ as a limit.
Intuitively, convergence of this net means that the values x_a come and stay as close as we want to x for large enough a. The example net given above on the neighborhood system of a point x does indeed converge to x according to this definition.

Notation

If the net x_• converges in X to a point x ∈ X then this fact may be expressed by writing any of the following: x_• → x in X, or x_a → x in X, where if the topological space X is clear from context then the words "in X" may be omitted. If x_• → x in X and if this limit in X is unique (uniqueness in X means that if y ∈ X is such that x_• → y, then necessarily x = y) then this fact may be indicated by writing lim x_• = x or lim x_a = x or lim_{a∈A} x_a = x, where an equals sign is used in place of the arrow →. In a Hausdorff space, every net has at most one limit, so the limit of a convergent net in a Hausdorff space is always unique. Some authors instead use the notation "lim x_• = x" to mean x_• → x without also requiring that the limit be unique; however, if this notation is defined in this way then the equals sign = is no longer guaranteed to denote a transitive relationship and so no longer denotes equality. Specifically, without the uniqueness requirement, if x ≠ y are distinct and if each is also a limit of x_• in X then lim x_• = x and lim x_• = y could be written (using the equals sign =) despite x = y being false.

Bases and subbases

Given a subbase B for the topology on X (where note that every base for a topology is also a subbase) and given a point x ∈ X, a net x_• in X converges to x if and only if it is eventually in every neighborhood U ∈ B of x. This characterization extends to neighborhood subbases (and so also neighborhood bases) of the given point x.

Convergence in metric spaces

Suppose (M, d) is a metric space (or a pseudometric space) and M is endowed with the metric topology. If m ∈ M is a point and m_• = (m_i)_{i∈I} is a net, then m_• → m in M if and only if d(m, m_•) → 0 in ℝ, where d(m, m_•) := (d(m, m_i))_{i∈I} is a net of real numbers. In plain English, this characterization says that a net converges to a point in a metric space if and only if the distance between the net and the point converges to zero.
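The metric-space characterization can be checked numerically for a concrete sequence. The sketch below is a heuristic illustration only, not a proof: the helper names `dist` and `eventually_within`, the sample sequence, and the finite horizon are all illustrative assumptions, since "eventually" quantifies over an infinite tail that a program can only sample.

```python
import math

def dist(p, q):
    """Euclidean metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eventually_within(seq, point, eps, horizon=5000):
    """Finite-horizon check that the net is *eventually* in the open
    eps-ball around `point`: some index n0 exists past which every
    sampled term stays inside the ball."""
    n0 = next((n for n in range(1, horizon) if dist(seq(n), point) < eps), None)
    if n0 is None:
        return False
    return all(dist(seq(m), point) < eps for m in range(n0, horizon))

# x_n = (1/n, 1/n) converges to (0, 0) because d(x_n, (0,0)) = sqrt(2)/n -> 0
x = lambda n: (1.0 / n, 1.0 / n)
print(all(eventually_within(x, (0.0, 0.0), eps) for eps in (1e-1, 1e-2, 1e-3)))  # True
```

The same helper correctly rejects a sequence that stays at distance about 1 from the candidate point, mirroring the "eventually in every neighborhood" definition.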
If (X, ‖·‖) is a normed space (or a seminormed space) then x_• → x in X if and only if ‖x_• − x‖ → 0 in ℝ, where ‖x_• − x‖ := (‖x_i − x‖)_{i∈I}.

Convergence in topological subspaces

If the set S := {x} ∪ {x_a : a ∈ A} is endowed with the subspace topology induced on it by X, then x_• → x in X if and only if x_• → x in S. In this way, the question of whether or not the net x_• converges to the given point x depends solely on this topological subspace S consisting of x and the image of (that is, the points of) the net x_•.

Limits in a Cartesian product

A net in the product space has a limit if and only if each projection of the net has a limit. Symbolically, suppose that the Cartesian product X := ∏_{i∈I} X_i of the spaces (X_i)_{i∈I} is endowed with the product topology and that for every index l ∈ I, the canonical projection to X_l is denoted by π_l : X → X_l and defined by (x_i)_{i∈I} ↦ x_l. Let f = (f_a)_{a∈A} be a net in X directed by A and for every index i ∈ I, let π_i(f) denote the result of "plugging i into f", which results in the net π_i(f) : A → X_i. It is sometimes useful to think of this definition in terms of function composition: the net π_i(f) is equal to the composition of the net f with the projection π_i; that is, π_i(f) := π_i ∘ f. If given L ∈ X, then f → L in X if and only if for every i ∈ I, π_i(f) → π_i(L) in X_i.

Tychonoff's theorem and relation to the axiom of choice

If no L ∈ X is given but for every i ∈ I there exists some L_i ∈ X_i such that π_i(f) → L_i in X_i, then the tuple defined by L := (L_i)_{i∈I} will be a limit of f in X. However, the axiom of choice might need to be assumed in order to conclude that this tuple L exists; the axiom of choice is not needed in some situations, such as when I is finite or when every L_i is the unique limit of the net π_i(f) (because then there is nothing to choose between), which happens, for example, when every X_i is a Hausdorff space. If I is infinite and X is not empty, then the axiom of choice would (in general) still be needed to conclude that the projections π_i : X → X_i are surjective maps. The axiom of choice is equivalent to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact.
But if every compact space is also Hausdorff, then the so-called "Tychonoff's theorem for compact Hausdorff spaces" can be used instead, which is equivalent to the ultrafilter lemma and so strictly weaker than the axiom of choice. Nets can be used to give short proofs of both versions of Tychonoff's theorem by using the characterization of net convergence given above together with the fact that a space is compact if and only if every net has a convergent subnet.

Cluster points of a net

A net x_• = (x_a)_{a∈A} in X is said to be frequently or cofinally in a given subset S of X if for every a ∈ A there exists some b ∈ A such that b ≥ a and x_b ∈ S. A point x ∈ X is said to be an accumulation point or cluster point of a net if for every neighborhood U of x, the net is frequently in U. A point x is a cluster point of a given net if and only if it has a subnet that converges to x. If x_• is a net in X then the set of all cluster points of x_• in X is equal to ⋂_{a∈A} cl(x_{≥a}), where x_{≥a} := {x_b : b ≥ a} for each a ∈ A. If x is a cluster point of some subnet of x_•, then x is also a cluster point of x_•.
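The "frequently in" definition can also be illustrated numerically. In this minimal sketch (the helper names, the sampling of the "for every a" quantifier, the choice of ε values, and the finite horizon are all illustrative assumptions, so it is a heuristic rather than a proof), the real sequence a_n = (−1)ⁿ(1 + 1/n) has no limit but is frequently in every neighborhood of both 1 and −1, so it clusters at exactly those two points:

```python
def frequently_within(seq, point, eps, horizon=10_000):
    """Finite-horizon check that the net is *frequently* in the eps-ball
    around `point`: past every (sampled) index there is a later term
    inside the ball."""
    return all(
        any(abs(seq(m) - point) < eps for m in range(n, horizon))
        for n in range(1, horizon, 500)   # sample the "for every a" quantifier
    )

def looks_like_cluster_point(seq, point):
    # cluster point <=> frequently in every neighborhood (heuristic: a few eps)
    return all(frequently_within(seq, point, eps) for eps in (1e-1, 1e-2, 1e-3))

a = lambda n: (-1) ** n * (1 + 1 / n)   # oscillates; no limit, two cluster points
print(looks_like_cluster_point(a, 1.0),
      looks_like_cluster_point(a, -1.0),
      looks_like_cluster_point(a, 0.0))  # True True False
```

The even-indexed terms form a subnet converging to 1 and the odd-indexed terms one converging to −1, matching the characterization of cluster points via convergent subnets.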
new climate model simulations show that the effects would last for more than a decade. 2007 study on global nuclear war A study published in the Journal of Geophysical Research in July 2007, titled "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences", used current climate models to look at the consequences of a global nuclear war involving most or all of the world's current nuclear arsenals (which the authors judged to be about one third the size of the world's arsenals twenty years earlier). The authors used a global circulation model, ModelE from the NASA Goddard Institute for Space Studies, which they noted "has been tested extensively in global warming experiments and to examine the effects of volcanic eruptions on climate." The model was used to investigate the effects of a war involving the entire current global nuclear arsenal, projected to release about 150 Tg of smoke into the atmosphere, as well as a war involving about one third of the current nuclear arsenal, projected to release about 50 Tg of smoke. In the 150 Tg case they found that: In addition, they found that this cooling caused a weakening of the global hydrological cycle, reducing global precipitation by about 45%. As for the 50 Tg case involving one third of current nuclear arsenals, they said that the simulation "produced climate responses very similar to those for the 150 Tg case, but with about half the amplitude," but that "the time scale of response is about the same." They did not discuss the implications for agriculture in depth, but noted that a 1986 study which assumed no food production for a year projected that "most of the people on the planet would run out of food and starve to death by then" and commented that their own results show that, "This period of no food production needs to be extended by many years, making the impacts of nuclear winter even worse than previously thought." 2014 In 2014, Michael J.
Mills (at the US National Center for Atmospheric Research, NCAR), et al., published "Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict" in the journal Earth's Future. The authors used computational models developed by NCAR to simulate the climatic effects of a soot cloud that they suggest would be a result of a regional nuclear war in which 100 "small" (15 kt) weapons are detonated over cities. Due to the interactions of the soot cloud with the atmosphere, the model produced the following outputs: global ozone losses of 20–50% over populated areas, levels unprecedented in human history, would accompany the coldest average surface temperatures in the last 1000 years. We calculate summer enhancements in UV indices of 30–80% over Mid-Latitudes, suggesting widespread damage to human health, agriculture, and terrestrial and aquatic ecosystems. Killing frosts would reduce growing seasons by 10–40 days per year for 5 years. Surface temperatures would be reduced for more than 25 years, due to thermal inertia and albedo effects in the ocean and expanded sea ice. The combined cooling and enhanced UV would put significant pressures on global food supplies and could trigger a global nuclear famine. 2018 Researchers at Los Alamos National Laboratory published the results of a multi-scale study of the climate impact of a regional nuclear exchange, the same scenario considered by Robock et al. and by Toon et al. in 2007. Unlike previous studies, this study simulated the processes whereby black carbon would be lofted into the atmosphere and found that very little would be lofted into the stratosphere and, as a result, the long-term climate impacts were much lower than those studies had concluded. In particular, "none of the simulations produced a nuclear winter effect," and "the probability of significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely."
Research published in the peer-reviewed journal Safety suggested that no nation should possess more than 100 nuclear warheads because of the blowback effect on the aggressor nation's own population because of "nuclear autumn". 2019 2019 saw the publication of two studies on nuclear winter that build on previous modeling and describe new scenarios of nuclear winter from smaller exchanges of nuclear weapons than have been previously simulated. As in the 2007 study by Robock et al., a 2019 study by Coupe et al. models a scenario in which 150 Tg of black carbon is released into the atmosphere following an exchange of nuclear weapons between the United States and Russia in which both countries use all of the nuclear weapons that treaties permit them to have. This amount of black carbon far exceeds that which has been emitted into the atmosphere by all volcanic eruptions in the past 1,200 years, but is less than that of the asteroid impact which caused a mass extinction event 66 million years ago. Coupe et al. used the "whole atmosphere community climate model version 4" (WACCM4), which has a higher resolution and is more effective at simulating aerosols and stratospheric chemistry than the ModelE simulation used by Robock et al. The WACCM4 model simulates that black carbon molecules increase to ten times their normal size when they reach the stratosphere. ModelE did not account for this effect. This difference in black carbon particle size results in a greater optical depth in the WACCM4 model across the world for the first two years after the initial injection due to greater absorption of sunlight in the stratosphere. This will have the effect of increasing stratospheric temperatures by 100 K and result in ozone depletion that is slightly greater than ModelE predicted.
Another consequence of the larger particle size is an accelerated rate at which black carbon molecules fall out of the atmosphere; ten years after the injection of black carbon into the atmosphere, WACCM4 predicts 2 Tg will remain, while ModelE predicted 19 Tg. The 2019 model and the 2007 model both predict significant temperature decreases across the globe; however, the increased resolution and particle simulation in 2019 predict a greater temperature anomaly in the first six years after injection but a faster return to normal temperatures. From a few months after the injection until the sixth year of the anomaly, WACCM4 predicts cooler global temperatures than ModelE, with temperatures more than 20 K below normal, leading to freezing temperatures during the summer months over much of the northern hemisphere and a 90% reduction in agricultural growing seasons in the midlatitudes, including the midwestern United States. WACCM4 simulations also predict a 58% reduction in global annual precipitation from normal levels in years three and four after injection, a 10% greater reduction than predicted in ModelE. Toon et al. simulated a nuclear scenario in 2025 where India and Pakistan engage in a nuclear exchange in which 100 urban areas in Pakistan and 150 urban areas in India are attacked with nuclear weapons ranging from 15 kt to 100 kt, and examined the effects of black carbon released into the atmosphere from airburst-only detonations. The researchers modeled the atmospheric effects if all weapons were 15 kt, 50 kt, and 100 kt, providing a range into which a nuclear exchange would likely fall given the recent nuclear tests performed by both nations. The ranges provided are large because neither India nor Pakistan is obligated to provide information on their nuclear arsenals, so their extent remains largely unknown. Toon et al.
assume that either a firestorm or conflagration will occur after each detonation of the weapons, and that the amount of black carbon inserted into the atmosphere from the two outcomes will be equivalent and of a profound extent; in Hiroshima in 1945, it is estimated that the firestorm released 1,000 times more energy than was released during the nuclear explosion. Such a large area being burned would release large amounts of black carbon into the atmosphere. The amount released ranges from 16.1 Tg if all weapons were 15 kt or less to 36.6 Tg if all were 100 kt weapons. For the 15 kt and 100 kt range of weapons, the researchers modeled global precipitation reductions of 15% to 30%, temperature reductions between 4 K and 8 K, and ocean temperature decreases of 1 K to 3 K. If all weapons used were 50 kt or more, Hadley cell circulation would be disrupted, causing a 50% decrease in precipitation in the American midwest. Net primary productivity (NPP) for oceans decreases from 10% to 20% for the 15 kt and 100 kt scenarios, respectively, while land NPP decreases between 15% and 30%; particularly affected are midlatitude agricultural regions in the United States and Europe, experiencing 25–50% reductions in NPP. As predicted by other literature, once the black carbon is removed from the atmosphere after ten years, temperatures and NPP will return to normal. 2021 Coupe et al. report the simulation of an El Niño-like effect lasting several years after six nuclear scenarios ranging from 5 to 150 Tg of soot under the CESM-WACCM4 model. They term the change a "Nuclear Niño" and describe various changes in the ocean currents. Criticism and debate Criticism of the nuclear winter concept has centered, and continues to center, on four major, largely independent questions: firstly, would cities readily firestorm, and if so, how much soot would be generated?
Secondly, atmospheric longevity: would the quantities of soot assumed in the models remain in the atmosphere for as long as projected, or would far more soot precipitate as black rain much sooner? Third, timing of events: how reasonable is it for the modeling of firestorms or war to commence in late spring or summer (this is done in almost all US-Soviet nuclear winter papers, thereby giving rise to the largest possible degree of modeled cooling)? Lastly, the issue of darkness or opacity: how much light-blocking effect would the assumed quality of the soot reaching the atmosphere actually have? While the highly popularized initial 1983 TTAPS 1-dimensional model forecasts were widely reported and criticized in the media, in part because every later model predicts far less than its "apocalyptic" level of cooling, most models continue to suggest that some deleterious global cooling would still result, under the assumption that a large number of fires occurred in the spring or summer. Starley L. Thompson's less primitive mid-1980s 3-dimensional model, which notably contained the very same general assumptions, led him to coin the term "nuclear autumn" to more accurately describe the climate results of the soot in this model, in an on-camera interview in which he dismisses the earlier "apocalyptic" models. A major criticism of the assumptions that continue to make these model results possible appeared in the 1987 book Nuclear War Survival Skills (NWSS), a civil defense manual by Cresson Kearny for the Oak Ridge National Laboratory. According to the 1988 publication An assessment of global atmospheric effects of a major nuclear war, Kearny's criticisms were directed at the excessive amount of soot that the modelers assumed would reach the stratosphere.
Kearny cited a Soviet study concluding that modern cities would not burn as firestorms, as most flammable city items would be buried under non-combustible rubble, and argued that the TTAPS study included a massive overestimate of the size and extent of non-urban wildfires that would result from a nuclear war. The TTAPS authors responded that, amongst other things, they did not believe target planners would intentionally blast cities into rubble, but instead argued fires would begin in relatively undamaged suburbs when nearby sites were hit, and partially conceded his point about non-urban wildfires. Dr. Richard D. Small, director of thermal sciences at the Pacific-Sierra Research Corporation, similarly disagreed strongly with the model assumptions, in particular the 1990 update by TTAPS which argues that some 5,075 Tg of material would burn in a total US-Soviet nuclear war, as analysis by Small of blueprints and real buildings returned a maximum of 1,475 Tg of material that could be burned, "assuming that all the available combustible material was actually ignited". Although Kearny was of the opinion that future, more accurate models would "indicate there will be even smaller reductions in temperature", including future potential models that did not so readily accept that firestorms would occur as dependably as nuclear winter modellers assume, in NWSS Kearny did summarize the comparatively moderate cooling estimate of no more than a few days, from the 1986 Nuclear Winter Reappraised model by Starley Thompson and Stephen Schneider. This was done in an effort to convey to his readers that contrary to the popular opinion at the time, in the conclusion of these two climate scientists, "on scientific grounds the global apocalyptic conclusions of the initial nuclear winter hypothesis can now be relegated to a vanishing low level of probability."
However, a 1988 article by Brian Martin in Science and Public Policy states that—although Nuclear Winter Reappraised concluded the US-Soviet "nuclear winter" would be much less severe than originally thought, with the authors describing the effects more as a "nuclear autumn"—other statements by Thompson and Schneider show that they "resisted the interpretation that this means a rejection of the basic points made about nuclear winter". In the Alan Robock et al. 2007 paper, they write that "because of the use of the term 'nuclear autumn' by Thompson and Schneider [1986], even though the authors made clear that the climatic consequences would be large, in policy circles the theory of nuclear winter is considered by some to have been exaggerated and disproved [e.g., Martin, 1988]." In 2007 Schneider expressed his tentative support for the cooling results of the limited nuclear war (Pakistan and India) analyzed in the 2006 model, saying "The sun is much stronger in the tropics than it is in mid-latitudes. Therefore, a much more limited war [there] could have a much larger effect, because you are putting the smoke in the worst possible place", and "anything that you can do to discourage people from thinking that there is any way to win anything with a nuclear exchange is a good idea." The contribution of smoke from the ignition of live non-desert vegetation, living forests, grasses and so on, nearby to many missile silos is a source of smoke originally assumed to be very large in the initial "Twilight at Noon" paper, and also found in the popular TTAPS publication. However, this assumption was examined by Bush and Small in 1987 and they found that the burning of live vegetation could only conceivably contribute very slightly to the estimated total "nonurban smoke production". 
The vegetation could sustain burning only if it lay within a radius or two of the surface of the nuclear fireball, a distance at which it would also experience extreme blast winds that would influence any such fires. This reduction in the estimate of the non-urban smoke hazard is supported by the earlier preliminary Estimating Nuclear Forest Fires publication of 1984, and by the 1950–60s in-field examination of surface-scorched, mangled but never burnt-down tropical forests on the islands surrounding the shot points in the Operation Castle and Operation Redwing test series. A paper by the United States Department of Homeland Security, finalized in 2010, states that after a nuclear detonation targeting a city, "If fires are able to grow and coalesce, a firestorm could develop that would be beyond the abilities of firefighters to control. However, experts suggest that the nature of modern US city design and construction may make a raging firestorm unlikely." The nuclear bombing of Nagasaki, for example, did not produce a firestorm. This was similarly noted as early as 1986–88, when the assumed quantity of fuel "mass loading" (the amount of fuel per square meter) in cities underpinning the winter models was found to be too high, intentionally creating heat fluxes that loft smoke into the lower stratosphere, yet assessments "more characteristic of conditions" found in real-world modern cities had found that the fuel loading, and hence the heat flux that would result from efficient burning, would rarely loft smoke much higher than 4 km. Russell Seitz, Associate of the Harvard University Center for International Affairs, argues that the winter models' assumptions give results which the researchers want to achieve and is a case of "worst-case analysis run amok".
In September 1986, Seitz published "Siberian fire as 'nuclear winter' guide" in the journal Nature, in which he investigated the 1915 Siberian fire, which started in the early summer months and was caused by the worst drought in the region's recorded history. The fire ultimately devastated the region, burning an area of the world's largest boreal forest roughly the size of Germany. While approximately 8 °C of daytime summer cooling occurred under the smoke clouds during the weeks of burning, no increase in potentially devastating agricultural night frosts occurred. Following his investigation into the Siberian fire of 1915, Seitz criticized the "nuclear winter" model results for being based on successive worst-case events: Seitz cited Carl Sagan, adding an emphasis: "In almost any realistic case involving nuclear exchanges between the superpowers, global environmental changes sufficient to cause an extinction event equal to or more severe than that of the close of the Cretaceous when the dinosaurs and many other species died out are likely." Seitz comments: "The ominous rhetoric italicized in this passage puts even the 100 megaton [the original 100 city firestorm] scenario ... on a par with the 100 million megaton blast of an asteroid striking the Earth. This [is] astronomical mega-hype ..." Seitz concludes: Seitz's opposition caused the proponents of nuclear winter to issue responses in the media. The proponents believed it was simply necessary to show only the possibility of climatic catastrophe, often a worst-case scenario, while opponents insisted that to be taken seriously, nuclear winter should be shown as likely under "reasonable" scenarios. One of these areas of contention, as elucidated by Lynn R. Anspaugh, is the question of which season should be used as the backdrop for the US-USSR war models. Most models choose the summer in the Northern Hemisphere as the start point to produce the maximum soot lofting and therefore eventual winter effect.
However, it has been pointed out that if the same number of firestorms occurred in the autumn or winter months, when there is much less intense sunlight to loft soot into a stable region of the stratosphere, the magnitude of the cooling effect would be negligible, according to a January model run by Covey et al. Schneider conceded the issue in 1990, saying "a war in late fall or winter would have no appreciable [cooling] effect". Anspaugh also expressed frustration that although a managed forest fire in Canada on 3 August 1985 is said to have been lit by proponents of nuclear winter, with the fire potentially serving as an opportunity to do some basic measurements of the optical properties of the smoke and smoke-to-fuel ratio, which would have helped refine the estimates of these critical model inputs, the proponents did not indicate that any such measurements were made. Peter V. Hobbs, who would later successfully attain funding to fly into and sample the smoke clouds from the Kuwait oil fires in 1991, also expressed frustration that he was denied funding to sample the Canadian, and other forest fires in this way. Turco wrote a 10-page memorandum with information derived from his notes and some satellite images, claiming that the smoke plume reached 6 km in altitude. In 1986, atmospheric scientist Joyce Penner from the Lawrence Livermore National Laboratory published an article in Nature in which she focused on the specific variables of the smoke's optical properties and the quantity of smoke remaining airborne after the city fires. She found that the published estimates of these variables varied so widely that depending on which estimates were chosen the climate effect could be negligible, minor or massive. The assumed optical properties for black carbon in more recent nuclear winter papers in 2006 are still "based on those assumed in earlier nuclear winter simulations". 
John Maddox, editor of the journal Nature, issued a series of skeptical comments about nuclear winter studies during his tenure. Similarly, S. Fred Singer was a long-term vocal critic of the hypothesis in the journal and in televised debates with Carl Sagan. Critical response to the more modern papers In a 2011 response to the more modern papers on the hypothesis, Russell Seitz published a comment in Nature challenging Alan Robock's claim that there has been no real scientific debate about the "nuclear winter" concept. Seitz also contends that many others are reluctant to speak out for fear of being stigmatized as "closet Dr. Strangeloves"; in 1986, for example, physicist Freeman Dyson of Princeton stated "It's an absolutely atrocious piece of science, but I quite despair of setting the public record straight." According to the Rocky Mountain News, Stephen Schneider had been called a fascist by some disarmament supporters for having written his 1986 article "Nuclear Winter Reappraised." MIT meteorologist Kerry Emanuel similarly wrote in a review in Nature that the winter concept is "notorious for its lack of scientific integrity" due to the unrealistic estimates selected for the quantity of fuel likely to burn and the imprecise global circulation models used. Emanuel ends by stating that the evidence of other models points to substantial scavenging of the smoke by rain. Emanuel also made an "interesting point" about questioning proponents' objectivity when it came to strong emotional or political issues that they hold. William R. Cotton, Professor of Atmospheric Science at Colorado State University, specialist in cloud physics modeling and co-creator of the highly influential and previously mentioned RAMS atmosphere model, had in the 1980s worked on soot rain-out models and supported the predictions made by his own and other nuclear winter models.
However, he has since reversed this position, according to a book co-authored by him in 2007, stating that, amongst other systematically examined assumptions, far more rain out/wet deposition of soot will occur than is assumed in modern papers on the subject: "We must wait for a new generation of GCMs to be implemented to examine potential consequences quantitatively". He also reveals that, in his view, "nuclear winter was largely politically motivated from the beginning". Policy implications During the Cuban Missile Crisis, Fidel Castro and Che Guevara called on the USSR to launch a nuclear first strike against the US in the event of a US invasion of Cuba. In the 1980s, Castro was pressuring
greater concentrations when air is heated to high temperatures. Historical data on residence times of aerosols, albeit a different mixture of aerosols, in this case stratospheric sulfur aerosols and volcanic ash from megavolcano eruptions, appear to be on the one-to-two-year time scale; however, aerosol–atmosphere interactions are still poorly understood. Soot properties Sooty aerosols can have a wide range of properties, as well as complex shapes, making it difficult to determine their evolving atmospheric optical depth value. The conditions present during the creation of the soot are believed to be considerably important to their final properties, with soot generated on the more efficient end of the burning-efficiency spectrum considered almost "elemental carbon black," while on the more inefficient end of the burning spectrum, greater quantities of partially burnt/oxidized fuel are present. These partially burnt "organics" as they are known, often form tar balls and brown carbon during common lower-intensity wildfires, and can also coat the purer black carbon particles. However, as the soot of greatest importance is that which is injected to the highest altitudes by the pyroconvection of the firestorm – a fire being fed with storm-force winds of air – it is estimated that the majority of the soot under these conditions is the more oxidized black carbon. Consequences Climatic effects A study presented at the annual meeting of the American Geophysical Union in December 2006 found that even a small-scale, regional nuclear war could disrupt the global climate for a decade or more.
In a regional nuclear conflict scenario where two opposing nations in the subtropics would each use 50 Hiroshima-sized nuclear weapons (about 15 kilotons each) on major population centers, the researchers estimated as much as five million tons of soot would be released, which would produce a cooling of several degrees over large areas of North America and Eurasia, including most of the grain-growing regions. The cooling would last for years, and, according to the research, could be "catastrophic". Ozone depletion Nuclear detonations produce large amounts of nitrogen oxides by breaking down the air around them. These are then lifted upwards by thermal convection. As they reach the stratosphere, these nitrogen oxides are capable of catalytically breaking down the ozone present in this part of the atmosphere. Ozone depletion would allow a much greater intensity of harmful ultraviolet radiation from the sun to reach the ground. A 2008 study by Michael J. Mills et al., published in the Proceedings of the National Academy of Sciences, found that a nuclear weapons exchange between Pakistan and India using their current arsenals could create a near-global ozone hole, triggering human health problems and causing environmental damage for at least a decade. The computer-modeled study looked at a nuclear war between the two countries involving 50 Hiroshima-sized nuclear devices on each side, producing massive urban fires and lofting about five million metric tons of soot into the stratosphere. The soot would absorb enough solar radiation to heat surrounding gases, increasing the breakdown of the stratospheric ozone layer protecting Earth from harmful ultraviolet radiation, with up to 70% ozone loss at northern high latitudes.
Nuclear summer A "nuclear summer" is a hypothesized scenario in which, after a nuclear winter (caused by aerosols inserted into the atmosphere that prevent sunlight from reaching lower levels or the surface) has abated, a greenhouse effect then occurs due to carbon dioxide released by combustion and methane released from the decay of dead organic matter and corpses that froze during the nuclear winter. In another, more sequential hypothetical scenario, following the settling out of most of the aerosols in 1–3 years, the cooling effect would be overcome by a heating effect from greenhouse warming, which would raise surface temperatures rapidly by many degrees, enough to cause the death of much if not most of the life that had survived the cooling, much of which is more vulnerable to higher-than-normal temperatures than to lower-than-normal temperatures. The nuclear detonations would release CO2 and other greenhouse gases from burning, followed by more released from the decay of dead organic matter. The detonations would also insert nitrogen oxides into the stratosphere that would then deplete the ozone layer around the Earth. Other, more straightforward versions of the hypothesis that nuclear winter might give way to a nuclear summer exist. The high temperatures of the nuclear fireballs could destroy the ozone gas of the middle stratosphere. History Early work In 1952, a few weeks prior to the Ivy Mike (10.4 megaton) bomb test on Elugelab Island, there were concerns that the aerosols lifted by the explosion might cool the Earth. Major Norair Lulejian, USAF, and astronomer Natarajan Visvanathan studied this possibility, reporting their findings in Effects of Superweapons Upon the Climate of the World, the distribution of which was tightly controlled. This report is described in a 2013 report by the Defense Threat Reduction Agency as the initial study of the "nuclear winter" concept.
It indicated no appreciable chance of explosion-induced climate change. The implications for civil defense of numerous surface bursts of high-yield hydrogen bomb explosions on Pacific Proving Ground islands, such as those of Ivy Mike in 1952 and Castle Bravo (15 Mt) in 1954, were described in a 1957 report on The Effects of Nuclear Weapons, edited by Samuel Glasstone. A section in that book entitled "Nuclear Bombs and the Weather" states: "The dust raised in severe volcanic eruptions, such as that at Krakatoa in 1883, is known to cause a noticeable reduction in the sunlight reaching the earth ... The amount of [soil or other surface] debris remaining in the atmosphere after the explosion of even the largest nuclear weapons is probably not more than about one percent or so of that raised by the Krakatoa eruption. Further, solar radiation records reveal that none of the nuclear explosions to date has resulted in any detectable change in the direct sunlight recorded on the ground." The US Weather Bureau in 1956 regarded it as conceivable that a large enough nuclear war with megaton-range surface detonations could lift enough soil to cause a new ice age. The 1966 RAND Corporation memorandum The Effects of Nuclear War on the Weather and Climate by E. S. Batten primarily analyses potential dust effects from surface bursts, but notes that "in addition to the effects of the debris, extensive fires ignited by nuclear detonations might change the surface characteristics of the area and modify local weather patterns ... however, a more thorough knowledge of the atmosphere is necessary to determine their exact nature, extent, and magnitude."
The United States National Research Council (NRC) book Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations, published in 1975, states that a nuclear war involving 4,000 Mt from present arsenals would probably deposit much less dust in the stratosphere than the Krakatoa eruption, judging that the effect of dust and oxides of nitrogen would probably be slight climatic cooling which "would probably lie within normal global climatic variability, but the possibility of climatic changes of a more dramatic nature cannot be ruled out". In the 1985 report, The Effects on the Atmosphere of a Major Nuclear Exchange, the Committee on the Atmospheric Effects of Nuclear Explosions argues that a "plausible" estimate of the amount of stratospheric dust injected following a surface burst of 1 Mt is 0.3 teragrams, of which 8 percent would be in the micrometer range. The potential cooling from soil dust was again looked at in 1992, in a US National Academy of Sciences (NAS) report on geoengineering, which estimated that about 10¹⁰ kg (10 teragrams) of stratospherically injected soil dust with particulate grain dimensions of 0.1 to 1 micrometer would be required to mitigate the warming from a doubling of atmospheric carbon dioxide, that is, to produce ~2 °C of cooling. In 1969, Paul Crutzen discovered that oxides of nitrogen (NOx) could be an efficient catalyst for the destruction of stratospheric ozone. Following studies in the 1970s on the potential effects of NOx generated by engine heat in stratosphere-flying Supersonic Transport (SST) airplanes, in 1974 John Hampson suggested in the journal Nature that, due to the creation of atmospheric NOx by nuclear fireballs, a full-scale nuclear exchange could result in depletion of the ozone shield, possibly subjecting the earth to ultraviolet radiation for a year or more.
In 1975, Hampson's hypothesis "led directly" to the United States National Research Council (NRC) reporting on the models of ozone depletion following nuclear war in the book Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations. In the section of this 1975 NRC book pertaining to the issue of fireball-generated NOx and the ozone layer loss therefrom, the NRC presents model calculations from the early-to-mid 1970s on the effects of a nuclear war involving large numbers of multi-megaton-yield detonations, which concluded that this could reduce ozone levels by 50 percent or more in the northern hemisphere. However, independent of the computer models presented in the 1975 NRC work, a 1973 paper in the journal Nature depicted stratospheric ozone levels worldwide overlaid upon the number of nuclear detonations during the era of atmospheric testing. The authors concluded that neither the data nor their models showed any correlation between the approximately 500 Mt of historical atmospheric testing and an increase or decrease in ozone concentration. In 1976, a study of experimental measurements of an earlier atmospheric nuclear test's effect on the ozone layer likewise exonerated nuclear detonations of depleting ozone, after the initially alarming model calculations of the time. Similarly, a 1981 paper found that the models of ozone destruction from one test disagreed with the physical measurements taken, as no destruction was observed. In total, about 500 Mt were atmospherically detonated between 1945 and 1971, peaking in 1961–62, when 340 Mt were detonated in the atmosphere by the United States and Soviet Union. During this peak, the multi-megaton-range detonations of the two nations' nuclear test series alone released an estimated 300 Mt of energy.
Due to this, 3 × 10³⁴ additional molecules of nitric oxide (about 5,000 tons per Mt, or 5 × 10⁹ grams per megaton) are believed to have entered the stratosphere, and while an ozone depletion of 2.2 percent was noted in 1963, the decline had started prior to 1961 and is believed to have been caused by other meteorological effects. In 1982, journalist Jonathan Schell, in his popular and influential book The Fate of the Earth, introduced the public to the belief that fireball-generated NOx would destroy the ozone layer to such an extent that crops would fail from solar UV radiation, painting the fate of the Earth as one of plant and aquatic life going extinct. In the same year, 1982, Australian physicist Brian Martin, who frequently corresponded with John Hampson (who had been greatly responsible for much of the examination of NOx generation), penned a short synopsis of the history of interest in the effects of the direct NOx generated by nuclear fireballs, and in doing so also outlined Hampson's other non-mainstream viewpoints, particularly those relating to greater ozone destruction from upper-atmospheric detonations as a result of any widely used anti-ballistic missile (ABM-1 Galosh) system. However, Martin ultimately concludes that it is "unlikely that in the context of a major nuclear war" ozone degradation would be of serious concern. Martin describes views that potential ozone loss, and therefore increases in ultraviolet light, would lead to the widespread destruction of crops, as advocated by Jonathan Schell in The Fate of the Earth, as highly unlikely. More recent estimates of the specific ozone-destroying potential of NOx species are much lower than was earlier assumed from simplistic calculations, as "about 1.2 million tons" of naturally and anthropogenically generated stratospheric NOx is believed to form each year, according to Robert P. Parson in the 1990s.
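The molecule count quoted above can be checked against the per-megaton figure with a back-of-the-envelope calculation (an illustrative sketch only, not part of any cited study; the yield and per-megaton values are taken from the text, and the constants are standard physical values):

```python
# Sanity check of the nitric oxide (NO) figures quoted above.
AVOGADRO = 6.022e23        # molecules per mole
NO_MOLAR_MASS = 30.0       # g/mol for nitric oxide (N: 14, O: 16)

total_yield_mt = 300       # estimated energy of the 1961-62 peak test series, Mt
no_grams_per_mt = 5e9      # "about 5,000 tons per Mt" = 5 x 10^9 g per megaton

total_no_grams = total_yield_mt * no_grams_per_mt       # 1.5e12 g of NO
molecules = total_no_grams / NO_MOLAR_MASS * AVOGADRO

print(f"{molecules:.2e} molecules")  # ~3.01e34, matching the quoted 3 x 10^34
```

The three figures in the text (300 Mt total yield, 5 × 10⁹ g of NO per megaton, 3 × 10³⁴ molecules) are thus mutually consistent.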
Science fiction The first published suggestion that climate cooling could be an effect of a nuclear war appears to have been put forth by Poul Anderson and F. N. Waldrop in their story "Tomorrow's Children", in the March 1947 issue of Astounding Science Fiction magazine. The story, primarily about a team of scientists hunting down mutants, warns of a "Fimbulwinter" caused by dust that blocked sunlight after a recent nuclear war, and speculates that it might even trigger a new Ice Age. Anderson went on to publish a novel based partly on this story in 1961, titling it Twilight World. Similarly, in 1985 T. G. Parsons noted that the story "Torch" by C. Anvil, which appeared in the April 1957 edition of Astounding Science Fiction magazine, contains the essence of the "Twilight at Noon"/"nuclear winter" hypothesis. In the story, a nuclear warhead ignites an oil field, and the soot produced "screens out part of the sun's radiation", resulting in Arctic temperatures for much of the population of North America and the Soviet Union. 1980s The 1988 Air Force Geophysics Laboratory publication An Assessment of Global Atmospheric Effects of a Major Nuclear War by H. S. Muench et al. contains a chronology and review of the major reports on the nuclear winter hypothesis from 1983 to 1986. In general, these reports arrive at similar conclusions, as they are based on "the same assumptions, the same basic data", with only minor model-code differences. They skip the modeling steps of assessing the possibility of fire and the initial fire plumes, and instead start the modeling process with a "spatially uniform soot cloud" which has found its way into the atmosphere.
Although never openly acknowledged by the multi-disciplinary team who authored the most popular 1980s TTAPS model, the American Institute of Physics stated in 2011 that the TTAPS team (named for its participants, who had all previously worked on the phenomenon of dust storms on Mars or in the area of asteroid impact events: Richard P. Turco, Owen Toon, Thomas P. Ackerman, James B. Pollack and Carl Sagan) announced their results in 1983 "with the explicit aim of promoting international arms control". However, "the computer models were so simplified, and the data on smoke and other aerosols were still so poor, that the scientists could say nothing for certain." In 1981, William J. Moran began discussions and research in the National Research Council (NRC) on the airborne soil/dust effects of a large exchange of nuclear warheads, having seen a possible parallel between the dust effects of such a war and those of the asteroid impact that created the K-T boundary, which had received a popular analysis by Luis Alvarez a year earlier, in 1980. An NRC study panel on the topic met in December 1981 and April 1982 in preparation for the release of the NRC's The Effects on the Atmosphere of a Major Nuclear Exchange, published in 1985. As part of a study on the creation of oxidizing species such as NOx and ozone in the troposphere after a nuclear war, launched in 1980 by AMBIO, a journal of the Royal Swedish Academy of Sciences, Paul J. Crutzen and John W. Birks began preparing for the 1982 publication of a calculation on the effects of nuclear war on stratospheric ozone, using the latest models of the time. However, they found that as a result of the trend towards more numerous but less energetic, sub-megaton-range nuclear warheads (made possible by the ceaseless march to increase ICBM warhead accuracy), the ozone layer danger was "not very significant".
It was after being confronted with these results that they "chanced" upon the notion, as "an afterthought", of nuclear detonations igniting massive fires everywhere and, crucially, the smoke from these conventional fires then going on to absorb sunlight, causing surface temperatures to plummet. In early 1982, the two circulated a draft paper with the first suggestions of alterations in short-term climate from fires presumed to occur following a nuclear war. Later in the same year, the special issue of Ambio devoted to the possible environmental consequences of nuclear war by Crutzen and Birks was titled "The Atmosphere after a Nuclear War: Twilight at Noon", and largely anticipated the nuclear winter hypothesis. The paper looked into fires and their climatic effect, and discussed particulate matter from large fires, nitrogen oxide, ozone depletion and the effect of nuclear twilight on agriculture. Crutzen and Birks' calculations suggested that smoke particulates injected into the atmosphere by fires in cities, forests and petroleum reserves could prevent up to 99 percent of sunlight from reaching the Earth's surface. This darkness, they said, could exist "for as long as the fires burned", which was assumed to be many weeks, with effects such as: "The normal dynamic and temperature structure of the atmosphere would ... change considerably over a large fraction of the Northern Hemisphere, which will probably lead to important changes in land surface temperatures and wind systems." An implication of their work was that a successful nuclear decapitation strike could have severe climatic consequences for the perpetrator. After reading a paper by N. P. Bochkov and E. I. Chazov, published in the same edition of Ambio that carried Crutzen and Birks's paper "Twilight at Noon", Soviet atmospheric scientist Georgy Golitsyn applied his research on Mars dust storms to soot in the Earth's atmosphere.
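Crutzen and Birks' figure of up to 99 percent of sunlight blocked can be related to aerosol optical depth through the standard Beer–Lambert attenuation law (a textbook relation, not a calculation from their paper; the optical depth values below are illustrative, not figures from the source):

```python
import math

# Beer-Lambert attenuation of direct sunlight through an aerosol layer:
# transmitted fraction = exp(-tau), where tau is the optical depth.
for tau in (0.5, 1.0, 2.0, 4.6):
    transmitted = math.exp(-tau)
    print(f"tau = {tau}: {transmitted:.1%} of direct sunlight transmitted")

# Optical depth needed to block 99% of direct sunlight (transmit 1%):
tau_99 = -math.log(0.01)
print(f"tau for 99% blocking: {tau_99:.2f}")  # ~4.61
```

This illustrates why an optical depth "much greater than unity", as later reviews put it, corresponds to near-total darkening at the surface.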
The use of these influential Martian dust storm models in nuclear winter research began in 1971, when the Soviet spacecraft Mars 2 arrived at the red planet and observed a global dust cloud. The orbiting instruments, together with the 1971 Mars 3 lander, determined that temperatures on the surface of the red planet were considerably colder than temperatures at the top of the dust cloud. Following these observations, Golitsyn received two telegrams from astronomer Carl Sagan, in which Sagan asked Golitsyn to "explore the understanding and assessment of this phenomenon." Golitsyn recounts that it was around this time that he had "proposed a theory to explain how Martian dust may be formed and how it may reach global proportions." In the same year, Alexander Ginzburg, an employee in Golitsyn's institute, developed a model of dust storms to describe the cooling phenomenon on Mars. Golitsyn felt that his model would be applicable to soot after he read a 1982 Swedish magazine dedicated to the effects of a hypothetical nuclear war between the USSR and the US. Golitsyn used Ginzburg's largely unmodified dust-cloud model, with soot assumed as the aerosol instead of soil dust, and the results were identical in character to those returned when computing dust-cloud cooling in the Martian atmosphere: the cloud high above the planet would be heated, while the planet below would cool drastically. Golitsyn presented his intent to publish this Martian-derived Earth-analog model to the Andropov-instigated Committee of Soviet Scientists in Defence of Peace Against the Nuclear Threat in May 1983, an organization of which Golitsyn would later be appointed vice-chairman. The establishment of this committee was done with the express approval of the Soviet leadership, with the intent "to expand controlled contacts with Western 'nuclear freeze' activists".
Having gained this committee's approval, in September 1983 Golitsyn published the first computer model on the nascent "nuclear winter" effect in the widely read Herald of the Russian Academy of Sciences. On 31 October 1983, Golitsyn and Ginzburg's model and results were presented at the conference on "The World after Nuclear War", hosted in Washington, D.C. Both Golitsyn and Sagan had been interested in the cooling caused by the dust storms on the planet Mars in the years preceding their focus on "nuclear winter". Sagan had also worked on Project A119 in the 1950s–1960s, in which he attempted to model the movement and longevity of a plume of lunar soil. After the publication of "Twilight at Noon" in 1982, the TTAPS team said that they began the process of doing a 1-dimensional computational modeling study of the atmospheric consequences of nuclear war/soot in the stratosphere, though they would not publish a paper in Science magazine until late December 1983. The phrase "nuclear winter" had been coined by Turco just prior to publication. In this early paper, TTAPS used assumption-based estimates on the total smoke and dust emissions that would result from a major nuclear exchange, and with that began analyzing the subsequent effects on the atmospheric radiation balance and temperature structure as a result of this quantity of assumed smoke. To compute dust and smoke effects, they employed a one-dimensional microphysics/radiative-transfer model of the Earth's lower atmosphere (up to the mesopause), which defined only the vertical characteristics of the global climate perturbation. Interest in the environmental effects of nuclear war, however, had continued in the Soviet Union after Golitsyn's September paper, with Vladimir Alexandrov and G. I. Stenchikov also publishing a paper in December 1983 on the climatic consequences, although in contrast to the contemporary TTAPS paper, this paper was based on simulations with a three-dimensional global circulation model.
(Two years later Alexandrov disappeared under mysterious circumstances). Richard Turco and Starley L. Thompson were both critical of the Soviet research. Turco called it "primitive" and Thompson said it used obsolete US computer models. Later they were to rescind these criticisms and instead applauded Alexandrov's pioneering work, saying that the Soviet model shared the weaknesses of all the others. In 1984, the World Meteorological Organization (WMO) commissioned Golitsyn and N. A. Phillips to review the state of the science. They found that studies generally assumed a scenario where half of the world's nuclear weapons would be used, ~5000 Mt, destroying approximately 1,000 cities, and creating large quantities of carbonaceous smoke – 1– being most likely, with a range of 0.2– (NAS; TTAPS assumed ). The smoke resulting would be largely opaque to solar radiation but transparent to infrared, thus cooling the Earth by blocking sunlight, but not creating warming by enhancing the greenhouse effect. The optical depth of the smoke can be much greater than unity. Forest fires resulting from non-urban targets could increase aerosol production further. Dust from near-surface explosions against hardened targets also contributes; each megaton-equivalent explosion could release up to five million tons of dust, but most would quickly fall out; high altitude dust is estimated at 0.1–1 million tons per megaton-equivalent of explosion. Burning of crude oil could also contribute substantially. The 1-D radiative-convective models used in these studies produced a range of results, with coolings up to 15–42 °C between 14 and 35 days after the war, with a "baseline" of about 20 °C. Somewhat more sophisticated calculations using 3-D GCMs produced similar results: temperature drops of about 20 °C, though with regional variations. 
All calculations show large heating (up to 80 °C) at the top of the smoke layer at about ; this implies a substantial modification of the circulation there and the possibility of advection of the cloud into low latitudes and the southern hemisphere. 1990 In a 1990 paper entitled "Climate and Smoke: An Appraisal of Nuclear Winter", TTAPS gave a more detailed description of the short- and long-term atmospheric effects of a nuclear war using a three-dimensional model:
First one to three months:
- 10–25% of soot injected is immediately removed by precipitation, while the rest is transported over the globe in one to two weeks
- SCOPE figures for July smoke injection:
  - 22 °C drop in mid-latitudes
  - 10 °C drop in humid climates
  - 75% decrease in rainfall in mid-latitudes
  - Light level reduction of 0% in low latitudes to 90% in high smoke injection areas
- SCOPE figures for winter smoke injection:
  - Temperature drops between 3 and 4 °C
Following one to three years:
- 25–40% of injected smoke is stabilised in the atmosphere (NCAR); smoke stabilised for approximately one year
- Land temperatures of several degrees below normal
- Ocean surface temperature between 2 and 6 °C
- Ozone depletion of 50%, leading to a 200% increase in UV radiation incident on the surface
Kuwait wells in the first Gulf War One of the major results of TTAPS' 1990 paper was the re-iteration of the team's 1983 model finding that 100 oil refinery fires would be sufficient to bring about a small-scale, but still globally deleterious, nuclear winter. Following Iraq's invasion of Kuwait and Iraqi threats of igniting the country's approximately 800 oil wells, speculation on the cumulative climatic effect of this, presented at the World Climate Conference in Geneva in November 1990, ranged from a nuclear winter type scenario to heavy acid rain and even short-term immediate global warming. In articles printed in the Wilmington Morning Star and the Baltimore Sun newspapers in January 1991, prominent authors of nuclear winter papers – Richard P.
Turco, John W. Birks, Carl Sagan, Alan Robock and Paul Crutzen – collectively stated that they expected catastrophic nuclear-winter-like effects, with continental-sized effects of sub-freezing temperatures, as a result of the Iraqis going through with their threats of igniting 300 to 500 pressurized oil wells that could subsequently burn for several months. As threatened, the wells were set on fire by the retreating Iraqis in March 1991, and the 600 or so burning oil wells were not fully extinguished until November 6, 1991, eight months after the end of the war; they consumed an estimated six million barrels of oil per day at their peak intensity. When Operation Desert Storm began in January 1991, coinciding with the first few oil fires being lit, Dr. S. Fred Singer and Carl Sagan discussed the possible environmental effects of the Kuwaiti petroleum fires on the ABC News program Nightline. Sagan again argued that some of the effects of the smoke could be similar to the effects of a nuclear winter, with smoke lofting into the stratosphere, beginning around above sea level in Kuwait, resulting in global effects. He also argued that he believed the net effects would be very similar to the explosion of the Indonesian volcano Tambora in 1815, which resulted in the year 1816 being known as the "Year Without a Summer". Sagan listed modeling outcomes that forecast effects extending to South Asia, and perhaps to the Northern Hemisphere as well. Sagan stressed this outcome was so likely that "It should affect the war plans." Singer, on the other hand, anticipated that the smoke would go to an altitude of about and then be rained out after about three to five days, thus limiting the lifetime of the smoke.
Both height estimates made by Singer and Sagan turned out to be wrong, albeit with Singer's narrative being closer to what transpired: the comparatively minimal atmospheric effects remained limited to the Persian Gulf region, with smoke plumes, in general, lofting to about and a few as high as . Sagan and his colleagues expected that a "self-lofting" of the sooty smoke would occur when it absorbed the sun's heat radiation, with little to no scavenging occurring, whereby the black particles of soot would be heated by the sun and lofted higher and higher into the air, thereby injecting the soot into the stratosphere. From that position, they argued, it would take years for the sun-blocking aerosol of soot to fall out of the air, bringing with it catastrophic ground-level cooling and agricultural effects in Asia, and possibly in the Northern Hemisphere as a whole. In a 1992 follow-up, Peter Hobbs and others observed no appreciable evidence for the nuclear winter team's predicted massive "self-lofting" effect, and the oil-fire smoke clouds contained less soot than the nuclear winter modelling team had assumed. The atmospheric scientist tasked by the National Science Foundation with studying the atmospheric effect of the Kuwaiti fires, Peter Hobbs, stated that the fires' modest impact suggested that "some numbers [used to support the Nuclear Winter hypothesis]... were probably a little overblown." Hobbs found that at the peak of the fires, the smoke absorbed 75 to 80% of the sun's radiation. The particles rose to a maximum of , and when combined with scavenging by clouds, the smoke had a short residency time of at most a few days in the atmosphere.
Pre-war claims of wide-scale, long-lasting and significant global environmental effects were thus not borne out, and were found to have been significantly exaggerated by the media and speculators, with climate models by those not supporting the nuclear winter hypothesis at the time of the fires predicting only more localized effects, such as a daytime temperature drop of ~10 °C within 200 km of the source. Sagan later conceded in his book The Demon-Haunted World that his predictions obviously did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared." The idea of oil well and oil reserve smoke pluming into the stratosphere serving as a main contributor to the soot of a nuclear winter was a central idea of the early climatology papers on the hypothesis; it was considered more of a possible contributor than smoke from cities, as smoke from oil has a higher ratio of black soot and thus absorbs more sunlight. Hobbs compared the papers' assumed "emission factor", or soot-generating efficiency, of ignited oil pools against values measured from the oil pools at Kuwait, which were the greatest soot producers, and found that the soot emissions assumed in the nuclear winter calculations were still "too high". After the results of the Kuwaiti oil fires came into disagreement with those of the core nuclear-winter-promoting scientists, 1990s nuclear winter papers generally attempted to distance themselves from suggesting that oil well and reserve smoke would reach the stratosphere. In 2007, a nuclear winter study noted that modern computer models have been applied to the Kuwait oil fires, finding that individual smoke plumes are not able to loft smoke into the stratosphere, but that smoke from fires covering a large area, like some forest fires, can lift smoke into the stratosphere, and recent evidence suggests that this occurs far more often than previously thought.
The study also suggested that the burning of the comparably smaller cities, which would be expected to follow a nuclear strike, would also loft significant amounts of smoke into the stratosphere. However, this simulation notably contained the assumption that no dry or wet deposition would occur. Recent modeling Between 1990 and 2003, commentators noted that no peer-reviewed papers on "nuclear winter" were published. Based on new work published in 2007 and 2008 by some of the authors of the original studies, several new hypotheses have been put forth, primarily the assessment that as few as 100 firestorms would result in a nuclear winter. However, far from the hypothesis being "new", it drew the same conclusion as earlier 1980s models, which similarly regarded 100 or so city firestorms as a threat. Compared to climate change over the past millennium, even the smallest exchange modeled would plunge the planet into temperatures colder than the Little Ice Age (the period of history between approximately 1600 and 1850 AD). This would take effect instantly, and agriculture would be severely threatened. Larger amounts of smoke would produce larger climate changes, making agriculture impossible for years. In both cases, new climate model simulations show that the effects would last for more than a decade. 2007 study on global nuclear war A study published in the Journal of Geophysical Research in July 2007, titled "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences", used current climate models to look at the consequences of a global nuclear war involving most or all of the world's current nuclear arsenals (which the authors judged to be similar in size to the world's arsenals twenty years earlier).
The authors used a global circulation model, ModelE from the NASA Goddard Institute for Space Studies, which they noted "has been tested extensively in global warming experiments and to examine the effects of volcanic eruptions on climate." The model was used to investigate the effects of a war involving the entire current global nuclear arsenal, projected to release about 150 Tg of smoke into the atmosphere, as well as a war involving about one third of the current nuclear arsenal, projected to release about 50 Tg of smoke. In the 150 Tg case, they found that the cooling caused a weakening of the global hydrological cycle, reducing global precipitation by about 45%. As for the 50 Tg case involving one third of current nuclear arsenals, they said that the simulation "produced climate responses very similar to those for the 150 Tg case, but with about half the amplitude," but that "the time scale of response is about the same." They did not discuss the implications for agriculture in depth, but noted that a 1986 study which assumed no food production for a year projected that "most of the people on the planet would run out of food and starve to death by then" and commented that their own results show that "This period of no food production needs to be extended by many years, making the impacts of nuclear winter even worse than previously thought." 2014 In 2014, Michael J. Mills (at the US National Center for Atmospheric Research, NCAR) et al. published "Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict" in the journal Earth's Future. The authors used computational models developed by NCAR to simulate the climatic effects of a soot cloud that they suggest would result from a regional nuclear war in which 100 "small" (15 kt) weapons are detonated over cities.
Due to the interaction of the soot cloud, the model found that "global ozone losses of 20–50% over populated areas, levels unprecedented in human history, would accompany the coldest average surface temperatures in the last 1000 years. We calculate summer enhancements in UV indices of 30–80% over mid-latitudes, suggesting widespread damage to human health, agriculture, and terrestrial and aquatic ecosystems. Killing frosts would reduce growing seasons by 10–40 days per year for 5 years. Surface temperatures would be reduced for more than 25 years, due to thermal inertia and albedo effects in the ocean and expanded sea ice. The combined cooling and enhanced UV would put significant pressures on global food supplies and could trigger a global nuclear famine." 2018 Researchers at Los Alamos National Laboratory published the results of a multi-scale study of the climate impact of a regional nuclear exchange, the same scenario considered by Robock et al. and by Toon et al. in 2007. Unlike previous studies, this study simulated the processes whereby black carbon would be lofted into the atmosphere, and found that very little would be lofted into the stratosphere and that, as a result, the long-term climate impacts were much lower than those studies had concluded. In particular, "none of the simulations produced a nuclear winter effect," and "the probability of significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely." Research published in the peer-reviewed journal Safety suggested that no nation should possess more than 100 nuclear warheads, because of the blowback effect of "nuclear autumn" on the aggressor nation's own population. 2019 2019 saw the publication of two studies on nuclear winter that build on previous modeling and describe new scenarios of nuclear winter from smaller exchanges of nuclear weapons than had previously been simulated. As in the 2007 study by Robock et al., a 2019 study by Coupe et al.
models a scenario in which 150 Tg of black carbon is released into the atmosphere following an exchange of nuclear weapons between the United States and Russia in which both countries use all of the nuclear weapons that treaties permit them to have. This amount of black carbon far exceeds that which has been emitted into the atmosphere by all volcanic eruptions in the past 1,200 years, but is less than that of the asteroid impact which caused a mass extinction event 66 million years ago. Coupe et al. used the Whole Atmosphere Community Climate Model version 4 (WACCM4), which has a higher resolution and is more effective at simulating aerosols and stratospheric chemistry than the ModelE simulation used by Robock et al. The WACCM4 model simulates that black carbon particles increase to ten times their normal size when they reach the stratosphere; ModelE did not account for this effect. This difference in black carbon particle size results in a greater optical depth in the WACCM4 model across the world for the first two years after the initial injection, due to greater absorption of sunlight in the stratosphere. This would have the effect of increasing stratospheric temperatures by 100 K and would result in ozone depletion slightly greater than ModelE predicted. Another consequence of the larger particle size is an acceleration of the rate at which black carbon particles fall out of the atmosphere: ten years after the injection of black carbon into the atmosphere, WACCM4 predicts 2 Tg will remain, while ModelE predicted 19 Tg. The 2019 model and the 2007 model both predict significant temperature decreases across the globe; however, the increased resolution and particle simulation of the 2019 model predict a greater temperature anomaly in the first six years after injection but a faster return to normal temperatures.
From a few months after the injection through the sixth year of the anomaly, WACCM4 predicts cooler global temperatures than ModelE, with temperatures more than 20 K below normal producing freezing summer temperatures over much of the Northern Hemisphere and a 90% reduction in agricultural growing seasons in the mid-latitudes, including the midwestern United States. WACCM4 simulations also predict a 58% reduction in global annual precipitation from normal levels in years three and four after injection, a 10% greater reduction than predicted by ModelE. Toon et al. simulated a nuclear scenario in 2025 in which India and Pakistan engage in a nuclear exchange in which 100 urban areas in Pakistan and 150 urban areas in India are attacked with nuclear weapons ranging from 15 kt to 100 kt, and examined the effects of black carbon released into the atmosphere from airburst-only detonations. The researchers modeled the atmospheric effects if all weapons were 15 kt, 50 kt, and 100 kt, providing a range into which a nuclear exchange would likely fall given the recent nuclear tests performed by both nations. The ranges provided are large because neither India nor Pakistan is obligated to provide information on its nuclear arsenal, so the extent of those arsenals remains largely unknown. Toon et al. assume that either a firestorm or a conflagration will occur after each detonation, and that the amount of black carbon inserted into the atmosphere from either outcome will be equivalent and of a profound extent; in Hiroshima in 1945, it is estimated that the firestorm released 1,000 times more energy than was released during the nuclear explosion itself. Such a large area being burned would release large amounts of black carbon into the atmosphere. The amount released ranges from 16.1 Tg if all weapons were 15 kt or less to 36.6 Tg if all were 100 kt weapons. 
For the 15 kt to 100 kt range of weapons, the researchers modeled global precipitation reductions of 15% to 30%, temperature reductions between 4 K and 8 K, and ocean temperature decreases of 1 K to 3 K. If all weapons used were 50 kt or more, Hadley cell circulation would be disrupted, causing a 50% decrease in precipitation in the American Midwest. Net primary productivity (NPP) for the oceans decreases by 10% to 20% in the 15 kt and 100 kt scenarios, respectively, while land NPP decreases by between 15% and 30%; particularly affected
patterns and rhyme schemes. Cowley based the principle of his Pindariques on an apparent misunderstanding of Pindar's metrical practice but, nonetheless, others widely imitated his style, with notable success by John Dryden. With Pindar's metre being better understood in the 18th century, the fashion for Pindaric odes faded, though there are notable actual Pindaric odes by Thomas Gray, The Progress of Poesy and The Bard. There was a time when meadow, grove, and stream, The earth, and every common sight, To me did seem Apparelled in celestial light, The glory and the freshness of a dream. It is not now as it hath been of yore;— Turn wheresoe'er I may, By night or day, The things which I have seen I now can see no more.... Our birth is but a sleep and a forgetting: The Soul that rises with us, our life's Star, Hath had elsewhere its setting, And cometh from afar: Not in entire forgetfulness, And not in utter nakedness, But trailing clouds of glory do we come From God, who is our home... (Excerpt from Wordsworth's Intimations of Immortality) Around 1800, William Wordsworth revived Cowley's Pindarick for one of his finest poems, the Intimations of Immortality ode. Others also wrote odes: Samuel Taylor Coleridge, John Keats, and Percy Bysshe Shelley who wrote odes with regular stanza patterns. Shelley's Ode to the West Wind, written in fourteen line terza rima stanzas, is a major poem in the form. Perhaps the greatest odes of the 19th century, however, were Keats's Five Great Odes of 1819, which included "Ode to a Nightingale", "Ode on Melancholy", "Ode on a Grecian Urn", "Ode to Psyche", and "To Autumn". After Keats, there have been
of a colossal new Temple of Olympian Zeus was begun around 520 BC by his sons, Hippias and Hipparchos. They sought to surpass two famous contemporary temples, the Heraion of Samos and the second Temple of Artemis at Ephesus. Designed by the architects Antistates, Callaeschrus, Antimachides and Phormos, the Temple of Olympian Zeus was intended to be built of local limestone in the Doric style on a colossal platform measuring by . It was to be flanked by a double colonnade of eight columns across the front and back and twenty-one on the flanks, surrounding the cella. The work was abandoned when the tyranny was overthrown and Hippias was expelled in 510 BC. Only the platform and some elements of the columns had been completed by that point, and the temple remained in that state for 336 years. The temple was left unfinished during the years of Athenian democracy, apparently, because the Greeks thought it was hubris to build on such a scale. In his treatise Politics, Aristotle cited the temple as an example of how tyrannies engaged the populace in great works for the state (like a white elephant) and left them no time, energy or means to rebel. It was not until 174 BC that the Seleucid king Antiochus IV Epiphanes, who presented himself as the earthly embodiment of Zeus, revived the project and placed the Roman architect Decimus Cossutius in charge. The design was changed to have three rows of eight columns across the front and back of the temple and a double row of twenty on the flanks, for a total of 104 columns. The columns would stand high and in diameter. The building material was changed to the expensive but high-quality Pentelic marble and the order was changed from Doric to Corinthian, marking the first time that this order had been used on the exterior of a major temple. However, the project ground to a halt again in 164 BC with the death of Antiochus. The temple was still only half-finished by that stage. 
Serious damage was inflicted on the partly built temple by Lucius Cornelius Sulla's sack of Athens in 86 BC. While looting the city, Sulla seized some of the incomplete columns and transported them to Rome, where they were re-used in the Temple of Jupiter on the Capitoline Hill. A half-hearted attempt was made to complete the temple during Augustus' reign as the first Roman emperor, but it was not until the accession of Hadrian in the 2nd century AD that the project was finally completed around 638 years after it had begun. Roman era In 124–125 AD, when the strongly Philhellene Hadrian visited Athens, a massive building programme was begun that included the completion of the Temple of Olympian Zeus. A walled marble-paved precinct was constructed around the temple, making it a central focus of the ancient city. Cossutius' design was used with few changes and the temple was formally dedicated by Hadrian in 132, who took the title of "Panhellenios" in commemoration of the occasion. The temple and the surrounding precinct were adorned with numerous statues depicting Hadrian, the gods, and personifications of the Roman provinces. A colossal statue of Hadrian was raised behind the building by the people of Athens in honor of the emperor's generosity. An equally colossal chryselephantine statue of Zeus occupied the cella of the temple. The statue's form of construction was unusual, as the use of chryselephantine was by this time regarded as archaic. It has been suggested that Hadrian was deliberately imitating Phidias' famous statue of Athena Parthenos in the Parthenon, seeking to draw attention to the temple and himself by doing so. 
Pausanias describes the temple as it was in the 2nd century: Before the entrance to the sanctuary of Zeus Olympios [in Athens] – Hadrian the Roman emperor dedicated the temple and the statue, one worth seeing, which in size exceeds all other statues save the colossi at Rhodes and Rome, and is made of ivory and gold with an artistic skill which is remarkable when the size is taken into account – before the entrance, I say, stand statues of Hadrian, two of Thasian stone, two of Egyptian. Before the pillars stand bronze statues which the Athenians call ‘colonies.’ The whole circumference of the precincts is about four stades, and they are full of statues; for every city has dedicated a likeness of the emperor Hadrian, and the Athenians have surpassed them in dedicating, behind the temple, the remarkable colossus. Within the precincts are antiquities: a bronze Zeus, a temple of Kronos and Rhea and an enclosure of Gaia (Earth) surnamed Olympias. Here the floor opens to the width of a cubit, and they say that along this bed flowed off the water after the deluge that occurred in the time of [the mythical king] Deukalion, and into it, they cast every year wheat meal mixed with honey. On a pillar is a statue of Isokrates . . . There are also statues in Phrygian marble of Persians supporting a bronze tripod; both the figures and the tripod are worth seeing. The ancient sanctuary of Zeus Olympios the Athenians say was built by Deukalion, and they cite as evidence that Deukalion lived at Athens a grave which is not far from the present temple. Hadrian constructed other buildings also for the Athenians: a temple
work on polyacetylene and related conductive polymers. Many families of electrically conducting polymers have been identified, including polythiophene, polyphenylene sulfide, and others. J.E. Lilienfeld first proposed the field-effect transistor in 1930, but the first OFET was not reported until 1987, when Koezuka et al. constructed one using polythiophene, which showed extremely high conductivity. Other conductive polymers have been shown to act as semiconductors, and newly synthesized and characterized compounds are reported weekly in prominent research journals. Many review articles exist documenting the development of these materials. In 1987, the first organic diode was produced at Eastman Kodak by Ching W. Tang and Steven Van Slyke. Electrically conductive charge transfer salts In the 1950s, organic molecules were shown to exhibit electrical conductivity. Specifically, the organic compound pyrene was shown to form semiconducting charge-transfer complex salts with halogens. In 1972, researchers found metallic conductivity (conductivity comparable to a metal) in the charge-transfer complex TTF-TCNQ. Light and electrical conductivity André Bernanose was the first person to observe electroluminescence in organic materials, and Ching W. Tang reported fabrication of an OLED device in 1987. The OLED device incorporated a double-layer structure motif composed of copper phthalocyanine and a derivative of perylenetetracarboxylic dianhydride. In 1990, a polymer light-emitting diode was demonstrated by Bradley, Burroughes, and Friend. Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be easily made. In the late 1990s, highly efficient electroluminescent dopants were shown to dramatically increase the light-emitting efficiency of OLEDs. These results suggested that electroluminescent materials could displace traditional hot-filament lighting. 
Subsequent research developed multilayer polymers, and the new field of plastic electronics and organic light-emitting diode (OLED) research and device production grew rapidly. Conductive organic materials Organic conductive materials can be grouped into two main classes: polymers, and conductive molecular solids and salts. Polycyclic aromatic compounds such as pentacene and rubrene often form semiconducting materials when partially oxidized. Conductive polymers are typically intrinsically conductive or at least semiconducting. They sometimes show mechanical properties comparable to those of conventional organic polymers. Both organic synthesis and advanced dispersion techniques can be used to tune the electrical properties of conductive polymers, unlike typical inorganic conductors. Well-studied classes of conductive polymers include polyacetylene, polypyrrole, polythiophenes, and polyaniline. Poly(p-phenylene vinylene) and its derivatives are electroluminescent semiconducting polymers. Poly(3-alkylthiophenes) have been incorporated into prototypes of solar cells and transistors. Organic light-emitting diode An OLED (organic light-emitting diode) consists of a thin film of organic material that emits light under stimulation by an electric current. A typical OLED consists of an anode, a cathode, OLED organic material and a conductive layer. OLED organic materials can be divided into two major families: small-molecule-based and polymer-based. Small-molecule OLEDs (SM-OLEDs) include tris(8-hydroxyquinolinato)aluminium, fluorescent and phosphorescent dyes, and conjugated dendrimers. Fluorescent dyes can be selected according to the desired range of emission wavelengths; compounds like perylene and rubrene are often used. Devices based on small molecules are usually fabricated by thermal evaporation under vacuum. While this method enables the formation of well-controlled homogeneous films, it is hampered by high cost and limited scalability. 
Polymer light-emitting diodes (PLEDs) are generally more efficient than SM-OLEDs. Common polymers used in PLEDs include derivatives of poly(p-phenylene vinylene) and polyfluorene. The emitted color is determined by the structure of the polymer. Compared to thermal evaporation, solution-based methods are better suited to creating films with large dimensions. Organic field-effect transistor An organic field-effect transistor is a field-effect transistor utilizing organic molecules or polymers as the active semiconducting layer. A field-effect transistor (FET) is any semiconductor device that uses an electric field to control the shape of a channel of one type of charge carrier, thereby changing its conductivity. The two major classes of FET are n-type and p-type, classified according to the type of charge carried. In the case of organic FETs (OFETs), p-type compounds are generally more stable than n-type due to the susceptibility of the latter to oxidative damage. As with OLEDs, some OFETs are molecular and some are polymer-based systems. Rubrene-based OFETs show high carrier mobility of 20–40 cm2/(V·s). Another popular OFET material is pentacene. Due to its low solubility in most organic solvents,
it is difficult to fabricate thin-film transistors (TFTs) from pentacene itself using conventional spin-coating or dip-coating methods, but this obstacle can be overcome by using the derivative TIPS-pentacene. Organic electronic devices Organic solar cells could cut the cost of solar power compared with conventional solar-cell manufacturing. Silicon thin-film solar cells on flexible substrates allow a significant cost reduction of large-area photovoltaics for several reasons: the so-called 'roll-to-roll' deposition on flexible sheets is much easier to realize in terms of technological effort than deposition on fragile and heavy glass sheets, and transport and installation of lightweight flexible solar cells also saves cost as compared to cells on glass. Inexpensive polymeric substrates like polyethylene terephthalate (PET) or polycarbonate (PC) have the potential for further cost reduction in photovoltaics. Protomorphous solar cells prove to be a promising concept for efficient and low-cost photovoltaics on cheap and flexible substrates for large-area production as well as small and mobile applications. 
One advantage of printed electronics is that different electrical and electronic components can be printed on top of each other, saving space and increasing reliability, and sometimes they are all transparent. One ink must not damage another, and low-temperature annealing is vital if low-cost flexible materials such as paper and plastic film are to be used. There is much sophisticated engineering and chemistry involved here, with iTi, Pixdro, Asahi Kasei, Merck, BASF, HC Starck, Hitachi Chemical and Frontier Carbon Corporation among the leaders. Electronic devices based on organic compounds are now widely used, with many new products under development. Sony reported the first full-color, video-rate, flexible plastic display made purely of organic materials; television screens based on OLED materials, biodegradable electronics based on organic compounds, and low-cost organic solar cells are also available. Fabrication methods Small-molecule semiconductors are often insoluble, necessitating deposition via vacuum sublimation. Devices based on conductive polymers can be prepared by solution processing methods. Both solution processing and vacuum-based methods produce amorphous and polycrystalline films with variable degrees of disorder. "Wet" coating techniques require polymers to be dissolved in a volatile solvent, filtered and
kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the processor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control. In user mode, programs usually have access to a restricted set of processor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources. The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program is not allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program. Memory management Among other things, a multiprogramming operating system kernel is responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time-share, each program must have independent access to memory. Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. 
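The trap-based transition from user mode to supervisor mode described above can be sketched as a toy model. This is purely illustrative: real hardware enforces the mode switch with privileged instructions, and all names here are invented.

```python
class ToyCPU:
    """Minimal model of dual-mode operation: user code may only enter the
    kernel through a trap (software interrupt), never by calling it directly."""

    def __init__(self):
        self.mode = "user"
        self.syscall_table = {}  # syscall number -> kernel handler

    def register_syscall(self, number, handler):
        self.syscall_table[number] = handler

    def trap(self, number, *args):
        # The software interrupt: switch to supervisor mode, dispatch, return.
        self.mode = "supervisor"
        try:
            return self.syscall_table[number](*args)
        finally:
            self.mode = "user"  # mode is restored even if the handler raises

cpu = ToyCPU()
cpu.register_syscall(4, lambda text: f"wrote {len(text)} bytes")
print(cpu.trap(4, "hello"))   # the handler runs in supervisor mode
print(cpu.mode)               # back in user mode afterwards
```

The essential point the sketch captures is that user code chooses only the syscall number and arguments; which code runs, and in which mode, is decided by the kernel's dispatch table.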
With cooperative memory management, it takes only one misbehaved program to crash the system. Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which does not exist in all computers. In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt that causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short. Because it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program, and reports the error. Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway. Virtual memory The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that is not in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would be if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault. When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. 
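The two outcomes of a fault described above, resolving it by mapping in an allocated page versus killing the process on an illegal access, can be sketched as a toy model. The class, the dict-based page table, and the 4 KiB page size convention are illustrative assumptions, not any real MMU's interface.

```python
class ToyMMU:
    """Illustrative page-table model: accesses to unmapped pages raise a
    'page fault' that the kernel handler may resolve by mapping the page."""

    PAGE_SIZE = 4096

    def __init__(self, allocated_pages):
        self.allocated = set(allocated_pages)  # pages the program may use
        self.resident = set()                  # pages currently mapped

    def access(self, address):
        page = address // self.PAGE_SIZE
        if page not in self.resident:
            return self.page_fault(page)
        return "ok"

    def page_fault(self, page):
        if page in self.allocated:
            self.resident.add(page)   # demand paging: map the page in
            return "ok (faulted in)"
        return "segmentation violation: process killed"

mmu = ToyMMU(allocated_pages={0, 1, 2})
print(mmu.access(100))       # page 0: first touch triggers a resolvable fault
print(mmu.access(100))       # page is now resident
print(mmu.access(5 * 4096))  # page 5 was never allocated
```

The same fault mechanism serves both purposes: the kernel inspects its bookkeeping to decide whether the access is legitimate demand paging or an error.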
This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there. Multitasking Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch. An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop. 
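The contrast between cooperative scheduling and timer-driven preemption can be sketched with Python generators, where each yield stands for one step of execution and a return of control to the kernel. All names are invented for illustration; real schedulers rely on hardware timer interrupts, not iteration.

```python
from itertools import count, islice

def cooperative_scheduler(programs, max_steps=100):
    """Each program runs until it *voluntarily* yields; a program that never
    yielded (an infinite loop) would hang this loop, as described above."""
    trace, ready = [], list(programs)
    while ready and len(trace) < max_steps:
        prog = ready.pop(0)
        try:
            trace.append(next(prog))  # run until the program yields control
            ready.append(prog)        # context switch: re-queue it
        except StopIteration:
            pass                      # program finished
    return trace

def program(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

def preemptive_scheduler(programs, quantum, max_switches):
    """A simulated timer interrupt forces a context switch after `quantum`
    steps, so even an infinite loop cannot monopolize the CPU."""
    trace, ready = [], list(programs)
    for _ in range(max_switches):
        if not ready:
            break
        name, prog = ready.pop(0)
        ran = list(islice(prog, quantum))  # run at most `quantum` steps
        trace.append((name, len(ran)))
        if len(ran) == quantum:            # timer fired: preempt and re-queue
            ready.append((name, prog))
    return trace

print(cooperative_scheduler([program("A", 2), program("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
print(preemptive_scheduler([("ok", iter(range(3))), ("hog", count())],
                           quantum=2, max_switches=5))
# [('ok', 2), ('hog', 2), ('ok', 1), ('hog', 2), ('hog', 2)]
```

Note that the `hog` program is an unbounded iterator, the analogue of an infinite loop: the cooperative loop would never finish with it, while the preemptive version still gives the well-behaved program its turns.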
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.) On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals). Disk access and file systems Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree. Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. 
These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system. While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers. A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices. When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates. 
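The VFS idea, one common API in front of interchangeable file system drivers, can be sketched as a minimal dispatch layer. This is a hypothetical illustration, not any real kernel's interface.

```python
class VFS:
    """Sketch of a virtual file system layer: file system drivers register
    under a mount point, and programs use one common API regardless of
    which driver actually serves the path."""

    def __init__(self):
        self.mounts = {}  # mount point -> file system driver

    def mount(self, point, driver):
        self.mounts[point] = driver

    def read(self, path):
        # Longest-prefix match picks the responsible file system driver.
        point = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[point].read(path[len(point):])

class DictFS:
    """Trivial 'file system driver' backed by an in-memory dict."""
    def __init__(self, files):
        self.files = files
    def read(self, relpath):
        return self.files[relpath]

vfs = VFS()
vfs.mount("/", DictFS({"etc/motd": "hello"}))
vfs.mount("/usb/", DictFS({"photo.jpg": b"\xff\xd8"}))
print(vfs.read("/etc/motd"))       # served by the root driver
print(vfs.read("/usb/photo.jpg"))  # same API, entirely different driver
```

The caller of `read` never learns which driver answered, which is exactly the transparency the VFS provides to programs.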
Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes makes the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ReiserFS, Reiser4, ext3, ext4 and Btrfs in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software). Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system. Device drivers A device driver is a specific type of computer software developed to allow interaction with hardware devices. 
Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized hardware-dependent and operating-system-specific computer program that enables another program, typically an operating system or an applications software package running under the operating system kernel, to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfaces. The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that are more reliable or perform better, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view. Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system.
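The abstraction goal described above can be illustrated with a toy sketch: the operating system dictates one interface, and each driver translates it into its device's own command language. Everything here — the class names, opcodes, and packet formats — is hypothetical, standing in for real hardware I/O.

```python
# Sketch: the OS dictates a single block-device interface; each driver
# translates it into vendor-specific commands (all names are invented).

class BlockDevice:
    """Standard interface the kernel expects from every disk driver."""
    def read_block(self, n):
        raise NotImplementedError

class VendorADisk(BlockDevice):
    def _vendor_cmd(self, opcode, arg):
        return (opcode, arg)                     # stand-in for real hardware I/O
    def read_block(self, n):
        return self._vendor_cmd("RD", n)         # vendor A speaks terse opcodes

class VendorBDisk(BlockDevice):
    def _vendor_cmd(self, packet):
        return packet
    def read_block(self, n):
        return self._vendor_cmd({"op": 0x28, "lba": n})  # vendor B speaks packets

def kernel_read(dev, n):
    # The kernel never cares which vendor it is talking to.
    return dev.read_block(n)
```

A new disk model only needs a new `BlockDevice` subclass; `kernel_read` and everything above it stay unchanged, which is exactly the "new driver, unchanged OS" property the text describes.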
More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do. Networking Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface. Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's IP address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel. Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. 
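The client/server port model described above — each port bound to at most one program that handles requests arriving on it — can be sketched as a small simulation. The `Server` class and handler functions are invented for illustration; a real daemon would bind an actual network socket.

```python
# Sketch of server-side port dispatch: each port number is bound to at most
# one daemon, which handles every request arriving for that port.

class Server:
    def __init__(self):
        self.ports = {}                      # port number -> handler function

    def bind(self, port, daemon):
        if port in self.ports:
            raise OSError(f"port {port} already in use")
        self.ports[port] = daemon

    def handle(self, port, request):
        # Dispatch an incoming request to whichever daemon owns the port.
        return self.ports[port](request)

srv = Server()
srv.bind(80, lambda req: f"HTTP response to {req}")
srv.bind(22, lambda req: f"SSH session for {req}")
```

Trying to bind a second daemon to port 80 raises an error, mirroring the "at most one running program per port" rule.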
Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound (esd) can be easily extended over the network to provide sound from local applications on a remote system's sound hardware. Security A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel. The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs. In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
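The authentication-then-authorization flow described above can be sketched as follows. The user database, grant table, and action names are all hypothetical, and a real system would store salted password hashes rather than the plaintext used here for brevity.

```python
# Sketch: authentication ("who are you?") followed by authorization ("what
# may you do?"), with grants tied to users or to groups they belong to.

USERS = {"alice": {"password": "s3cret", "groups": {"staff"}}}

# (principal, action) pairs; a principal is a user name or a group name.
GRANTS = {("alice", "read:/home/alice"), ("staff", "print:lp0")}

def authenticate(username, password):
    # NOTE: real systems compare salted hashes, never plaintext passwords.
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorized(username, action):
    # A request is allowed if the user, or any group they belong to,
    # holds a grant for the action.
    principals = {username} | USERS[username]["groups"]
    return any((p, action) in GRANTS for p in principals)
```

Note how alice may print via her `staff` group membership even though no grant names her directly — the group-based authorization the text describes.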
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information. Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port. 
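The per-port allow/deny behaviour of a software firewall described above amounts to a rule lookup with a default policy. The rule list below is illustrative only; real firewalls match on far more than the destination port.

```python
# Sketch of a software firewall's decision: the first matching rule wins,
# and unmatched inbound traffic falls through to a default deny.

RULES = [
    {"port": 22, "action": "allow"},     # SSH stays reachable
    {"port": 23, "action": "deny"},      # Telnet blocked even if running
    {"port": 80, "action": "allow"},
]

def filter_packet(port):
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"                        # default-deny policy
```

This is why, as the text notes, an insecure service such as Telnet can be left running: the deny rule on its port stops traffic from ever reaching it.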
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is where the operating system is not running user programs as native code, but instead either emulates a processor or provides a host for a p-code based system such as Java. Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program that can bypass the operating system can also bypass auditing. User interface Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly with windows, icons, menus, and a pointer) is present.
including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application. z/OS UNIX System Services First introduced as the OpenEdition upgrade to MVS/ESA System Product Version 4 Release 3, announced in February 1993 with support for POSIX and other standards, z/OS UNIX System Services is built on top of MVS services and cannot run independently. While IBM initially introduced OpenEdition to satisfy FIPS requirements, several z/OS components now require UNIX services, e.g., TCP/IP. Linux The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel. Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smartwatches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs, it has been widely adopted for use in servers and embedded systems such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers including the top 385. Many of the same computers are also on Green500 (but in different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
Microsoft Windows Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers. The latest version is Windows 11. In 2011, Windows 7 overtook Windows XP as the most common version in use. Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, which only used MS-DOS as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS and 16-bit Windows 3.x drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and ARM microprocessors. In addition, Itanium is still supported in the older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures. Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share. ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows without using any of Microsoft's code. Other There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications.
OpenVMS, formerly from DEC, is still under active development by VMS Software Inc. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group. Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9. Components The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component. Kernel With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc. Program execution The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. 
Executing an application program involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. Interrupts Interrupts are central to most operating systems, as they provide an efficient way to react to the environment. Interrupts cause the central processing unit (CPU) to have a control flow change away from the currently running process. Input/Output (I/O) devices are slower than the CPU's clock signal. Therefore, it would slow down the computer if the CPU had to wait for each I/O to finish. Instead, a computer may implement direct memory access (DMA) I/O. The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system. The scenario below is typical, although the details for some other hardware and operating systems vary significantly. If a computer program in a computer with a direct memory access chip executes a system call to perform a DMA I/O blocking write operation, then the system call might execute the following instructions: Save the contents of the CPU's registers (including the program counter) into the process control block. Create an entry in the device-status table. The operating system maintains this table to keep track of which processes are waiting for which devices. One field in the table is the memory address of the process control block. Place all the characters to be sent to the device into a memory buffer. Write the memory address of the memory buffer into a predetermined device register. Write the buffer size (an integer) into another predetermined register. Execute the machine instruction to begin the writing.
Perform a context switch to the next process in the ready queue. While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device will interrupt the currently running process by asserting an interrupt request. The device will also place an integer onto the device's data bus. Upon accepting the interrupt request, the CPU will: Push the contents of its program counter and program status word onto the call stack. Read the integer from the data bus. The integer is an offset to the interrupt vector table. The vector table's instructions will return control to the operating system. The operating system will then: Access the device-status table. Extract the process control block. Perform a context switch back to the writing process. When the writing process's time slice expires, the CPU will: Pop from the call stack the program status word and set it back to its register. Pop from the call stack the address of the interrupted process' next instruction and set it back into the program counter. The interrupted process will then resume its time slice. Modes Modern computers support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one. At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
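The interrupt-driven blocking-write scenario described above can be condensed into a small simulation. The process names, the device-status table, and the PCB dictionary are simplified stand-ins for the kernel structures the text mentions, not a real implementation.

```python
# Simulation of the blocking DMA write: the caller's state is saved, another
# process runs while the device works, and the completion interrupt restores
# the writer.

device_status = {}          # device name -> PCB of the process waiting on it
device_buffer = {}          # device name -> memory buffer handed to the DMA chip
ready_queue = ["B"]         # another runnable process
running = "A"
log = []

def dma_write(device, data):
    global running
    # Save registers into the PCB and record it in the device-status table.
    device_status[device] = {"process": running, "registers": "saved"}
    device_buffer[device] = data            # buffer + device registers set up
    log.append(f"{running} blocks on write")
    running = ready_queue.pop(0)            # context switch to the next process
    log.append(f"{running} runs while device writes")

def device_interrupt(device):
    global running
    pcb = device_status.pop(device)         # look up who was waiting
    ready_queue.append(running)
    running = pcb["process"]                # context switch back to the writer
    log.append(f"{running} resumes after interrupt")

dma_write("disk0", "hello")
device_interrupt("disk0")
```

After the interrupt fires, process A is running again and B is back on the ready queue — the CPU never idled while the device did its work.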
Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the processor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control. In user mode, programs usually have access to a restricted set of processor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources. The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program. Memory management Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
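The dual-mode behaviour described above — privileged instructions refused in user mode, with a software interrupt as the only doorway into the kernel — can be sketched as a toy CPU. The mode names and the `PRIV_`/`TRAP` mnemonics are invented for the example.

```python
# Sketch of dual-mode operation: privileged instructions only execute in
# supervisor mode, and a software interrupt (TRAP) is how user code enters it.

class CPU:
    def __init__(self):
        self.mode = "user"                   # programs start in user mode

    def execute(self, instruction):
        if instruction == "TRAP":            # software interrupt: enter kernel
            self.mode = "supervisor"
            return "kernel entered"
        if instruction.startswith("PRIV_") and self.mode != "supervisor":
            return "trap: illegal instruction"   # hardware refuses it
        return f"{instruction} ok"

    def return_from_trap(self):
        self.mode = "user"                   # kernel drops back to user mode

cpu = CPU()
```

The same `PRIV_HALT` instruction is rejected before the TRAP and accepted after it — the mode bit, not the instruction, decides.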
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system. Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers. In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program, and reports the error. Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
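Segment-style protection as described above boils down to a base-and-limit check on every access, with an out-of-range access raising a Seg-V and the kernel killing the offender. The segment sizes and process names below are illustrative.

```python
# Sketch of segmentation-based memory protection: each process gets a
# (base, limit) window; any access outside it is a segmentation violation
# and the kernel terminates the offending process.

class Kernel:
    def __init__(self):
        self.segments = {"A": (0, 100), "B": (100, 50)}  # pid -> (base, limit)
        self.killed = []

    def access(self, pid, address):
        base, limit = self.segments[pid]
        if not (base <= address < base + limit):
            self.killed.append(pid)          # Seg-V: terminate and report
            return "segmentation violation"
        return "ok"

k = Kernel()
```

Address 120 is legal for process B (whose window is 100-149) but fatal for process A — the same address, judged against different protection registers.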
Virtual memory The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault. When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there. Multitasking Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. 
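The page-fault and swapping behaviour described in the virtual memory discussion above can be sketched as a tiny MMU simulation. The two-frame limit and the naive first-in eviction policy are illustrative choices, not how real kernels pick victims.

```python
# Sketch of demand paging: the page table starts empty, the first touch of a
# page faults and allocates a frame, and a resident page can be swapped out
# to disk to make room.

class MMU:
    def __init__(self, frames=2):
        self.frames = frames
        self.page_table = {}            # virtual page -> resident flag
        self.swap = set()               # pages currently swapped out
        self.faults = 0

    def touch(self, page):
        if page not in self.page_table:
            self.faults += 1                         # page fault
            if len(self.page_table) >= self.frames:
                victim = next(iter(self.page_table)) # naive eviction policy
                del self.page_table[victim]
                self.swap.add(victim)                # swapped out to disk
            self.page_table[page] = True
            self.swap.discard(page)                  # swapped back in if needed
        return "resident"

mmu = MMU()
```

Touching an already-resident page costs nothing, while touching a new page once all frames are full both faults and pushes a victim out to swap — the "swapped or exchanged on demand" behaviour the text describes.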
Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch. An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program can not only prevent any other programs from using the CPU, but can hang the entire system if it enters an infinite loop. Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.) On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
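The preemptive time slicing described above can be simulated with Python generators standing in for programs: each `yield` is one instruction, and the "timer interrupt" forces a context switch after a fixed quantum. The quantum size and program lengths are arbitrary illustrative choices.

```python
# Sketch of preemptive round-robin scheduling: a timer fires after `quantum`
# instruction steps, preempting the running program and moving it to the back
# of the ready queue.

def program(name, steps):
    for i in range(steps):
        yield f"{name}{i}"          # one yield == one executed instruction

def run(programs, quantum=2):
    trace = []
    queue = list(programs)          # the ready queue
    while queue:
        current = queue.pop(0)
        for _ in range(quantum):    # timer interrupt after `quantum` steps
            try:
                trace.append(next(current))
            except StopIteration:
                break               # program finished early: drop it
        else:
            queue.append(current)   # preempted: back of the ready queue
    return trace

trace = run([program("A", 3), program("B", 2)])
```

The resulting trace interleaves A and B in fixed-size slices, so neither program can monopolise the CPU — the guarantee cooperative multitasking cannot make.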
Disk access and file systems Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree. Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system. While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers. A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. 
On UNIX, this is the language of block devices. When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates. Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes makes the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ReiserFS, Reiser4, ext3, ext4 and Btrfs in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software). Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. 
Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system. Device drivers A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized hardware-dependent computer program which is also operating system specific that enables another program, typically an operating system or applications software package or computer program running under the operating system kernel, to interact transparently with a hardware device, and usually provides the requisite interrupt handling necessary for any necessary asynchronous time-dependent hardware interfacing needs. The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Newer models also are released by manufacturers that provide more reliable or better performance and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. 
The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory, a new device, controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view. Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite
were divorced on November 10, 1947. During his last interview, recorded for The Merv Griffin Show on the evening before his death, Welles called Hayworth "one of the dearest and sweetest women that ever lived ... and we were a long time together—I was lucky enough to have been with her longer than any of the other men in her life." In 1955, Welles married actress Paola Mori (née Countess Paola di Gerfalco), an Italian aristocrat who starred as Raina Arkadin in his 1955 film, Mr. Arkadin. The couple began a passionate affair and were married at her parents' insistence. They were wed in London on May 8, 1955, and never divorced. Croatian-born artist and actress Oja Kodar became Welles's longtime companion both personally and professionally from 1966 onward, and they lived together for some of the last 20 years of his life. Welles had three daughters from his marriages: Christopher Welles Feder (born March 27, 1938, with Virginia Nicolson); Rebecca Welles Manning (December 17, 1944 – October 17, 2004, with Rita Hayworth); and Beatrice Welles (born November 13, 1955, with Paola Mori). Welles is thought to have had a son, British director Michael Lindsay-Hogg (born May 5, 1940), with Irish actress Geraldine Fitzgerald, then the wife of Sir Edward Lindsay-Hogg, 4th baronet. When Lindsay-Hogg was 16, his mother reluctantly discussed the pervasive rumors that his father was Welles; she denied them, but in such detail that he doubted her veracity. Fitzgerald evaded the subject for the rest of her life. Lindsay-Hogg knew Welles, worked with him in the theatre, and met him at intervals throughout Welles's life. After learning that Welles's oldest daughter, Chris, his childhood playmate, had long suspected that he was her brother, Lindsay-Hogg initiated a DNA test that proved inconclusive.
In his 2011 autobiography, Lindsay-Hogg reported that his questions were resolved by his mother's close friend Gloria Vanderbilt, who wrote that Fitzgerald had told her that Welles was his father. A 2015 Welles biography by Patrick McGilligan, however, argues that Welles's paternity was impossible: Fitzgerald left the U.S. for Ireland in May 1939, and her son was conceived before her return in late October, whereas Welles did not travel overseas during that period. After the death of Rebecca Welles Manning, a man named Marc McKerrow was revealed to be her son—and therefore a direct descendant of Orson Welles and Rita Hayworth—after he requested that his adoption records be unsealed. While McKerrow and Rebecca were never able to meet due to her cancer, they were in touch before her death, and he attended her funeral. McKerrow's reactions to the revelation and his meeting with Oja Kodar are documented in the 2008 film Prodigal Sons by his sister Kim Reed. McKerrow died suddenly in his sleep on June 18, 2010, at the age of 44. His death was "...caused by complications from a nocturnal seizure" related to a car accident and resulting injury when he was younger. In the 1940s, Welles had a brief relationship with Maila Nurmi, who, according to the biography Glamour Ghoul: The Passions and Pain of the Real Vampira, Maila Nurmi, became pregnant; since Welles was at the time married to Hayworth, Nurmi gave the child up for adoption. However, the child mentioned in the book was born in 1944, whereas Nurmi revealed in an interview weeks before her death in January 2008 that she had met Welles in a New York casting office in the spring of 1946. Despite an urban legend promoted by Welles, he was not related to Abraham Lincoln's wartime Secretary of the Navy, Gideon Welles. The myth dates back to the first newspaper feature ever written about Welles—"Cartoonist, Actor, Poet and only 10"—in the February 19, 1926, issue of The Capital Times.
The article falsely states that he was descended from "Gideon Welles, who was a member of President Lincoln's cabinet". As presented by Charles Higham in a genealogical chart that introduces his 1985 biography of Welles, Orson Welles's father was Richard Head Welles (born Wells), son of Richard Jones Wells, son of Henry Hill Wells (who had an uncle named Gideon Wells), son of William Hill Wells, son of Richard Wells (1734–1801). Physical characteristics Peter Noble's 1956 biography describes Welles as "a magnificent figure of a man, over six feet tall, handsome, with flashing eyes and a gloriously resonant speaking-voice". Welles said that a voice specialist once told him he was born to be a heldentenor, a heroic tenor, but that when he was young and working at the Gate Theatre in Dublin, he forced his voice down into a bass-baritone. Even as a baby, Welles was prone to illness, including diphtheria, measles, whooping cough, and malaria. From infancy he suffered from asthma, sinus headaches, and backache that was later found to be caused by congenital anomalies of the spine. Foot and ankle trouble throughout his life was the result of flat feet. "As he grew older", Brady wrote, "his ill health was exacerbated by the late hours he was allowed to keep [and] an early penchant for alcohol and tobacco". In 1928, at age 13, Welles was already more than six feet tall (1.83 meters) and weighed over 180 pounds (81.6 kg). His passport recorded his height as six feet three inches (192 cm), with brown hair and green eyes. "Crash diets, [pharmaceutical] drugs, and corsets had slimmed him for his early film roles", wrote biographer Barton Whaley. "Then always back to gargantuan consumption of high-caloric food and booze. By summer 1949, when he was 34, his weight had crept up to a stout 230 pounds (104 kg). In 1953, he ballooned from 250 to 275 pounds (113 to 125 kg). After 1960, he remained permanently obese." 
Religious beliefs When Peter Bogdanovich once asked him about his religion, Welles gruffly replied that it was none of his business, then misinformed him that he was raised Catholic. Although the Welles family was no longer devout, it was fourth-generation Episcopalian and before that, Quaker and Puritan. The funeral of Welles's father, Richard H. Welles, was Episcopalian. In April 1982, when interviewer Merv Griffin asked him about his religious beliefs, Welles replied, "I try to be a Christian. I don't pray really, because I don't want to bore God." Near the end of his life, Welles was dining at Ma Maison, his favorite restaurant in Los Angeles, when proprietor Patrick Terrail conveyed an invitation from the head of the Greek Orthodox Church, who asked Welles to be his guest of honor at divine liturgy at Saint Sophia Cathedral. Welles replied, "Please tell him I really appreciate that offer, but I am an atheist." "Orson never joked or teased about the religious beliefs of others", wrote biographer Barton Whaley. "He accepted it as a cultural artifact, suitable for the births, deaths, and marriages of strangers and even some friends—but without emotional or intellectual meaning for himself." Politics and activities Welles was politically active from the beginning of his career. He remained aligned with left-wing politics and the American Left throughout his life, and always defined his political orientation as "progressive". Despite being a Democrat, he was an outspoken critic of racism in the United States and of the practice of segregation, both of which the Democratic Party supported at the time. He was a strong supporter of Franklin D. Roosevelt and the New Deal and often spoke out on radio in support of progressive politics. He campaigned heavily for Roosevelt in the 1944 election. Welles did not support the 1948 presidential bid of Roosevelt's second vice president Henry A. Wallace for the Progressive Party, later describing Wallace as "a prisoner of the Communist Party" (p. 66). In a 1983 conversation with his friend Roger Hill, Welles recalled: "During a White House dinner, when I was campaigning for Roosevelt, in a toast, with considerable tongue in cheek, he said, 'Orson, you and I are the two greatest actors alive today.' In private that evening, and on several other occasions, he urged me to run for a Senate seat in either California or Wisconsin. He wasn't alone." In the 1980s, Welles still expressed admiration for Roosevelt but also described his presidency as "a semidictatorship" (p. 187). During a 1970 appearance on The Dick Cavett Show, Welles claimed to have met Hitler while hiking in Austria with a teacher who was a "budding Nazi". He said that Hitler made no impression on him at all and that he did not remember him, saying Hitler had no personality whatsoever: "He was invisible. There was nothing there until there were 5,000 people yelling sieg heil." For several years, he wrote a newspaper column on political issues and considered running for the U.S. Senate in 1946, representing his home state of Wisconsin—a seat that was ultimately won by Joseph McCarthy. Welles's political activities were reported on pages 155–157 of Red Channels, the anti-Communist publication that, in part, fueled the already flourishing Hollywood Blacklist. He was in Europe during the height of the Red Scare, thereby adding one more reason for the Hollywood establishment to ostracize him. In 1970, Welles narrated (but did not write) a satirical political record on the rise of President Richard Nixon titled The Begatting of the President. He was a lifelong member of the International Brotherhood of Magicians and the Society of American Magicians. Death and tributes On the evening of October 9, 1985, Welles recorded his final interview on the syndicated TV program The Merv Griffin Show, appearing with biographer Barbara Leaming. "Both Welles and Leaming talked of Welles's life, and the segment was a nostalgic interlude," wrote biographer Frank Brady.
Welles returned to his house in Hollywood and worked into the early hours typing stage directions for the project he and Gary Graver were planning to shoot at UCLA the following day. Welles died sometime on the morning of October 10, following a heart attack. He was found by his chauffeur at around 10 a.m.; the first of Welles's friends to arrive was Paul Stewart. Welles was 70 years old at his death. Welles was cremated by prior agreement with the executor of his estate, Greg Garrison, whose advice about making lucrative TV appearances in the 1970s made it possible for Welles to pay off a portion of the taxes he owed the IRS. A brief private funeral was attended by Paola Mori and Welles's three daughters—the first time they had ever been together. Only a few close friends were invited: Garrison, Graver, Roger Hill and Prince Alessandro Tasca di Cuto. Chris Welles Feder later described the funeral as an awful experience. A public memorial tribute took place November 2, 1985, at the Directors Guild of America Theater in Los Angeles. Host Peter Bogdanovich introduced speakers including Charles Champlin, Geraldine Fitzgerald, Greg Garrison, Charlton Heston, Roger Hill, Henry Jaglom, Arthur Knight, Oja Kodar, Barbara Leaming, Janet Leigh, Norman Lloyd, Dan O'Herlihy, Patrick Terrail and Robert Wise. "I know what his feelings were regarding his death", Joseph Cotten later wrote. "He did not want a funeral; he wanted to be buried quietly in a little place in Spain. He wanted no memorial services ..." Cotten declined to attend the memorial program; instead, he sent a short message, ending with the last two lines of a Shakespeare sonnet that Welles had sent him on his most recent birthday: But if the while I think on thee, dear friend,All losses are restored and sorrows end. 
In 1987 the ashes of Welles and Mori (killed in a 1986 car crash) were taken to Ronda, Spain, and buried in an old well covered by flowers on the rural estate of a longtime friend, bullfighter Antonio Ordóñez. Unfinished projects Welles's reliance on self-production meant that many of his later projects were filmed piecemeal or were not completed. Welles financed his later projects through his own fundraising activities. He often also took on other work to obtain money to fund his own films. Don Quixote In the mid-1950s, Welles began work on Don Quixote, initially a commission from CBS television. Welles expanded the film to feature length, developing the screenplay to take Quixote and Sancho Panza into the modern age. Filming stopped with the death of Francisco Reiguera, the actor playing Quixote, in 1969. Orson Welles continued editing the film into the early 1970s. At the time of his death, the film remained largely a collection of footage in various states of editing. The project and, more important, Welles's conception of the project changed radically over time. A version Oja Kodar supervised, with help from Jess Franco, assistant director during production, was released in 1992 to poor reviews. Frederick Muller, the film editor for The Trial, Chimes at Midnight, and the CBS Special Orson Bag, worked on editing three reels of the original, unadulterated version. When asked in 2013 by a journalist of Time Out for his opinion, he said that he felt that if released without image re-editing but with the addition of ad hoc sound and music, it probably would have been rather successful. The Merchant of Venice In 1969, Welles was given a TV commission to film a condensed adaptation of The Merchant of Venice. Welles completed the film by 1970, but the finished negative was later mysteriously stolen from his Rome production office. 
A restored and reconstructed version of the film, made by using the original script and composer's notes, premiered at pre-opening ceremonies of the 72nd Venice International Film Festival, alongside Othello, in 2015. The Other Side of the Wind In 1970, Welles began shooting The Other Side of the Wind. The film relates the efforts of a film director (played by John Huston) to complete his last Hollywood picture and is largely set at a lavish party. By 1972 the filming was reported by Welles as being "96% complete", though by 1979 Welles had only edited about 40 minutes of the film. In that year, legal complications over the ownership of the film put the negative into a Paris vault. In 2004 director Peter Bogdanovich, who acted in the film, announced his intention to complete the production. On October 28, 2014, Los Angeles-based production company Royal Road Entertainment announced it had negotiated an agreement, with the assistance of producer Frank Marshall, and would purchase the rights to complete and release The Other Side of the Wind. Bogdanovich and Marshall planned to complete Welles's nearly finished film in Los Angeles, aiming to have it ready for screening on May 6, 2015, the 100th anniversary of Welles's birth. Royal Road Entertainment and German producer Jens Koethner Kaul acquired the rights held by Les Films de l'Astrophore and the late Mehdi Boushehri. They reached an agreement with Oja Kodar, who inherited Welles's ownership of the film, and Beatrice Welles, manager of the Welles estate; but at the end of 2015, efforts to complete the film were at an impasse. In March 2017, Netflix acquired distribution rights to the film. That month, the original negative, dailies and other footage arrived in Los Angeles for post-production; the film was completed in 2018. The film premiered at the 75th Venice International Film Festival on August 31, 2018. 
On November 2, 2018, the film debuted in select theaters and on Netflix, forty-eight years after principal photography began. Some footage is included in the documentaries Working with Orson Welles (1993), Orson Welles: One Man Band (1995), and most extensively They'll Love Me When I'm Dead (2018). Other unfinished films and unfilmed screenplays Too Much Johnson Too Much Johnson is a 1938 comedy film written and directed by Welles. Designed as the cinematic aspect of Welles's Mercury Theatre stage presentation of William Gillette's 1894 comedy, the film was not completely edited or publicly screened. Too Much Johnson was considered a lost film until August 2013, with news reports that a pristine print had been discovered in Italy in 2008. A copy restored by the George Eastman House museum was scheduled to premiere October 9, 2013, at the Pordenone Silent Film Festival, with a U.S. premiere to follow. The film was shown at a single screening at the Los Angeles County Museum of Art on May 3, 2014. A single performance of Too Much Johnson, on February 2, 2015, at the Film Forum in New York City, was a great success. Produced by Bruce Goldstein and adapted and directed by Allen Lewis Rickman, it featured the Film Forum Players with live piano. Heart of Darkness Heart of Darkness was Welles's projected first film, in 1940. It was planned in extreme detail and some test shots were filmed; the footage is now lost. It was planned to be entirely shot in long takes from the point of view of the narrator, Marlow, who would be played by Welles; his reflection would occasionally be seen in the window as his boat sailed down river. The project was abandoned because it could not be delivered on budget, and Citizen Kane was made instead. Santa In 1941, Welles planned a film with his then partner, the Mexican actress Dolores del Río. Santa was adapted from the novel by Mexican writer Federico Gamboa. The film would have marked the debut of Dolores del Río in the Mexican cinema. 
Welles revised the script into 13 extraordinary sequences. The high salary demanded by del Río stopped the project. In 1943, the film was finally completed using Welles's settings, directed by Norman Foster and starring Mexican actress Esther Fernández. The Way to Santiago In 1941 Welles also planned a Mexican drama with Dolores del Río, which he gave to RKO to be budgeted. The film was a movie version of the novel of the same name by Arthur Calder-Marshall. In the story, del Río would play Elena Medina, "the most beautiful girl in the world", with Welles playing an American who becomes entangled in a mission to disrupt a Nazi plot to overthrow the Mexican government. Welles planned to shoot in Mexico, but the Mexican government had to approve the story, and this never occurred. The Life of Christ In 1941, Welles received the support of Bishop Fulton Sheen for a retelling of the life of Christ, to be set in the American West in the 1890s. After filming of Citizen Kane was complete, Welles, Perry Ferguson, and Gregg Toland scouted locations in Baja California and Mexico. Welles wrote a screenplay with dialogue from the Gospels of Mark, Matthew, and Luke. "Every word in the film was to be from the Bible — no original dialogue, but done as a sort of American primitive," Welles said, "set in the frontier country in the last century." The unrealized project was revisited by Welles in the 1950s, when he wrote a second unfilmed screenplay, to be shot in Egypt. It's All True Welles did not originally want to direct It's All True, a 1942 documentary about South America, but after its abandonment by RKO, he spent much of the 1940s attempting to buy the negative of his material from RKO, so that he could edit and release it in some form. The footage remained unseen in vaults for decades and was assumed lost. Over 50 years later, some (but not all) of the surviving material saw release in the 1993 documentary It's All True: Based on an Unfinished Film by Orson Welles.
Monsieur Verdoux In 1944, Welles wrote the first-draft script of Monsieur Verdoux, a film that he also intended to direct. Charlie Chaplin initially agreed to star in it, but later changed his mind, saying he had never before been directed by anyone else in a feature. Chaplin bought the film rights and made the film himself in 1947, with some changes. The final film credits Chaplin with the script, "based on an idea by Orson Welles". Cyrano de Bergerac Welles spent around nine months in 1947–48 co-writing the screenplay for Cyrano de Bergerac with Ben Hecht, a project Welles was assigned to direct for Alexander Korda. He began scouting for locations in Europe whilst filming Black Magic, but Korda was short of money and sold the rights to Columbia Pictures, which eventually dismissed Welles from the project and then sold the rights to United Artists, which in turn made a film version in 1950 that was not based on Welles's script. Around the World in Eighty Days After Welles's elaborate musical stage version of this Jules Verne novel, encompassing 38 different sets, opened in 1946, Welles shot some test footage in Morocco in 1947 for a film version. The footage was never edited, funding never came through, and Welles abandoned the project. Nine years later, the stage show's producer Mike Todd made his own award-winning film version of the book. Moby Dick—Rehearsed Moby Dick—Rehearsed was a film version of Welles's 1955 London meta-play, starring Gordon Jackson, Christopher Lee, Patrick McGoohan, and with Welles as Ahab. Using bare, minimalist sets, Welles alternated between scenes of a cast of nineteenth-century actors rehearsing a production of Moby Dick and scenes from Moby Dick itself. Kenneth Williams, a cast member who was apprehensive about the entire project, recorded in his autobiography that Welles's dim, atmospheric stage lighting made some of the footage so dark as to be unwatchable. The entire play was filmed but is now presumed lost.
This was made during one weekend at the Hackney Empire theater. Histoires extraordinaires The producers of Histoires extraordinaires, a 1968 anthology film based on short stories by Edgar Allan Poe, announced in June 1967 that Welles would direct one segment of the omnibus film, based on both "Masque of the Red Death" and "The Cask of Amontillado". Welles withdrew in September 1967 and was replaced. The script, written in English by Welles and Oja Kodar, is in the Filmmuseum München collection. One-Man Band This Monty Python-esque spoof, in which Welles plays all but one of the characters (including two characters in drag), was made around 1968–69. Welles intended this completed sketch to be one of several items in a television special on London. Other items filmed for this special (all included in the "One Man Band" documentary by his partner Oja Kodar) comprised a sketch on Winston Churchill (played in silhouette by Welles), a sketch on peers in a stately home, a feature on London gentlemen's clubs, and a sketch featuring Welles being mocked by his snide Savile Row tailor (played by Charles Gray). Treasure Island Welles wrote two screenplays for Treasure Island in the 1960s, and was eager to seek financial backing to direct it. His plan was to film it in Spain in concert with Chimes at Midnight. Welles intended to play the part of Long John Silver. He wanted Keith Baxter to play Doctor Livesey and John Gielgud to take on the role of Squire Trelawney. Australian-born child actor Fraser MacIntosh (The Boy Cried Murder), then 11 years old, was cast as Jim Hawkins and flown to Spain for the shoot, which would have been directed by Jess Franco. About 70 percent of the Chimes at Midnight cast would have had roles in Treasure Island. However, funding for the project fell through. Eventually, Welles's own screenplay (under the pseudonym of O.W.
Jeeves) was further rewritten, and formed the basis of the 1972 film version directed by John Hough, in which Welles played Long John Silver. The Deep The Deep, an adaptation of Charles Williams's Dead Calm, was entirely set on two boats and shot mostly in close-ups. It was filmed off the coasts of Yugoslavia and the Bahamas between 1966 and 1969, with all but one scene completed. It was originally planned as a commercially viable thriller, to show that Welles could make a popular, successful film. It was put on hold in 1970 when Welles worried that critics would not respond favorably to this film as his theatrical follow-up to the much-lauded Chimes at Midnight, and Welles focused instead on F for Fake. It was abandoned altogether in 1973, perhaps due to the death of its star Laurence Harvey. In a 2015 interview, Oja Kodar blamed Welles's failure to complete the film on Jeanne Moreau's refusal to participate in its dubbing. Dune Dune, Chilean film director Alejandro Jodorowsky's early attempt at adapting Frank Herbert's sci-fi novel, was to star Welles as the evil Baron Vladimir Harkonnen. Jodorowsky had personally chosen Welles for the role, but the planned film never advanced past pre-production. Saint Jack In 1978 Welles was lined up by his long-time protégé Peter Bogdanovich (who was then acting as Welles's de facto agent) to direct Saint Jack, an adaptation of the 1973 Paul Theroux novel about an American pimp in Singapore. Hugh Hefner and Bogdanovich's then-partner Cybill Shepherd were both attached to the project as producers, with Hefner providing finance through his Playboy productions. However, both Hefner and Shepherd became convinced that Bogdanovich himself would be a more commercially viable director than Welles and insisted that Bogdanovich take over. Since Bogdanovich was also in need of work after a series of box office flops, he agreed.
When the film was finally made in 1979 by Bogdanovich and Hefner (but without Welles or Shepherd's participation), Welles felt betrayed and according to Bogdanovich the two "drifted apart a bit". Filming The Trial After the success of his 1978 film Filming Othello made for West German television, and mostly consisting of a monolog to the camera, Welles began shooting scenes for this follow-up film, but never completed it. What Welles did film was an 80-minute question-and-answer session in 1981 with film students asking about the film. The footage was kept by Welles's cinematographer Gary Graver, who donated it to the Munich Film Museum, which then pieced it together with Welles's trailer for the film, into an 83-minute film which is occasionally screened at film festivals. The Big Brass Ring Written by Welles with Oja Kodar, The Big Brass Ring was adapted and filmed by director George Hickenlooper in partnership with writer F.X. Feeney. Both the Welles script and the 1999 film center on a U.S. presidential hopeful in his 40s, his elderly mentor—a former candidate for the Presidency, brought low by homosexual scandal—and the Italian journalist probing for the truth of the relationship between these men. During the last years of his life, Welles struggled to get financing for the planned film; however, his efforts at casting Jack Nicholson, Robert Redford, Warren Beatty, Clint Eastwood, Burt Reynolds and Paul Newman as the main character were unsuccessful. All of the actors turned down the role for various reasons. The Cradle Will Rock In 1984, Welles wrote the screenplay for a film he planned to direct, an autobiographical drama about the 1937 staging of The Cradle Will Rock. Rupert Everett was slated to play the young Welles. However, Welles was unable to acquire funding. Tim Robbins later directed a similar film, but it was not based on Welles's script. 
King Lear At the time of his death, Welles was in talks with a French production company to direct a film version of the Shakespeare play King Lear, in which he would also play the title role. Ada or Ardor: A Family Chronicle Ada or Ardor: A Family Chronicle was an adaptation of Vladimir Nabokov's novel. Welles admired the novel and initiated a film project of the same title in collaboration with the author. Welles flew to Paris to discuss the project personally with Nabokov, who had by then moved from America to Europe. The two had a promising discussion, but the project never came to fruition. Theatre credits Radio credits Filmography Discography Awards and honors 1933: Welles's stage production of Twelfth Night for the Todd School for Boys received first prize from the Chicago Drama League after competition at the Century of Progress Exposition of 1933, the Chicago World's Fair. 1938: As director of the Mercury Theatre, Welles received the New York Drama Study Club Award for "the greatest contribution toward a living, breathing theatre this season". 1939: For "his most conspicuous contribution this last year to the theatre and to radio drama," Welles received the Essex County Symphony Society's first annual Achievement Award. 1941: Citizen Kane received the New York Film Critics Circle Award for Best Picture. 1942: The National Board of Review voted Citizen Kane Best Film of 1941, and recognized Welles for his performance. 1942: Citizen Kane received nine nominations at the 1941 Academy Awards, including Best Picture, Best Director and Best Actor in a Leading Role for Welles. It won the Academy Award for Best Original Screenplay, an award Welles shared with Herman J. Mankiewicz. 1943: The Magnificent Ambersons was nominated for four 1942 Academy Awards, including Best Picture.
1945: On May 24, 1945, the Interracial Film and Radio Guild honored Welles for his contributions to interracial harmony through radio. Presented at the Shrine Auditorium in Los Angeles, the guild's second annual awards ceremony also honored Eddie "Rochester" Anderson, Norman Corwin, Bing Crosby, Bette Davis, Lena Horne, James Wong Howe, Earl Robinson, Nathan Straus and Miguel C. Torres. 1947: The Stranger was nominated for the Golden Lion at the Venice Film Festival. 1952: Othello won the Grand Prix (the precursor of the Palme d'Or) at the 1952 Cannes Film Festival. 1958: Although Universal Pictures did its best to prevent Touch of Evil from being selected for the 1958 Brussels World Film Festival—part of the Expo 58 world's fair—the film received its European premiere and Welles was invited to attend. To his astonishment, Welles collected the two top awards. Touch of Evil received the International Critics Prize, and Welles was recognized for his body of work. 1959: Welles received a special 1958 Peabody Award for The Fountain of Youth, the only unsold TV pilot ever so honored. 1959: For their ensemble work in Compulsion, Orson Welles, Bradford Dillman and Dean Stockwell shared the prize for Best Actor at the Cannes Film Festival. 1966: Chimes at Midnight was screened in competition for the Palme d'Or at the 1966 Cannes Film Festival and won the 20th Anniversary Prize ("Honneure") and the Technical Grand Prize. In Spain, it won the Citizens Writers Circle Award for Best Film. 1968: Welles was nominated for Best Foreign Actor in a Leading Role at the 21st British Academy Film Awards for his performance in Chimes at Midnight. 1970: The Venice Film Festival awarded Welles the Golden Lion for Career Achievement. 1970: Welles was given an Academy Honorary Award for "superlative and distinguished service in the making of motion pictures." Welles did not attend the ceremony: "I didn't go, because I feel like a damn fool at those things. I feel foolish, really foolish. ...
I made a piece of film and said that I was in Spain, and thanked them." 1975: Welles received the American Film Institute Lifetime Achievement Award. 1976: Grammy Award for Best Spoken Word or Non-Musical Album for "Great American Documents", shared with Helen Hayes, Henry Fonda and James Earl Jones. 1978: Welles was presented with the Los Angeles Film Critics Association Career Achievement Award. 1979: Welles received the Grammy Award for Best Spoken Word Recording for the complete motion picture soundtrack for Citizen Kane. 1979: Welles was inducted into the National Association of Broadcasters Broadcasting Hall of Fame. 1981: Welles received a Grammy Award for Best Spoken Word Recording for his role on Donovan's Brain. 1982: In Paris on February 23, 1982, President François Mitterrand presented Welles with the Order of Commander of the Légion d'honneur, the highest civilian decoration in France. 1982: Welles was nominated for Best Supporting Actor in a Motion Picture at the Golden Globe Awards for his role in Butterfly, a role that also earned him a Golden Raspberry Award nomination for Worst Supporting Actor; that award was won by Ed McMahon for the same film, which also won the award for Worst Picture. 1983: Welles was made a member of the Académie des Beaux-Arts. 1983: Welles was an inaugural recipient of the British Film Institute Fellowship. 1984: The Directors Guild of America presented Welles with its greatest honor, the D. W. Griffith Award. 1984: Welles received a Special Fellowship from The Academy of Magical Arts. 1985: Welles received the Career Achievement Award from the National Board of Review. 1988: Welles was inducted into the National Radio Hall of Fame. 1993: The 1992 audiobook version of This is Orson Welles by Welles and Peter Bogdanovich was nominated for a Grammy Award for Best Spoken Word or Non-Musical Album. 1998: In 1998 and 2007, the American Film Institute ranked Citizen Kane as the greatest American movie.
These other Welles films were nominated for the AFI list: The Magnificent Ambersons (1942, director/producer/screenwriter); The Third Man (1949, actor); Touch of Evil (1958, actor/director/screenwriter); and A Man for All Seasons (1966, actor). 1999: The American Film Institute acknowledged Welles as one of the top 25 male motion picture stars of Classic Hollywood cinema in its survey, AFI's 100 Years...100 Stars. 2002: Welles was voted the greatest film director of all time in two British Film Institute polls, of directors and critics. 2002: A highly divergent genus of Hawaiian spiders, Orsonwelles, was named in his honor. 2003: A crater on Mars was named in his honor. 2007: A statue of Welles sculpted by Oja Kodar was installed in the city of Split, Croatia. 2013: On February 10, 2013, the Woodstock Opera House in Woodstock, Illinois, dedicated its stage to Welles, honoring the site of his American debut as a professional theatre director. 2015: Throughout 2015, numerous festivals and events observed the 100th anniversary of Welles's birth. 2017: A survey of critical consensus, best-of lists, and historical retrospectives found Welles to be the second most acclaimed director of all time (behind Alfred Hitchcock). Cultural references Director Peter Jackson cast Montreal actor Jean Guérin as Welles in his 1994 film, Heavenly Creatures. Vincent D'Onofrio portrayed Welles in a cameo appearance in Tim Burton's 1994 film, Ed Wood, where he briefly appears and encourages the eponymous filmmaker to fight for making his movies his own way in spite of his producers. Voice actor Maurice LaMarche is known for his Welles impression, heard in Ed Wood (in which he dubbed the dialogue of Vincent D'Onofrio); the 1994–95 primetime animated series, The Critic; a 2006 episode of The Simpsons; and a 2011 episode of Futurama for which LaMarche won an Emmy Award. 
The voice he created for the character Brain from the animated series Animaniacs and Pinky and the Brain was largely influenced by Welles. The 1996 film The Battle Over Citizen Kane, which chronicles the conflict between Welles and Hearst, was nominated for an Academy Award for Best Documentary Feature. Welles is a recurring character in the Anno Dracula series by author and critic Kim Newman, appearing in Dracula Cha Cha Cha (1998) and Johnny Alucard (2013). In 1999 Welles appeared on a U.S. postage stamp in a scene from Citizen Kane. The United States Postal Service was petitioned to honor Welles with a stamp in 2015, the 100th anniversary of his birth, but the effort did not succeed. The 1999 HBO docudrama, RKO 281, tells the story of the making of Citizen Kane, starring Liev Schreiber as Orson Welles. Tim Robbins's 1999 film Cradle Will Rock chronicles the process and events surrounding Welles and John Houseman's production of Marc Blitzstein's 1937 musical The Cradle Will Rock. Welles is played by actor Angus MacFadyen. Austin Pendleton's 2000 play, Orson's Shadow, concerns the 1960 London production of Eugène Ionesco's play Rhinoceros, directed by Welles and starring Laurence Olivier. First presented by the Steppenwolf Theatre Company in 2000, the play opened off-Broadway in 2005 and had its European premiere in London in 2015. In Michael Chabon's 2000 Pulitzer Prize-winning novel The Amazing Adventures of Kavalier & Clay, the protagonists meet Orson Welles and attend the premiere of Citizen Kane. In the film Fade to Black (2006), a fictional thriller set during Welles's 1948 journey to Rome to star in the movie Black Magic, Danny Huston stars as Welles. Me and Orson Welles (2009), based on Robert Kaplow's 2003 novel, stars Zac Efron as a teenager who convinces Welles (Christian McKay) to cast him in his 1937 production of Julius Caesar. McKay received numerous accolades for his performance, including a BAFTA nomination. 
Welles is the central character in "Ian, George, and George," a novelette by Paul Levinson published in 2013 in Analog Science Fiction and Fact magazine. In 2014, comedic actor Jack Black portrayed Welles in the sketch comedy show Drunk History. A 2014 documentary by Chuck Workman, Magician: The Astonishing Life and Work of Orson Welles, was released to critical acclaim. Rapper Logic samples Orson Welles twice on his 2020 album No Pressure, with a portion of the August 11, 1946 "Orson Welles Commentaries" episode featured as the album's outro track, titled "Obediently Yours". Tom Burke portrayed Welles in David Fincher's 2020 film, Mank, which focuses on Herman J. Mankiewicz, the co-writer of Citizen Kane. Welles is portrayed by three avatars as he comes to grips with his own death in the 2020
Edwards as Desdemona's father Brabantio. Suzanne Cloutier starred as Desdemona and Campbell Playhouse alumnus Robert Coote appeared as Iago's associate Roderigo. Filming was suspended several times as Welles ran out of funds and left for acting jobs, a saga recounted in detail in MacLiammóir's published memoir Put Money in Thy Purse. The American release prints had a technically flawed soundtrack, suffering from a dropout of sound at every quiet moment. Welles's daughter, Beatrice Welles-Smith, restored Othello in 1992 for a wide re-release. The restoration included reconstructing Angelo Francesco Lavagnino's original musical score, which was originally inaudible, and adding ambient stereo sound effects, which were not in the original film. The restoration went on to a successful theatrical run in America. In 1952, Welles continued finding work in England after the success of the Harry Lime radio show. Harry Alan Towers offered Welles another series, The Black Museum, which ran for 52 weeks with Welles as host and narrator. Director Herbert Wilcox offered Welles the part of the murder victim in Trent's Last Case, based on the novel by E. C. Bentley. In 1953, the BBC hired Welles to read an hour of selections from Walt Whitman's epic poem Song of Myself. Towers hired Welles again, to play Professor Moriarty in the radio series The Adventures of Sherlock Holmes, starring John Gielgud and Ralph Richardson. Welles briefly returned to America to make his first appearance on television, starring in the Omnibus presentation of King Lear, broadcast live on CBS on October 18, 1953. Directed by Peter Brook, the production co-starred Natasha Parry, Beatrice Straight and Arnold Moss. In 1954, director George More O'Ferrall offered Welles the title role in the 'Lord Mountdrago' segment of Three Cases of Murder, co-starring Alan Badel. Herbert Wilcox cast Welles as the antagonist in Trouble in the Glen opposite Margaret Lockwood, Forrest Tucker and Victor McLaglen. 
Old friend John Huston cast him as Father Mapple in his 1956 film adaptation of Herman Melville's Moby-Dick, starring Gregory Peck. Mr. Arkadin Welles's next turn as director was the film Mr. Arkadin (1955), which was produced by his political mentor from the 1940s, Louis Dolivet. It was filmed in France, Germany, Spain and Italy on a very limited budget. Based loosely on several episodes of the Harry Lime radio show, it stars Welles as a billionaire who hires a man to delve into the secrets of his past. The film stars Robert Arden, who had worked on the Harry Lime series; Welles's third wife, Paola Mori, whose voice was dubbed by actress Billie Whitelaw; and guest stars Akim Tamiroff, Michael Redgrave, Katina Paxinou and Mischa Auer. Frustrated by his slow progress in the editing room, producer Dolivet removed Welles from the project and finished the film without him. Eventually, five different versions of the film would be released, two in Spanish and three in English. The version that Dolivet completed was retitled Confidential Report. In 2005 Stefan Droessler of the Munich Film Museum oversaw a reconstruction of the surviving film elements. Television projects In 1955, Welles also directed two television series for the BBC. The first was Orson Welles' Sketch Book, a series of six 15-minute shows featuring Welles drawing in a sketchbook to illustrate his reminiscences for the camera (including such topics as the filming of It's All True and the Isaac Woodard case), and the second was Around the World with Orson Welles, a series of six travelogues set in different locations around Europe (such as Vienna, the Basque Country between France and Spain, and England). Welles served as host and interviewer, his commentary including documentary facts and his own personal observations (a technique he would continue to explore in later works). During Episode 3 of Sketch Book, Welles makes a deliberate attack on the abuse of police powers around the world. 
The episode starts with Welles telling the story of Isaac Woodard, an African-American veteran of the South Pacific campaign of World War II who was falsely accused by a bus driver of being drunk and disorderly; the driver then had a policeman remove him from the bus. Rather than being arrested right away, Woodard was beaten into unconsciousness, nearly to the point of death, and when he finally regained consciousness he was permanently blinded. By the time doctors from the US Army located him three weeks later, nothing could be done. Welles assures the audience that he personally saw to it that justice was served to the policeman, although he does not say what form that justice took. Welles then gives other examples of police being granted more power and authority than is necessary. The title of the episode is "The Police". In 1956, Welles completed Portrait of Gina. He left the only copy of it in his room at the Hôtel Ritz in Paris. The film cans would remain in a lost-and-found locker at the hotel for several decades, where they were discovered in 1986, after Welles's death. Return to Hollywood (1956–1959) In 1956, Welles returned to Hollywood. He began filming a projected pilot for Desilu, owned by Lucille Ball and her husband Desi Arnaz, who had recently purchased the former RKO studios. The film was The Fountain of Youth, based on a story by John Collier. Originally deemed not viable as a pilot, the film was not aired until 1958—and won the Peabody Award for excellence. Welles guest starred on television shows including I Love Lucy. On radio, he was narrator of Tomorrow (October 17, 1956), a nuclear holocaust drama produced and syndicated by ABC and the Federal Civil Defense Administration. Welles's next feature film role was in Man in the Shadow for Universal Pictures in 1957, starring Jeff Chandler. 
Touch of Evil Welles stayed on at Universal to direct (and co-star with) Charlton Heston in the 1958 film Touch of Evil, based on Whit Masterson's novel Badge of Evil. Originally only hired as an actor, Welles was promoted to director by Universal Studios at the insistence of Charlton Heston. The film reunited many actors and technicians with whom Welles had worked in Hollywood in the 1940s, including cameraman Russell Metty (The Stranger), makeup artist Maurice Seiderman (Citizen Kane), and actors Joseph Cotten, Marlene Dietrich and Akim Tamiroff. Filming proceeded smoothly, with Welles finishing on schedule and on budget, and the studio bosses praising the daily rushes. Nevertheless, after the end of production, the studio re-edited the film, re-shot scenes, and shot new exposition scenes to clarify the plot. Welles wrote a 58-page memo outlining suggestions and objections, stating that while the film was no longer his version but the studio's, he was still prepared to help with it. In 1978, a longer preview version of the film was discovered and released. As Universal reworked Touch of Evil, Welles began filming his adaptation of Miguel de Cervantes's novel Don Quixote in Mexico, starring Mischa Auer as Quixote and Akim Tamiroff as Sancho Panza. Return to Europe (1959–1970) He continued shooting Don Quixote in Spain and Italy, but replaced Mischa Auer with Francisco Reiguera, and resumed acting jobs. In Italy in 1959, Welles directed his own scenes as King Saul in Richard Pottier's film David and Goliath. In Hong Kong, he co-starred with Curt Jürgens in Lewis Gilbert's film Ferry to Hong Kong. In 1960, in Paris he co-starred in Richard Fleischer's film Crack in the Mirror. In Yugoslavia he starred in Richard Thorpe's film The Tartars and Veljko Bulajić's Battle of Neretva. Throughout the 1960s, filming continued on Quixote on-and-off until the end of the decade, as Welles evolved the concept, tone and ending several times. 
Although he had a complete version of the film shot and edited at least once, he would continue toying with the editing well into the 1980s; he never completed a version of the film he was fully satisfied with, and would junk existing footage and shoot new footage. (In one case, he had a complete cut ready in which Quixote and Sancho Panza end up going to the moon, but he felt the ending was rendered obsolete by the 1969 moon landings and burned 10 reels of this version.) As the process went on, Welles gradually voiced all of the characters himself and provided narration. In 1992, the director Jesús Franco constructed a film out of the portions of Quixote left behind by Welles. Some of the film stock had decayed badly. While the Welles footage was greeted with interest, the post-production by Franco was met with harsh criticism. In 1961, Welles directed In the Land of Don Quixote, a series of eight half-hour episodes for the Italian television network RAI. Similar to the Around the World with Orson Welles series, they presented travelogues of Spain and included Welles's wife, Paola, and their daughter, Beatrice. Though Welles was fluent in Italian, the network was not interested in him providing Italian narration because of his accent, and the series sat unreleased until 1964, by which time the network had added Italian narration of its own. Ultimately, versions of the episodes were released with the original musical score Welles had approved, but without the narration. The Trial In 1962, Welles directed his adaptation of The Trial, based on the novel by Franz Kafka and produced by Michael and Alexander Salkind. The cast included Anthony Perkins as Josef K, Jeanne Moreau, Romy Schneider, Paola Mori and Akim Tamiroff. While filming exteriors in Zagreb, Welles was informed that the Salkinds had run out of money, meaning that there could be no set construction. 
No stranger to shooting on found locations, Welles soon filmed the interiors in the Gare d'Orsay, at that time an abandoned railway station in Paris. Welles thought the location possessed a "Jules Verne modernism" and a melancholy sense of "waiting", both suitable for Kafka. To remain in the spirit of Kafka, Welles set up the cutting room with the film editor Frederick Muller (credited as Fritz Muller) in the station's old, unused, cold and depressing stationmaster's office. The film failed at the box office. Peter Bogdanovich would later observe that Welles found the film riotously funny. Welles also told a BBC interviewer that it was his best film. While filming The Trial, Welles met Oja Kodar, who later became his partner and collaborator for the last 20 years of his life. Welles played a film director in La Ricotta (1963), Pier Paolo Pasolini's segment of the Ro.Go.Pa.G. movie, although his renowned voice was dubbed by Italian writer Giorgio Bassani. He continued taking what work he could find acting, narrating or hosting other people's work, and began filming Chimes at Midnight, which was completed in 1965. Chimes at Midnight Filmed in Spain, Chimes at Midnight was based on Welles's play, Five Kings, in which he drew material from six Shakespeare plays to tell the story of Sir John Falstaff (Welles) and his relationship with Prince Hal (Keith Baxter). The cast includes John Gielgud, Jeanne Moreau, Fernando Rey and Margaret Rutherford; the film's narration, spoken by Ralph Richardson, is taken from the chronicler Raphael Holinshed. Welles held the film in high regard: "It's my favorite picture, yes. If I wanted to get into heaven on the basis of one movie, that's the one I would offer up." In 1966, Welles directed a film for French television, an adaptation of The Immortal Story, by Karen Blixen. Released in 1968, it stars Jeanne Moreau, Roger Coggio and Norman Eshley. The film had a successful run in French theaters. 
At this time Welles met Oja Kodar again, and gave her a letter he had written to her and had been keeping for four years; they would not be parted again. They immediately began a collaboration both personal and professional. The first of these was an adaptation of Blixen's The Heroine, meant to be a companion piece to The Immortal Story and starring Kodar. Unfortunately, funding disappeared after one day's shooting. After completing this film, he appeared in a brief cameo as Cardinal Wolsey in Fred Zinnemann's adaptation of A Man for All Seasons—a role for which he won considerable acclaim. In 1967, Welles began directing The Deep, based on the novel Dead Calm by Charles Williams and filmed off the shore of Yugoslavia. The cast included Jeanne Moreau, Laurence Harvey and Kodar. The film was personally financed by Welles and Kodar, who could not obtain the funds to complete the project; it was abandoned a few years later, after Harvey's death. The surviving footage was eventually edited and released by the Filmmuseum München. In 1968 Welles began filming a TV special for CBS under the title Orson's Bag, combining travelogue, comedy skits and a condensation of Shakespeare's play The Merchant of Venice with Welles as Shylock. In 1969 Welles again called on the film editor Frederick Muller to re-edit the material, and they set up cutting rooms at the Safa Palatino Studios in Rome. Funding for the show sent by CBS to Welles in Switzerland was seized by the IRS. Without funding, the show was not completed. The surviving portions of the film were eventually released by the Filmmuseum München. In 1969, Welles authorized the use of his name for a cinema in Cambridge, Massachusetts. The Orson Welles Cinema remained in operation until 1986, with Welles making a personal appearance there in 1977. Also in 1969, he played a supporting role in John Huston's The Kremlin Letter. 
Drawn by the numerous offers he received to work in television and films, and upset by a tabloid scandal reporting his affair with Kodar, Welles abandoned the editing of Don Quixote and moved back to America in 1970. Later career (1970–1985) Welles returned to Hollywood, where he continued to self-finance his film and television projects. While offers to act, narrate and host continued, Welles also found himself in great demand on television talk shows. He made frequent appearances for Dick Cavett, Johnny Carson, Dean Martin and Merv Griffin. Welles's primary focus during his final years was The Other Side of the Wind, a project that was filmed intermittently between 1970 and 1976. Co-written by Welles and Oja Kodar, it is the story of an aging film director (John Huston) looking for funds to complete his final film. The cast includes Peter Bogdanovich, Susan Strasberg, Norman Foster, Edmond O'Brien, Cameron Mitchell and Dennis Hopper. The film was financed by Iranian backers, and its ownership fell into a legal quagmire after the Shah of Iran was deposed. The legal disputes kept the film in its unfinished state until early 2017, and it was finally released in November 2018. Welles portrayed Louis XVIII of France in the 1970 film Waterloo, and narrated the beginning and ending scenes of the historical comedy Start the Revolution Without Me (1970). In 1971, Welles directed a short adaptation of Moby-Dick, a one-man performance on a bare stage, reminiscent of his 1955 stage production Moby Dick—Rehearsed. Never completed, it was eventually released by the Filmmuseum München. He also appeared in Ten Days' Wonder, co-starring with Anthony Perkins and directed by Claude Chabrol (who reciprocated with a bit part as himself in Other Wind), based on a detective novel by Ellery Queen. That same year, the Academy of Motion Picture Arts and Sciences gave him an Academy Honorary Award "for superlative artistry and versatility in the creation of motion pictures." 
Welles pretended to be out of town and sent John Huston to claim the award, thanking the Academy on film. In his speech, Huston criticized the Academy for presenting the award while refusing to support Welles's projects. In 1972, Welles acted as on-screen narrator for the film documentary version of Alvin Toffler's 1970 book Future Shock. Working again for a British producer, Welles played Long John Silver in director John Hough's Treasure Island (1972), an adaptation of the Robert Louis Stevenson novel, which had been the second story broadcast by The Mercury Theatre on the Air in 1938. This was the last time he played the lead role in a major film. Welles also contributed to the script, although his writing credit was attributed to the pseudonym 'O. W. Jeeves'. In some versions of the film Welles's original recorded dialog was redubbed by Robert Rietty. In 1973, Welles completed F for Fake, a personal essay film about art forger Elmyr de Hory and the biographer Clifford Irving. Based on an existing documentary by François Reichenbach, it included new material with Oja Kodar, Joseph Cotten, Paul Stewart and William Alland. An excerpt of Welles's 1930s War of the Worlds broadcast was recreated for this film; however, none of the dialogue heard in the film actually matches what was originally broadcast. Welles filmed a five-minute trailer, rejected in the U.S., that featured several shots of a topless Kodar. Welles hosted a British syndicated anthology series, Orson Welles's Great Mysteries, during the 1973–74 television season. His brief introductions to the 26 half-hour episodes were shot in July 1973 by Gary Graver. In 1974, Welles also lent his voice to that year's remake of Agatha Christie's classic thriller Ten Little Indians, produced by his former associate Harry Alan Towers and starring an international cast that included Oliver Reed, Elke Sommer and Herbert Lom. 
In 1975, Welles narrated the documentary Bugs Bunny: Superstar, focusing on Warner Bros. cartoons from the 1940s. Also in 1975, the American Film Institute presented Welles with its third Lifetime Achievement Award (the first two going to director John Ford and actor James Cagney). At the ceremony, Welles screened two scenes from the nearly finished The Other Side of the Wind. In 1976, Paramount Television purchased the rights for the entire set of Rex Stout's Nero Wolfe stories for Orson Welles. Welles had once wanted to make a series of Nero Wolfe movies, but Rex Stout—who was leery of Hollywood adaptations during his lifetime after two disappointing 1930s films—turned him down. Paramount planned to begin with an ABC-TV movie and hoped to persuade Welles to continue the role in a miniseries. Frank D. Gilroy was signed to write the television script and direct the TV movie on the assurance that Welles would star, but by April 1977 Welles had bowed out. In 1980 the Associated Press reported "the distinct possibility" that Welles would star in a Nero Wolfe TV series for NBC television. Again, Welles bowed out of the project due to creative differences and William Conrad was cast in the role. In 1979, Welles completed his documentary Filming Othello, which featured Michael MacLiammóir and Hilton Edwards. Made for West German television, it was also released in theaters. That same year, Welles completed his self-produced pilot for The Orson Welles Show television series, featuring interviews with Burt Reynolds, Jim Henson and Frank Oz and guest-starring the Muppets and Angie Dickinson. Unable to find network interest, the pilot was never broadcast. Also in 1979, Welles appeared in the biopic The Secret of Nikola Tesla, and a cameo in The Muppet Movie as Lew Lord. Beginning in the late 1970s, Welles participated in a series of famous television commercial advertisements. 
For two years he was on-camera spokesman for the Paul Masson Vineyards, and sales grew by one third during the time Welles intoned what became a popular catchphrase: "We will sell no wine before its time." He was also the voice behind the long-running Carlsberg "Probably the best lager in the world" campaign, promoted Domecq sherry on British television and provided narration on adverts for Findus, though the actual adverts have been overshadowed by a famous blooper reel of voice recordings, known as the Frozen Peas reel. He also did commercials for the Preview Subscription Television Service seen on stations around the country including WCLQ/Cleveland, KNDL/St. Louis and WSMW/Boston. As money ran short, he began directing commercials to make ends meet, including the famous British "Follow the Bear" commercials for Hofmeister lager. In 1981, Welles hosted the documentary The Man Who Saw Tomorrow, about Renaissance-era prophet Nostradamus. In 1982, the BBC broadcast The Orson Welles Story in the Arena series. Interviewed by Leslie Megahey, Welles examined his past in great detail, and several people from his professional past were interviewed as well. It was reissued in 1990 as With Orson Welles: Stories of a Life in Film. Welles provided narration for the tracks "Defender" from Manowar's 1987 album Fighting the World and "Dark Avenger" on their 1982 album, Battle Hymns. He also recorded the concert introduction for the live performances of Manowar that says, "Ladies and gentlemen, from the United States of America, all hail Manowar." Manowar have been using this introduction for all of their concerts since then. During the 1980s, Welles worked on such film projects as The Dreamers, based on two stories by Isak Dinesen and starring Oja Kodar, and Orson Welles' Magic Show, which reused material from his failed TV pilot. Another project he worked on was Filming the Trial, the second in a proposed series of documentaries examining his feature films. 
While much was shot for these projects, none of them was completed. All of them were eventually released by the Filmmuseum München. In 1984, Welles narrated the short-lived television series Scene of the Crime. During the early years of Magnum, P.I., Welles was the voice of the unseen character Robin Masters, a famous writer and playboy. After Welles's death, this minor character was largely written out of the series. In an oblique homage to Welles, the Magnum, P.I. producers ambiguously concluded that story arc by having one character accuse another of having hired an actor to portray Robin Masters. In 1984, the penultimate year of his life, he also released a music single, titled "I Know What It Is to Be Young (But You Don't Know What It Is to Be Old)", which he recorded under the Italian label Compagnia Generale del Disco. The song was performed with the Nick Perito Orchestra and the Ray Charles Singers and produced by Jerry Abbott (father of guitarist "Dimebag Darrell" Abbott). Welles's last film roles included voice work in the animated films Enchanted Journey (1984) and The Transformers: The Movie (1986), in which he provided the voice of the planet-eating supervillain Unicron. His last film appearance was in Henry Jaglom's 1987 independent film Someone to Love, released two years after his death but produced before his voice-over in Transformers: The Movie. His last television appearance was on Moonlighting. He recorded an introduction to an episode entitled "The Dream Sequence Always Rings Twice", which was partially filmed in black and white. The episode aired five days after his death and was dedicated to his memory. In the mid-1980s, Henry Jaglom taped lunch conversations with Welles at Los Angeles's Ma Maison as well as in New York. Edited transcripts of these sessions appear in Peter Biskind's 2013 book My Lunches With Orson: Conversations Between Henry Jaglom and Orson Welles. 
Personal life Relationships and family Orson Welles and Chicago-born actress and socialite Virginia Nicolson (1916–1996) were married on November 14, 1934. The couple separated in December 1939 and were divorced on February 1, 1940. Having endured Welles's romances in New York, Virginia learned that he had fallen in love with Mexican actress Dolores del Río. Infatuated with her since adolescence, Welles met del Río at Darryl Zanuck's ranch soon after he moved to Hollywood in 1939. Their relationship was kept secret until 1941, when del Río filed for divorce from her second husband. They openly appeared together in New York while Welles was directing the Mercury stage production Native Son. They acted together in the movie Journey into Fear (1943). Their relationship came to an end due, among other things, to Welles's infidelities. Del Río returned to Mexico in 1943, shortly before Welles married Rita Hayworth. Welles married Rita Hayworth on September 7, 1943. They were divorced on November 10, 1947. During his last interview, recorded for The Merv Griffin Show on the evening before his death, Welles called Hayworth "one of the dearest and sweetest women that ever lived ... and we were a long time together—I was lucky enough to have been with her longer than any of the other men in her life." In 1955, Welles married actress Paola Mori (née Countess Paola di Gerfalco), an Italian aristocrat who starred as Raina Arkadin in his 1955 film, Mr. Arkadin. The couple began a passionate affair, and at her parents' insistence they were wed in London on May 8, 1955; they never divorced. Croatian-born artist and actress Oja Kodar became Welles's longtime companion both personally and professionally from 1966 onward, and they lived together for some of the last 20 years of his life. 
Welles had three daughters from his marriages: Christopher Welles Feder (born March 27, 1938, with Virginia Nicolson); Rebecca Welles Manning (December 17, 1944 – October 17, 2004, with Rita Hayworth); and Beatrice Welles (born November 13, 1955, with Paola Mori). Welles is thought to have had a son, British director Michael Lindsay-Hogg (born May 5, 1940), with Irish actress Geraldine Fitzgerald, then the wife of Sir Edward Lindsay-Hogg, 4th baronet. When Lindsay-Hogg was 16, his mother reluctantly acknowledged the pervasive rumors that his father was Welles and denied them—but in such detail that he doubted her veracity. Fitzgerald evaded the subject for the rest of her life. Lindsay-Hogg knew Welles, worked with him in the theatre and met him at intervals throughout Welles's life. After learning that Welles's oldest daughter, Chris, his childhood playmate, had long suspected that he was her brother, Lindsay-Hogg initiated a DNA test that proved inconclusive. In his 2011 autobiography, Lindsay-Hogg reported that his questions were resolved by his mother's close friend Gloria Vanderbilt, who wrote that Fitzgerald had told her that Welles was his father. A 2015 Welles biography by Patrick McGilligan, however, reports the impossibility of Welles's paternity: Fitzgerald left the U.S. for Ireland in May 1939, and her son was conceived before her return in late October, whereas Welles did not travel overseas during that period. Following the death of Rebecca Welles Manning, a man named Marc McKerrow was revealed to be her son—and therefore a direct descendant of Orson Welles and Rita Hayworth—after he requested that his adoption records be unsealed. While McKerrow and Rebecca were never able to meet due to her cancer, they were in touch before her death, and he attended her funeral. McKerrow's reactions to the revelation and his meeting with Oja Kodar are documented in the 2008 film Prodigal Sons by his sister Kim Reed. 
McKerrow died suddenly in his sleep on June 18, 2010, at the age of 44; his death was "...caused by complications from a nocturnal seizure" related to a car accident and resulting injury when he was younger. In the 1940s, Welles had a brief relationship with Maila Nurmi, who, according to the biography Glamour Ghoul: The Passions and Pain of the Real Vampira, Maila Nurmi, became pregnant; since Welles was at the time married to Hayworth, Nurmi gave the child up for adoption. However, the child mentioned in the book was born in 1944. Nurmi revealed in an interview weeks before her death in January 2008 how she met Welles in a New York casting office in the spring of 1946. Despite an urban legend promoted by Welles, he was not related to Abraham Lincoln's wartime Secretary of the Navy, Gideon Welles. The myth dates back to the first newspaper feature ever written about Welles—"Cartoonist, Actor, Poet and only 10"—in the February 19, 1926, issue of The Capital Times. The article falsely states that he was descended from "Gideon Welles, who was a member of President Lincoln's cabinet". As presented by Charles Higham in a genealogical chart that introduces his 1985 biography of Welles, Orson Welles's father was Richard Head Welles (born Wells), son of Richard Jones Wells, son of Henry Hill Wells (who had an uncle named Gideon Wells), son of William Hill Wells, son of Richard Wells (1734–1801). Physical characteristics Peter Noble's 1956 biography describes Welles as "a magnificent figure of a man, over six feet tall, handsome, with flashing eyes and a gloriously resonant speaking-voice". Welles said that a voice specialist once told him he was born to be a heldentenor, a heroic tenor, but that when he was young and working at the Gate Theatre in Dublin, he forced his voice down into a bass-baritone. Even as a baby, Welles was prone to illness, including diphtheria, measles, whooping cough, and malaria. 
From infancy he suffered from asthma, sinus headaches, and backache that was later found to be caused by congenital anomalies of the spine. Foot and ankle trouble throughout his life was the result of flat feet. "As he grew older", Brady wrote, "his ill health was exacerbated by the late hours he was allowed to keep [and] an early penchant for alcohol and tobacco". In 1928, at age 13, Welles was already more than six feet tall (1.83 meters) and weighed over 180 pounds (81.6 kg). His passport recorded his height as six feet three inches (191 cm), with brown hair and green eyes. "Crash diets, [pharmaceutical] drugs, and corsets had slimmed him for his early film roles", wrote biographer Barton Whaley. "Then always back to gargantuan consumption of high-caloric food and booze. By summer 1949, when he was 34, his weight had crept up to a stout 230 pounds (104 kg). In 1953, he ballooned from 250 to 275 pounds (113 to 125 kg). After 1960, he remained permanently obese." Religious beliefs When Peter Bogdanovich once asked him about his religion, Welles gruffly replied that it was none of his business, then misinformed him that he was raised Catholic. Although the Welles family was no longer devout, it was fourth-generation Episcopalian and before that, Quaker and Puritan. The funeral of Welles's father, Richard H. Welles, was Episcopalian. In April 1982, when interviewer Merv Griffin asked him about his religious beliefs, Welles replied, "I try to be a Christian. I don't pray really, because I don't want to bore God." Near the end of his life, Welles was dining at Ma Maison, his favorite restaurant in Los Angeles, when proprietor Patrick Terrail conveyed an invitation from the head of the Greek Orthodox Church, who asked Welles to be his guest of honor at divine liturgy at Saint Sophia Cathedral. Welles replied, "Please tell him I really appreciate that offer, but I am an atheist." 
"Orson never joked or teased about the religious beliefs of others", wrote biographer Barton Whaley. "He accepted it as a cultural artifact, suitable for the births, deaths, and marriages of strangers and even some friends—but without emotional or intellectual meaning for himself." Politics and activities Welles was politically active from the beginning of his career. He remained aligned with left-wing politics and the American Left throughout his life, and always defined his political orientation as "progressive". Despite being a Democrat, he was an outspoken critic of racism in the United States and of the practice of segregation, both of which were then supported by many in the Democratic Party. He was a strong supporter of Franklin D. Roosevelt and the New Deal and often spoke out on radio in support of progressive politics. He campaigned heavily for Roosevelt in the 1944 election. Welles did not support the 1948 presidential bid of Roosevelt's second vice president Henry A. Wallace for the Progressive Party, later describing Wallace as "a prisoner of the Communist Party." In a 1983 conversation with his friend Roger Hill, Welles recalled: "During a White House dinner, when I was campaigning for Roosevelt, in a toast, with considerable tongue in cheek, he said, 'Orson, you and I are the two greatest actors alive today.' In private that evening, and on several other occasions, he urged me to run for a Senate seat in either California or Wisconsin. He wasn't alone." In the 1980s, Welles still expressed admiration for Roosevelt but also described his presidency as "a semidictatorship." During a 1970 appearance on The Dick Cavett Show, Welles claimed to have met Hitler while hiking in Austria with a teacher who was a "budding Nazi". He said that Hitler made no impression on him at all and that he could not remember him, because Hitler had no personality: "He was invisible. There was nothing there until there were 5,000 people yelling sieg heil." 
For several years, he wrote a newspaper column on political issues and considered running for the U.S. Senate in 1946, representing his home state of Wisconsin—a seat that was ultimately won by Joseph McCarthy. Welles's political activities were reported on pages 155–157 of Red Channels, the anti-Communist publication that, in part, fueled the already flourishing Hollywood Blacklist. He was in Europe during the height of the Red Scare, thereby adding one more reason for the Hollywood establishment to ostracize him. In 1970, Welles narrated (but did not write) a satirical political record on the rise of President Richard Nixon titled The Begatting of the President. He was a lifelong member of the International Brotherhood of Magicians and the Society of American Magicians. Death and tributes On the evening of October 9, 1985, Welles recorded his final interview on the syndicated TV program The Merv Griffin Show, appearing with biographer Barbara Leaming. "Both Welles and Leaming talked of Welles's life, and the segment was a nostalgic interlude," wrote biographer Frank Brady. Welles returned to his house in Hollywood and worked into the early hours typing stage directions for the project he and Gary Graver were planning to
research and development including the Battelle Memorial Institute, information/library companies such as OCLC and Chemical Abstracts Service, steel processing and pressure cylinder manufacturer Worthington Industries, financial institutions such as JPMorgan Chase and Huntington Bancshares, as well as Owens Corning. Fast food chains Wendy's and White Castle are also headquartered in Columbus. Located in Northeast Ohio along the Lake Erie shore, Cleveland is characterized by its New England heritage, ethnic immigrant cultures, and history as a major American manufacturing and healthcare center. It anchors the Cleveland–Akron–Canton Combined Statistical Area, the largest CSA in the state, of which the cities of Akron, Canton, Mansfield, and Youngstown are constituent parts. Northeast Ohio is known for major industrial companies Goodyear Tire and Rubber and Timken, top-ranked colleges Case Western Reserve University, Oberlin College, and Kent State University, the Cleveland Clinic, and cultural attractions including the Cleveland Museum of Art, Big Five group Cleveland Orchestra, Playhouse Square, the Pro Football Hall of Fame, and the Rock and Roll Hall of Fame. Toledo and Lima are the major cities in Northwest Ohio, an area known for its glass-making industry. It is home to Owens Corning and Owens-Illinois, two Fortune 500 corporations. Cincinnati anchors Southwest Ohio and Metro Cincinnati, which also encompasses counties in the neighboring states of Kentucky and Indiana. The metropolitan area is home to Miami University and the University of Cincinnati, Cincinnati Union Terminal, Cincinnati Symphony Orchestra, and various Fortune 500 companies including Procter & Gamble, Kroger, Macy's, Inc., and Fifth Third Bank. Dayton and Springfield are located in the Miami Valley, which is home to the University of Dayton, the Dayton Ballet, and the extensive Wright-Patterson Air Force Base. 
Steubenville is the only metropolitan city in Appalachian Ohio, which is home to Hocking Hills State Park. Metropolitan areas The Cincinnati metropolitan area extends into Kentucky and Indiana, the Steubenville metropolitan area extends into West Virginia, and the Youngstown metropolitan area extends into Pennsylvania. Other metropolitan areas that contain cities in Ohio but are primarily in other states include: Huntington-Ashland, WV-KY-OH Metropolitan Statistical Area (Lawrence County) Wheeling, WV Metropolitan Statistical Area (Belmont County) Additionally, 30 Ohio cities function as centers of micropolitan areas, urban clusters smaller than metropolitan areas. Many of these are included as part of larger combined statistical areas, as shown in the table above. Demographics Population From just over 45,000 residents in 1800, Ohio's population grew by more than 10% per decade (except for the 1940 census) until the 1970 census, which recorded just over 10.65 million Ohioans. Growth then slowed for the next four decades. The United States Census Bureau counted 11,808,848 residents in the 2020 census, a 2.4% increase since the 2010 United States census. Ohio's population growth lags that of the entire United States, and the proportion of whites is greater than the US average. Ohio's center of population is located in Morrow County, in the county seat of Mount Gilead. This is to the south and west of Ohio's population center in 1990. As of 2011, 27.6% of Ohio's children under the age of 1 belonged to minority groups. 6.2% of Ohio's population is under five years of age, 23.7% is under 18 years of age, and 14.1% is 65 or older. Females make up approximately 51.2% of the population. Birth data Note: Births in table do not add up because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. 
Since 2016, data for births of White Hispanic origin are not collected, but included in one Hispanic group; persons of Hispanic origin may be of any race. Ancestry In 2010, there were 469,700 foreign-born residents in Ohio, corresponding to 4.1% of the total population. Of these, 229,049 (2.0%) were naturalized US citizens and 240,699 (2.1%) were not. The largest groups were: Mexico (54,166), India (50,256), China (34,901), Germany (19,219), Philippines (16,410), United Kingdom (15,917), Canada (14,223), Russia (11,763), South Korea (11,307), and Ukraine (10,681). Though predominantly white, Ohio has large black populations in all of its major metropolitan areas, a significant Hispanic population made up of Mexicans in Toledo and Columbus and Puerto Ricans in Cleveland and Columbus, and a significant and diverse Asian population in Columbus. The largest ancestry groups (which the census defines as not including racial terms) in the state are:
26.5% German
14.1% Irish
9.0% English
6.4% Italian
3.8% Polish
2.5% French
1.9% Scottish
1.7% Hungarian
1.6% Dutch
1.5% Mexican
1.2% Slovak
1.1% Welsh
1.1% Scotch-Irish
Ancestries claimed by less than 1% of the population include Sub-Saharan African, Puerto Rican, Swiss, Swedish, Arab, Greek, Norwegian, Romanian, Austrian, Lithuanian, Finnish, West Indian, Portuguese and Slovene. Languages About 6.7% of the population age 5 years and older reported speaking a language other than English, with 2.2% of the population speaking Spanish, 2.6% speaking other Indo-European languages, 1.1% speaking Asian and Austronesian languages, and 0.8% speaking other languages. Numerically: 10,100,586 spoke English, 239,229 Spanish, 55,970 German, 38,990 Chinese, 33,125 Arabic, and 32,019 French. In addition, 59,881 spoke a Slavic language and 42,673 spoke another West Germanic language according to the 2010 census. 
Ohio also had the nation's largest population of Slovene speakers, second largest of Slovak speakers, second largest of Pennsylvania Dutch (German) speakers, and the third largest of Serbian speakers. Religion According to a Pew Forum poll, as of 2014, 73% of Ohioans identified as Christian. Specifically, 29% of Ohio's population identified as Evangelical Protestant, 17% as Mainline Protestant, 7% as Historically Black Protestant, and 18% as Catholic. 22% of the population is unaffiliated with any religious body. Small minorities of Jews (1%), Jehovah's Witnesses (1%), Muslims (1%), Hindus (<1%), Buddhists (1%), Mormons (1%), and other faiths (1-1.5%) exist. According to the Association of Religion Data Archives (ARDA), in 2010 the largest denominations by adherents were the Catholic Church with 1,992,567; the United Methodist Church with 496,232; the Evangelical Lutheran Church in America with 223,253, the Southern Baptist Convention with 171,000, the Christian Churches and Churches of Christ with 141,311, the United Church of Christ with 118,000, and the Presbyterian Church (USA) with 110,000. With about 80,000 adherents in 2020, Ohio has the second largest Amish population of all U.S. states, only behind neighboring Pennsylvania. According to the same data, a majority of Ohioans, 56%, feel religion is "very important", 25% that it is "somewhat important", and 19% that religion is "not too important/not important at all". 38% of Ohioans indicate that they attend religious services at least once weekly, 32% occasionally, and 30% seldom or never. Economy According to the U.S. Census Bureau, the total number for employment in 2016 was 4,790,178. The total number of unique employer establishments was 252,201, while the total number of non-employer establishments was 785,833. In 2010, Ohio was ranked second in the country for best business climate by Site Selection magazine, based on a business-activity database. 
The state has also won three consecutive Governor's Cup awards from the magazine, based on business growth and developments. Ohio's gross domestic product (GDP) was $626 billion. This ranks Ohio's economy as the seventh-largest of all fifty states and the District of Columbia. The Small Business & Entrepreneurship Council ranked the state No. 10 for best business-friendly tax systems in their Business Tax Index 2009, including a top corporate tax and capital gains rate that were both ranked No. 6 at 1.9%. Ohio was ranked No. 11 by the council for best friendly-policy states according to their Small Business Survival Index 2009. The Directorship's Boardroom Guide ranked the state No. 13 overall for best business climate, including No. 7 for best litigation climate. Forbes ranked the state No. 8 for best regulatory environment in 2009. Ohio has five of the top 115 colleges in the nation, according to U.S. News and World Report's 2010 rankings, and was ranked No. 8 by the same magazine in 2008 for best high schools. Ohio's unemployment rate stands at 4.5% as of February 2018, down from 10.7% in May 2010. The state still lacks 45,000 jobs compared to the pre-recession numbers of 2007. The labor force participation rate as of April 2015 is 63%, slightly above the national average. Ohio's per capita income stands at $34,874. Ohio's median household income is $58,642, and 13.1% of the population is below the poverty line. The manufacturing and financial activities sectors each compose 18.3% of Ohio's GDP, making them Ohio's largest industries by percentage of GDP. Ohio has the third largest manufacturing workforce behind California and Texas. Ohio has the largest bioscience sector in the Midwest, and is a national leader in the "green" economy. Ohio is the largest producer in the country of plastics, rubber, fabricated metals, electrical equipment, and appliances. 5,212,000 Ohioans are currently employed by wage or salary. 
By employment, Ohio's largest sector is trade/transportation/utilities, which employs 1,010,000 Ohioans, or 19.4% of Ohio's workforce, while the health care and education sector employs 825,000 Ohioans (15.8%). Government employs 787,000 Ohioans (15.1%), manufacturing employs 669,000 Ohioans (12.9%), and professional and technical services employs 638,000 Ohioans (12.2%). Ohio's manufacturing sector is the third-largest of all fifty U.S. states in terms of gross domestic product. Fifty-nine of the United States' top 1,000 publicly traded companies (by revenue in 2008) are headquartered in Ohio, including Procter & Gamble, Goodyear Tire & Rubber, AK Steel, Timken, Abercrombie & Fitch, and Wendy's. Ohio is also one of 41 states with its own lottery, the Ohio Lottery. The Ohio Lottery has contributed more than $26 billion to education since it began in 1974. Transportation Ground travel Many major east–west transportation corridors go through Ohio. One of those pioneer routes, known in the early 20th century as "Main Market Route 3", was chosen in 1913 to become part of the historic Lincoln Highway, which was the first road across America, connecting New York City to San Francisco. In Ohio, the Lincoln Highway linked many towns and cities together, including Canton, Mansfield, Wooster, Lima, and Van Wert. The arrival of the Lincoln Highway to Ohio was a major influence on the development of the state. Upon the advent of the federal numbered highway system in 1926, the Lincoln Highway through Ohio became U.S. Route 30. Ohio is also home to a portion of the Historic National Road, now U.S. Route 40. Ohio has a highly developed network of roads and interstate highways. Major east–west through routes include the Ohio Turnpike (I-80/I-90) in the north, I-76 through Akron to Pennsylvania, I-70 through Columbus and Dayton, and the Appalachian Highway (State Route 32) running from West Virginia to Cincinnati. 
Major north–south routes include I-75 in the west through Toledo, Dayton, and Cincinnati, I-71 through the middle of the state from Cleveland through Columbus and Cincinnati into Kentucky, and I-77 in the eastern part of the state from Cleveland through Akron, Canton, New Philadelphia and Marietta south into West Virginia. Interstate 75 between Cincinnati and Dayton is one of the most heavily traveled sections of interstate in Ohio. Ohio also has a highly developed network of signed state bicycle routes. Many of them follow rail trails, with conversion ongoing. The Ohio to Erie Trail (route 1) connects Cincinnati, Columbus, and Cleveland. U.S. Bicycle Route 50 traverses Ohio from Steubenville to the Indiana state line outside Richmond. Ohio has several long-distance hiking trails, the most prominent of which is the Buckeye Trail, which extends in a loop around the state of Ohio. Part of it is on roads and part is on wooded trail. Additionally, the North Country Trail (the longest of the eleven National Scenic Trails authorized by Congress) and the American Discovery Trail (a system of recreational trails and roads that collectively form a coast-to-coast route across the mid-tier of the United States) pass through Ohio. Much of the length of these two trails coincides with the Buckeye Trail. Transit Ohio has extensive railroads, though today most are used only by freight companies. Major cities in the north and south of Ohio lie on Amtrak intercity rail lines. The Capitol Limited and the Lake Shore Limited serve Toledo, Cleveland and other northern Ohio cities. The Cardinal serves Cincinnati. Columbus is the largest city in the United States without any form of passenger rail. Its Union Station last saw an intercity train, the National Limited, in 1979. Mass transit exists in many forms in Ohio cities, primarily through bus systems, though Cleveland has both light and heavy rail through the GCRTA, and Cincinnati reestablished a streetcar line in 2016. 
Air travel Ohio has four international airports, four commercial, and two military. The four international airports are Cleveland Hopkins International Airport, John Glenn Columbus International Airport, Dayton International Airport, and Rickenbacker International Airport (one of the two military airfields). The other military airfield is Wright-Patterson Air Force Base, which is one of the largest Air Force bases in the United States. Other major airports are located in Toledo and Akron. Cincinnati's primary airport, Cincinnati/Northern Kentucky International Airport, is in Hebron, Kentucky, and therefore is not included in Ohio airport lists. Transportation lists List of Interstate Highways in Ohio List of U.S. Routes in Ohio List of state routes in Ohio List of Ohio train stations List of Ohio railroads List of rivers of Ohio Historic Ohio Canals Law and government The state government of Ohio consists of the executive, legislative, and judicial branches. Executive branch The executive branch is headed by the governor of Ohio. The current governor is Mike DeWine, a member of the Republican Party, in office since 2019. A lieutenant governor succeeds the governor in the event of any removal from office, and performs any duties assigned by the governor. The current lieutenant governor is Jon Husted. The other elected constitutional offices in the executive branch are the secretary of state (Frank LaRose), auditor (Keith Faber), treasurer (Robert Sprague), and attorney general (Dave Yost). There are 21 state administrative departments in the executive branch. Legislative branch The Ohio General Assembly is a bicameral legislature consisting of the Senate and House of Representatives. The Senate is composed of 33 districts, each of which is represented by one senator. Each senator represents approximately 330,000 constituents. The House of Representatives is composed of 99 members. The Republican Party is the controlling party in both houses as of the 2020 election cycle. 
In order to be enacted into law, a bill must be adopted by both houses of the General Assembly and signed by the Governor. If the Governor vetoes a bill, the General Assembly can override the veto with a three-fifths supermajority of both houses. A bill will also become a law if the Governor fails to sign or veto it within 10 days of its being presented. The session laws are published in the official Laws of Ohio. These in turn have been codified in the Ohio Revised Code. The General Assembly, with the approval of the Governor, draws the U.S. congressional district lines for Ohio's 16 seats in the United States House of Representatives. The Ohio Apportionment Board draws state legislative district lines in Ohio. Judicial branch There are three levels of the Ohio state judiciary. The lowest level is the court of common pleas: each county maintains its own constitutionally mandated court of common pleas, which maintains jurisdiction over "all justiciable matters". The intermediate-level court system is the district court system. Twelve courts of appeals exist, each retaining jurisdiction over appeals from common pleas, municipal, and county courts in a set geographical area. A case heard in this system is decided by a three-judge panel, and each judge is elected. The state's highest-ranking court is the Ohio Supreme Court. A seven-justice panel composes the court, which, by its own discretion, hears appeals from the courts of appeals, and retains original jurisdiction over limited matters. Local government There are also several levels of local government in Ohio: counties, municipalities (cities and villages), townships, special districts and school districts. Ohio is divided into 88 counties. Ohio law defines a structure for county government, although counties may adopt charters for home rule. Summit County and Cuyahoga County have chosen an alternate form of government. 
The other counties have a government with a three-member board of county commissioners, a sheriff, coroner, auditor, treasurer, clerk of the court of common pleas, prosecutor, engineer, and recorder. There are two kinds of incorporated municipalities, 251 cities and 681 villages. If a municipality has five thousand or more residents as of the last United States Census, it is a city; otherwise it is a village. Municipalities have full home rule powers and may adopt a charter, ordinances, and resolutions for self-government. Each municipality chooses its own form of government, but most have elected mayors and city councils or city commissions. City governments provide much more extensive services than county governments, such as police forces and paid (as opposed to volunteer) fire departments. The entire area of the state is encompassed by townships. When the boundaries of a township are coterminous with the boundaries of a city or village, the township ceases to exist as a separate government (called a paper township). Townships are governed by a three-member board of township trustees. Townships may have limited home rule powers. There are more than 600 city, local, and exempted village school districts providing K-12 education in Ohio, as well as about four dozen joint vocational school districts which are separate from the K-12 districts. Each city school district, local school district, or exempted village school district is governed by an elected board of education. A school district previously under state supervision (municipal school district) may be governed by a board whose members either are elected or appointed by the mayor of the municipality containing the greatest portion of the district's area. Politics "Mother of presidents" Six U.S. presidents hailed from Ohio at the time of their elections, giving rise to its nickname "mother of presidents", a sobriquet it shares with Virginia. 
It is also termed "modern mother of presidents", in contrast to Virginia's status as the origin of presidents earlier in American history. Seven presidents were born in Ohio, making it second to Virginia's eight. Virginia-born William Henry Harrison lived most of his life in Ohio and is also buried there. Harrison conducted his political career while living on the family compound, founded by his father-in-law, John Cleves Symmes, in North Bend, Ohio. The seven presidents born in Ohio were Ulysses S. Grant (elected from Illinois), Rutherford B. Hayes, James A. Garfield, Benjamin Harrison (grandson of William Henry Harrison and elected from Indiana), William McKinley, William Howard Taft and Warren G. Harding. All seven were Republicans. Swing state Ohio is considered a swing state, regularly won by either Democratic or Republican candidates from election to election. As a swing state, Ohio is usually targeted by both major-party campaigns, especially in competitive elections. Pivotal in the election of 1888, Ohio has been a regular swing state since 1980. Additionally, Ohio is considered a bellwether. Historian R. Douglas Hurt asserts that not since Virginia "had a state made such a mark on national political affairs". The Economist notes that "This slice of the mid-west contains a bit of everything American—part north-eastern and part southern, part urban and part rural, part hardscrabble poverty and part booming suburb." Since 1896, Ohio has had only three misses in the general election (Thomas E. Dewey in 1944, Richard Nixon in 1960, and Donald Trump in 2020) and had the longest perfect streak of any state, voting for the winning presidential candidate in each election from 1964 to 2016, and in 33 of the 38 held since the Civil War. No Republican has ever won the presidency without winning Ohio. As of 2019, there are more than 7.8 million registered Ohioan voters, with 1.3 million Democrats and 1.9 million Republicans. 
Registered voters skew older, with a million more voters over 65 than between the ages of 18 and 24. Since the 2010 midterm elections, Ohio's voter demographic has leaned towards the Republican Party. The governor, Mike DeWine, is Republican, as well as all other non-judicial statewide elected officials, including Lieutenant Governor Jon A. Husted, Attorney General Dave Yost, State Auditor Keith Faber, Secretary of State Frank LaRose and State Treasurer Robert Sprague. In the Ohio State Senate the Republicans are the majority, 25–8, and in the Ohio House of Representatives the Republicans control the delegation 64–35. Losing two seats in the U.S. House of Representatives following the 2010 census, Ohio has had 16 seats for
a bill passed. The capital was then moved back to Chillicothe, which was the capital from 1812 to 1816. Finally, the capital was moved to Columbus, to place it near the geographic center of the state. Although many Native Americans had migrated west to evade American encroachment, others remained settled in the state, sometimes assimilating in part. Starting around 1809, the Shawnee pressed resistance to encroachment again. Under Chief Tecumseh, Tecumseh's War officially began in Ohio in 1811. When the War of 1812 began, the British decided to attack from Upper Canada into Ohio and merge their forces with the Shawnee. This continued until Tecumseh was killed at the Battle of the Thames in 1813. Most of the Shawnee, excluding the Pekowi in Southwest Ohio, were forcibly relocated west. Ohio played a key role in the War of 1812, as it was on the front line in the Western theater and the scene of several notable battles both on land and in Lake Erie. On September 10, 1813, the Battle of Lake Erie, one of the major battles, took place near Put-in-Bay, Ohio. The British eventually surrendered to Oliver Hazard Perry. Ultimately, the United States government used the Indian Removal Act of 1830 to force countless Native American tribes onto the Trail of Tears, emptying all the southern states except Florida of Native peoples. Because most tribes did not want to leave their own lands, and fearing further wars between Native tribes and American settlers, the government pushed all remaining Native tribes in the East, including those in Ohio, to migrate west against their will. In 1835, Ohio fought with the Michigan Territory in the Toledo War, a mostly bloodless boundary war over the Toledo Strip. Only one person was injured in the conflict. Congress intervened, making Michigan's admittance as a state conditional on ending the conflict. 
In exchange for giving up its claim to the Toledo Strip, Michigan was given the western two-thirds of the Upper Peninsula, in addition to the eastern third which was already considered part of the territory. Civil War and industrialization Ohio's central position and its population gave it an important place during the Civil War. The Ohio River was a vital artery for troop and supply movements, as were Ohio's railroads. Ohio's industry made it one of the most important states in the Union during the Civil War. Ohio contributed more soldiers per capita than any other state in the Union. In 1862, the state's morale was badly shaken in the aftermath of the Battle of Shiloh, a costly victory in which Ohio forces suffered 2,000 casualties. Later that year, when Confederate troops under the leadership of Stonewall Jackson threatened Washington, D.C., Ohio governor David Tod could still recruit 5,000 volunteers to provide three months of service. From July 13 to 26, 1863, towns along the Ohio River were attacked and ransacked in Morgan's Raid, starting in Harrison in the west and culminating in the Battle of Salineville near West Point in the far east. While this raid was overall insignificant to the Confederacy, it aroused fear among people in Ohio and Indiana, as it was the farthest advance of Southern troops in the war. Almost 35,000 Ohioans died in the conflict, and 30,000 were physically wounded. By the end of the Civil War, the Union's top three generals – Ulysses S. Grant, William Tecumseh Sherman, and Philip Sheridan – were all from Ohio. Throughout much of the 19th century, industry was rapidly introduced to complement an existing agricultural economy. One of the first iron manufacturing plants, Hopewell Furnace, opened near Youngstown in 1804. By the mid-19th century, 48 blast furnaces were operating in the state, most in its southern portions. 
Discovery of coal deposits aided the further development of the steel industry in the state, and by 1853 Cleveland was the third largest iron and steel producer in the country. The first Bessemer converter was purchased by the Cleveland Rolling Mill Company, which eventually became part of the U.S. Steel Corporation following the merger of Federal Steel Company and Carnegie Steel, the first billion-dollar American corporation. The first open-hearth furnace used for steel production was constructed by the Otis Steel Company in Cleveland, and by 1892, Ohio ranked as the second-largest steel-producing state behind Pennsylvania. Republic Steel was founded in Youngstown in 1899 and was at one point the nation's third-largest producer. Armco, now AK Steel, was founded in Middletown also in 1899. 20th century The state legislature officially adopted the flag of Ohio on May 9, 1902. Dayton natives Orville and Wilbur Wright made four brief flights at Kitty Hawk, North Carolina, on December 17, 1903, inventing the first successful airplane. Ohio was hit by its greatest natural disaster in the Great Flood of 1913, resulting in at least 428 fatalities and hundreds of millions of dollars in property damage, particularly around the Great Miami River basin. The National Football League was originally founded in Canton, Ohio, in 1920 as the American Professional Football Conference. It included Ohio League teams in five Ohio cities (Akron, Canton, Cleveland, Columbus, and Dayton), although none of these teams still exist. The first official game occurred on October 3, 1920, when the Dayton Triangles beat the Columbus Panhandles 14–0 in Dayton. Canton later became the home of the Pro Football Hall of Fame, which opened in 1963. During the 1930s, the Great Depression struck the state hard. By 1933, more than 40% of factory workers and 67% of construction workers were unemployed in Ohio. 
Approximately 50% of industrial workers in Cleveland and 80% in Toledo became unemployed, with the state unemployment rate reaching a high of 37.3%. American Jews watched the rise of Nazi Germany with apprehension. Cleveland residents Jerry Siegel and Joe Shuster created the Superman comic character in the spirit of the Jewish golem, and many of their comics portrayed Superman fighting and defeating the Nazis. Approximately 839,000 Ohioans served in the U.S. armed forces during World War II, of whom over 23,000 died or were missing in action. Artists, writers, musicians, and actors emerged from the state throughout the 20th century, often moving to other cities that were larger centers for their work. They included Zane Grey, Milton Caniff, George Bellows, Art Tatum, Roy Lichtenstein, and Roy Rogers. Alan Freed, who emerged from the swing dance culture in Cleveland, hosted the first live rock 'n' roll concert in Cleveland in 1952. Famous filmmakers from the state include Steven Spielberg, Chris Columbus, and the original Warner Brothers, who set up their first movie theater in Youngstown before the company relocated to California. The state produced many popular musicians, including Dean Martin, Doris Day, The O'Jays, Marilyn Manson, Dave Grohl, Devo, Macy Gray, and The Isley Brothers. Two Ohio astronauts completed significant milestones in the space race in the 1960s: John Glenn became the first American to orbit the Earth, and Neil Armstrong became the first human to walk on the Moon. In 1967, Carl Stokes was elected mayor of Cleveland, becoming the first African American mayor of one of the nation's 10 most populous cities. In 1970, an Ohio Army National Guard unit fired at students during an anti-war protest at Kent State University, killing four and wounding nine. The Guard had been called onto campus after several protests in and around campus had become violent, including a riot in downtown Kent and the burning of an ROTC building.
The main cause of the protests was the United States' invasion of Cambodia during the Vietnam War. Beginning in the 1980s, the state entered into international economic and resource cooperation treaties and organizations with other Midwestern states, as well as New York, Pennsylvania, Ontario, and Quebec, including the Great Lakes Charter, the Great Lakes Compact, and the Council of Great Lakes Governors. 21st century Ohio has been nicknamed the "fuel cell corridor" as an anchor of the region now called the "Green Belt," in reference to its growing renewable energy sector. Although the state experienced heavy manufacturing losses at the close of the 20th century and suffered during the Great Recession, it was rebounding by the following decade, ranking as the country's sixth-fastest-growing economy through the first half of 2010. Ohio's transition into the 21st century was symbolized by the Third Frontier program, spearheaded by governor Bob Taft around the start of the century. This program built on the agricultural and industrial pillars of the economy, dubbed the first and second frontiers, by aiding the growth of advanced technology industries, the third frontier. The results of this initiative were widely considered successful: it attracted 637 new high-tech companies and 55,000 new jobs to the state, with an average salary of $65,000, and had a $6.6 billion economic impact with an investment return ratio of 9:1. In 2010 the state won the International Economic Development Council's Excellence in Economic Development Award, celebrated as a national model of success.
Many of the state's former industrial centers turned to new industries: Akron became a center for polymer and biomedical research; Cincinnati, the state's largest mercantile hub; Columbus, a center for technological research and development, education, and insurance; Cleveland, a hub for regenerative medicine research and manufacturing; Dayton, an aerospace and defense hub; and Toledo, a national center for solar technology. Ohio was hit hard by the Great Recession and by manufacturing employment losses entering the 2010s. The recession cost the state 376,500 jobs, and the state recorded 89,053 foreclosures in 2009, a record. The median household income dropped 7% and the poverty rate ballooned to 13.5% by 2009. In 2015, Ohio's gross domestic product was $608.1 billion, the seventh-largest economy among the 50 states, accounting for 3.4% of U.S. GDP and 0.8% of world GDP. Geography Ohio's geographic location has proven to be an asset for economic growth and expansion. Because Ohio links the Northeast to the Midwest, much cargo and business traffic passes through its borders along its well-developed highways. Ohio has the nation's 10th-largest highway network and is within a one-day drive of 50% of North America's population and 70% of North America's manufacturing capacity. To the north, Ohio's Lake Erie coastline allows for numerous cargo ports such as Cleveland and Toledo. Ohio's southern border is defined by the Ohio River. Ohio's neighbors are Pennsylvania to the east, Michigan to the northwest, Lake Erie to the north, Indiana to the west, Kentucky to the south, and West Virginia to the southeast. Ohio's borders were defined by metes and bounds in the Enabling Act of 1802. Ohio is bounded by the Ohio River, but nearly all of the river itself belongs to Kentucky and West Virginia. In 1980, the U.S.
Supreme Court held that, based on the wording of the cession of territory by Virginia (which at the time included what is now Kentucky and West Virginia), the boundary between Ohio and Kentucky (and, by implication, West Virginia) is the northern low-water mark of the river as it existed in 1792. Ohio has only that portion of the river between the river's 1792 low-water mark and the present high-water mark. The border with Michigan has also changed, as a result of the Toledo War, to angle slightly northeast to the north shore of the mouth of the Maumee River. Much of Ohio features glaciated till plains, with an exceptionally flat area in the northwest known as the Great Black Swamp. This glaciated region in the northwest and central state is bordered to the east and southeast first by a belt known as the glaciated Allegheny Plateau, and then by another belt known as the unglaciated Allegheny Plateau. Most of Ohio is of low relief, but the unglaciated Allegheny Plateau features rugged hills and forests. The rugged southeastern quadrant of Ohio, stretching in an outward bow-like arc along the Ohio River from the West Virginia Panhandle to the outskirts of Cincinnati, forms a distinct socio-economic unit. Geologically similar to parts of West Virginia and southwestern Pennsylvania, this area's coal mining legacy, dependence on small pockets of old manufacturing establishments, and distinctive regional dialect set it off from the rest of the state. In 1965 the United States Congress passed the Appalachian Regional Development Act, an attempt to "address the persistent poverty and growing economic despair of the Appalachian Region". This act defines 29 Ohio counties as part of Appalachia. While one-third of Ohio's land mass is part of the federally defined Appalachian region, only 12.8% of Ohioans (1.476 million people) live there.
Significant rivers within the state include the Cuyahoga River, Great Miami River, Maumee River, Muskingum River, and Scioto River. The rivers in the northern part of the state drain into the northern Atlantic Ocean via Lake Erie and the St. Lawrence River, and the rivers in the southern part of the state drain into the Gulf of Mexico via the Ohio River and then the Mississippi. The worst weather disaster in Ohio history occurred along the Great Miami River in 1913. Known as the Great Dayton Flood, the entire Miami River watershed flooded, including the downtown business district of Dayton. As a result, the Miami Conservancy District was created as the first major flood plain engineering project in Ohio and the United States. Grand Lake St. Marys in the west-central part of the state was constructed as a supply of water for canals in the canal-building era of 1820–1850. This body of water was the largest artificial lake in the world when completed in 1845. Ohio's canal-building projects were not the economic fiasco that similar efforts were in other states. Some cities, such as Dayton, owe their industrial emergence to their location on canals, and as late as 1910 interior canals carried much of the bulk freight of the state. Climate The climate of Ohio is a humid continental climate (Köppen climate classification Dfa/Dfb) throughout most of the state, except in the extreme southern counties of Ohio's Bluegrass region, which lie on the northern periphery of the humid subtropical climate (Cfa) and the Upland South region of the United States. Summers are typically hot and humid throughout the state, while winters generally range from cool to cold. Precipitation in Ohio is moderate year-round. Severe weather is not uncommon in the state, although there are typically fewer tornado reports in Ohio than in states located in what is known as Tornado Alley.
Severe lake-effect snowstorms are also not uncommon on the southeast shore of Lake Erie, which is located in an area designated as the Snowbelt. Although Ohio is predominantly not in a subtropical climate, some warmer-climate flora and fauna reach well into the state. For instance, some trees with more southern ranges, such as the blackjack oak, Quercus marilandica, are found at their northernmost in Ohio just north of the Ohio River. Also evidencing this transition from a subtropical to a continental climate, several plants such as the Southern magnolia (Magnolia grandiflora), mimosa (Albizia julibrissin), crape myrtle, and even the occasional needle palm are hardy landscape materials regularly used as street, yard, and garden plantings in the Bluegrass region of Ohio, but these same plants will simply not thrive in much of the rest of the state. This climatic transition may be observed while traveling through Ohio on Interstate 75 from Cincinnati to Toledo; Cincinnati's common wall lizard is one of the few examples of permanent "subtropical" fauna in Ohio. Due to flooding that severely damaged highways, Governor Mike DeWine declared a state of emergency in 37 Ohio counties in 2019. Records The highest temperature recorded in Ohio was observed near Gallipolis on July 21, 1934. The lowest was recorded at Milligan on February 10, 1899, during the Great Blizzard of 1899. Earthquakes Although few have registered as noticeable to the average resident, more than 200 earthquakes with a magnitude of 2.0 or higher have occurred in Ohio since 1776. The Western Ohio Seismic Zone and a portion of the Southern Great Lakes Seismic Zone are located in the state, and numerous faults lie under the surface. The most substantial known earthquake in Ohio history was the Anna (Shelby County) earthquake, which occurred on March 9, 1937.
It was centered in western Ohio, had a magnitude of 5.4, and reached intensity VIII. Other significant earthquakes in Ohio include one of magnitude 4.8 near Lima on September 19, 1884; one of magnitude 4.2 near Portsmouth on May 17, 1901; and one of magnitude 5.0 in LeRoy Township in Lake County on January 31, 1986, which triggered 13 aftershocks of magnitude 0.5 to 2.4 over the following two months. Notable Ohio earthquakes in the 21st century include one occurring on December 31, 2011, northwest of Youngstown, and one occurring on June 10, 2019, north-northwest of Eastlake under Lake Erie; both registered a 4.0 magnitude. Major cities Ohio's three largest cities are Columbus, Cleveland, and Cincinnati, all of which anchor major metropolitan areas. Columbus, the state capital, is located near the geographic center of the state and is well known for Ohio State University. In 2019, the city had six corporations named to the U.S. Fortune 500 list: Alliance Data, Nationwide Mutual Insurance Company, American Electric Power, L Brands, Huntington Bancshares, and Cardinal Health in suburban Dublin. Other major employers include hospitals (among others, Wexner Medical Center and Nationwide Children's Hospital), hi-tech research and development including the Battelle Memorial Institute, information/library companies such as OCLC and Chemical Abstracts Service, steel processing and pressure cylinder manufacturer Worthington Industries, financial institutions such as JPMorgan Chase and Huntington Bancshares, and Owens Corning. Fast food chains Wendy's and White Castle are also headquartered in Columbus. Located in Northeast Ohio along the Lake Erie shore, Cleveland is characterized by its New England heritage, ethnic immigrant cultures, and history as a major American manufacturing and healthcare center.
It anchors the Cleveland–Akron–Canton Combined Statistical Area, the largest CSA in the state, of which the cities of Akron, Canton, Mansfield, and Youngstown are constituent parts. Northeast Ohio is known for major industrial companies Goodyear Tire and Rubber and Timken, top-ranked colleges Case Western Reserve University, Oberlin College, and Kent State University, the Cleveland Clinic, and cultural attractions including the Cleveland Museum of Art, the Big Five group Cleveland Orchestra, Playhouse Square, the Pro Football Hall of Fame, and the Rock and Roll Hall of Fame. Toledo and Lima are the major cities in Northwest Ohio, an area known for its glass-making industry and home to Owens Corning and Owens-Illinois, two Fortune 500 corporations. Cincinnati anchors Southwest Ohio and Metro Cincinnati, which also encompasses counties in the neighboring states of Kentucky and Indiana. The metropolitan area is home to Miami University and the University of Cincinnati, Cincinnati Union Terminal, the Cincinnati Symphony Orchestra, and various Fortune 500 companies including Procter & Gamble, Kroger, Macy's, Inc., and Fifth Third Bank. Dayton and Springfield are located in the Miami Valley, which is home to the University of Dayton, the Dayton Ballet, and the extensive Wright-Patterson Air Force Base. Steubenville is the only metropolitan city in Appalachian Ohio, the region that is home to Hocking Hills State Park. Metropolitan areas The Cincinnati metropolitan area extends into Kentucky and Indiana, the Steubenville metropolitan area extends into West Virginia, and the Youngstown metropolitan area extends into Pennsylvania. Other metropolitan areas that contain Ohio cities but are primarily in other states include: Huntington-Ashland, WV-KY-OH Metropolitan Statistical Area (Lawrence County) Wheeling, WV Metropolitan Statistical Area (Belmont County) Additionally, 30 Ohio cities function as centers of micropolitan areas, urban clusters smaller than metropolitan areas.
Many of these are included as part of larger combined statistical areas, as shown in the table above. Demographics Population From just over 45,000 residents in 1800, Ohio's population grew faster than 10% per decade (except for the 1940 census) until the 1970 census, which recorded just over 10.65 million Ohioans. Growth then slowed for the next four decades. The United States Census Bureau counted 11,808,848 residents in the 2020 census, a 2.4% increase since the 2010 United States census. Ohio's population growth lags that of the entire United States, and the share of white residents is higher than the U.S. average. Ohio's center of population is located in Morrow County, in the county seat of Mount Gilead, south and west of Ohio's population center in 1990. As of 2011, 27.6% of Ohio's children under the age of 1 belonged to minority groups. 6.2% of Ohio's population is under five years of age, 23.7% under 18 years of age, and 14.1% 65 or older. Females make up approximately 51.2% of the population. Birth data Note: Births in the table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin are not collected separately but are included in one Hispanic group; persons of Hispanic origin may be of any race. Ancestry In 2010, there were 469,700 foreign-born residents in Ohio, corresponding to 4.1% of the total population. Of these, 229,049 (2.0%) were naturalized US citizens and 240,699 (2.1%) were not. The largest groups were: Mexico (54,166), India (50,256), China (34,901), Germany (19,219), Philippines (16,410), United Kingdom (15,917), Canada (14,223), Russia (11,763), South Korea (11,307), and Ukraine (10,681).
Though predominantly white, Ohio has large black populations in all of its major metropolitan areas, a significant Hispanic population made up of Mexicans in Toledo and Columbus and Puerto Ricans in Cleveland and Columbus, and a significant and diverse Asian population in Columbus. The largest ancestry groups (which the census defines as not including racial terms) in the state are: 26.5% German, 14.1% Irish, 9.0% English, 6.4% Italian, 3.8% Polish, 2.5% French, 1.9% Scottish, 1.7% Hungarian, 1.6% Dutch, 1.5% Mexican, 1.2% Slovak, 1.1% Welsh, and 1.1% Scotch-Irish. Ancestries claimed by less than 1% of the population include Sub-Saharan African, Puerto Rican, Swiss, Swedish, Arab, Greek, Norwegian, Romanian, Austrian, Lithuanian, Finnish, West Indian, Portuguese, and Slovene. Languages About 6.7% of the population age 5 years and older reported speaking a language other than English, with 2.2% of the population speaking Spanish, 2.6% speaking other Indo-European languages, 1.1% speaking Asian and Austronesian languages, and 0.8% speaking other languages. Numerically: 10,100,586 spoke English, 239,229 Spanish, 55,970 German, 38,990 Chinese, 33,125 Arabic, and 32,019 French. In addition, 59,881 spoke a Slavic language and 42,673 spoke another West Germanic language according to the 2010 census. Ohio also had the nation's largest population of Slovene speakers, second-largest of Slovak speakers, second-largest of Pennsylvania Dutch (German) speakers, and third-largest of Serbian speakers. Religion According to a Pew Forum poll, as of 2014, 73% of Ohioans identified as Christian. Specifically, 29% of Ohio's population identified as Evangelical Protestant, 17% as Mainline Protestant, 7% as Historically Black Protestant, and 18% as Catholic. 22% of the population is unaffiliated with any religious body. Small minorities of Jews (1%), Jehovah's Witnesses (1%), Muslims (1%), Hindus (<1%), Buddhists (1%), Mormons (1%), and adherents of other faiths (1–1.5%) exist.
According to the Association of Religion Data Archives (ARDA), in 2010 the largest denominations by adherents were the Catholic Church with 1,992,567; the United Methodist Church with 496,232; the Evangelical Lutheran Church in America with 223,253; the Southern Baptist Convention with 171,000; the Christian Churches and Churches of Christ with 141,311; the United Church of Christ with 118,000; and the Presbyterian Church (USA) with 110,000. With about 80,000 adherents in 2020, Ohio has the second-largest Amish population of all U.S. states, behind only neighboring Pennsylvania. According to the same data, a majority of Ohioans, 56%, feel religion is "very important", 25% that it is "somewhat important", and 19% that religion is "not too important/not important at all". 38% of Ohioans indicate that they attend religious services at least once weekly, 32% occasionally, and 30% seldom or never. Economy According to the U.S. Census Bureau, total employment in Ohio in 2016 was 4,790,178. The total number of unique employer establishments was 252,201, while the total number of non-employer establishments was 785,833. In 2010, Ohio was ranked second in the country for best business climate by Site Selection magazine, based on a business-activity database. The state has also won three consecutive Governor's Cup awards from the magazine, based on business growth and developments. Ohio's gross domestic product (GDP) was $626 billion, ranking Ohio's economy as the seventh-largest of all fifty states and the District of Columbia. The Small Business & Entrepreneurship Council ranked the state No. 10 for best business-friendly tax systems in its Business Tax Index 2009, including a top corporate tax and capital gains rate that were both ranked No. 6 at 1.9%. Ohio was ranked No. 11 by the council for best friendly-policy states according to its Small Business Survival Index 2009. The Directorship's Boardroom Guide ranked the state No.
13 overall for best business climate, including No. 7 for best litigation climate. Forbes ranked the state No. 8 for best regulatory environment in 2009. Ohio has five of the top 115 colleges in the nation, according to U.S. News and World Report's 2010 rankings, and was ranked No. 8 by the same magazine in 2008 for best high schools. Ohio's unemployment rate stood at 4.5% as of February 2018, down from 10.7% in May 2010. The state still lacks 45,000 jobs compared to the pre-recession numbers of 2007. The labor force participation rate as of April 2015 was 63%, slightly above the national average. Ohio's per capita income stands at $34,874, its median household income is $58,642, and 13.1% of the population is below the poverty line. The manufacturing and financial activities sectors each compose 18.3% of Ohio's GDP, making them Ohio's largest industries by percentage of GDP. Ohio has the third-largest manufacturing workforce, behind California and Texas. Ohio has the largest bioscience sector in the Midwest and is a national leader in the "green" economy. Ohio is the largest producer in the country of plastics, rubber, fabricated metals, electrical equipment, and appliances. 5,212,000 Ohioans are employed for wages or salary. By employment, Ohio's largest sector is trade/transportation/utilities, which employs 1,010,000 Ohioans, or 19.4% of Ohio's workforce, while the health care and education sector employs 825,000 Ohioans (15.8%). Government employs 787,000 Ohioans (15.1%), manufacturing employs 669,000 Ohioans (12.9%), and professional and technical services employ 638,000 Ohioans (12.2%). Ohio's manufacturing sector is the third-largest of all fifty states in terms of gross domestic product. Fifty-nine of the United States' top 1,000 publicly traded companies (by revenue in 2008) are headquartered in Ohio, including Procter & Gamble, Goodyear Tire & Rubber, AK Steel, Timken, Abercrombie & Fitch, and Wendy's.
Ohio is also one of 41 states with its own lottery, the Ohio Lottery, which has contributed more than $26 billion to education since 1974. Transportation Ground travel Many major east–west transportation corridors go through Ohio. One of those pioneer routes, known in the early 20th century as "Main Market Route 3", was chosen in 1913 to become part of the historic Lincoln Highway, the first road across America, connecting New York City to San Francisco. In Ohio, the Lincoln Highway linked many towns and cities together, including Canton, Mansfield, Wooster, Lima, and Van Wert. The arrival of the Lincoln Highway in Ohio was a major influence on the development of the state. Upon the advent of the federal numbered highway system in 1926, the Lincoln Highway through Ohio became U.S. Route 30. Ohio is also home to a stretch of the Historic National Road, now U.S. Route 40. Ohio has a highly developed network of roads and interstate highways. Major east–west through routes include the Ohio Turnpike (I-80/I-90) in the north, I-76 through Akron to Pennsylvania, I-70 through Columbus and Dayton, and the Appalachian Highway (State Route 32) running from West Virginia to Cincinnati. Major north–south routes include I-75 in the west through Toledo, Dayton, and Cincinnati, I-71 through the middle of the state from Cleveland through Columbus and Cincinnati into Kentucky, and I-77 in the eastern part of the state from Cleveland through Akron, Canton, New Philadelphia
city centre
Orbital engine
Other uses
Orbital (The Culture), artificial worlds from Iain M. Banks's series of science fiction novels, the Culture
Orbital (band), an English electronic dance music duo
Orbital (1991 album)
Orbital (1993 album)
Orbital (comics), a Franco-Belgian science fiction comics series
Orbital spaceflight
Medicine and physiology
Orbit (anatomy), also known as the orbital bone
Orbitofrontal cortex, a part of the brain used for decision making
Business
Orbital Corporation, an Australian engine technology company
Orbital Sciences Corporation, a U.S. satellite launch and defense systems corporation
Orbital ATK, American aerospace manufacturer
For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe3C), as well as other metal and semimetal carbides (including "ionic" carbides, e.g., Al4C3 and CaC2, "covalent" carbides, e.g., B4C and SiC, and graphite intercalation compounds, e.g., KC8). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides (CO, CO2, and arguably, C3O2), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, (CN)2, BrCN, CNO−, etc.), and heavier analogs thereof (e.g., CP− 'cyaphide anion', CSe2, COS; although CS2 'carbon disulfide' is often classed as an organic solvent). Halides of carbon without hydrogen (e.g., CF4 and CClF3), phosgene (COCl2), carboranes, metal carbonyls (e.g., nickel carbonyl), mellitic anhydride (C12O9), and other exotic oxocarbons are also considered inorganic by some authorities. Nickel carbonyl (Ni(CO)4) and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide. Nickel carbonyl is typically classified as an organometallic compound, as it satisfies the broad definition that organometallic chemistry covers all compounds containing at least one carbon-to-metal covalent bond; it is debatable, however, whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe-C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both.
Metal complexes with organic ligands but no carbon-metal bonds (e.g., Cu(OAc)2) are not considered organometallic; instead they are classed as metalorganic. Likewise, it is also unclear whether metalorganic compounds should automatically be considered organic. The relatively narrow definition of organic compounds as those containing C-H bonds excludes compounds that are (historically and practically) considered organic. Neither urea nor oxalic acid is organic by this definition, yet they were two key compounds in the vitalism debate. The IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid. Other compounds lacking C-H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C-H bonds, is considered a possible organic substance in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite (Al2C6(COO)6·16H2O). A slightly broader definition of organic compound includes all compounds bearing C-H or C-C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF4 and CCl4 would be considered by this rule to be "inorganic", whereas CF3H, CHCl3, and C2Cl6 would be organic, though these compounds share many physical and chemical properties. Classification Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also
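The broader "C-H or C-C bond" criterion described above can be made concrete with a toy classifier. This is a minimal sketch for illustration only: molecules are hand-encoded as lists of bonds (pairs of element symbols), an assumed simplified representation rather than any real cheminformatics library, and it reproduces the arbitrary division noted in the text (CCl4 "inorganic", CHCl3 and C2Cl6 "organic").

```python
# Toy classifier for the "contains a C-H or C-C bond" definition of organic.
# Molecules are hand-encoded as lists of bonds (pairs of element symbols);
# this simplified representation is an assumption for demonstration only.

def is_organic(bonds: list[tuple[str, str]]) -> bool:
    """Organic under the broader definition: the molecule contains at
    least one carbon-hydrogen or carbon-carbon bond."""
    return any(sorted(b) in (["C", "H"], ["C", "C"]) for b in bonds)

# CCl4 (carbon tetrachloride): four C-Cl bonds, no C-H or C-C bond.
ccl4 = [("C", "Cl")] * 4
# CHCl3 (chloroform): one C-H bond plus three C-Cl bonds.
chcl3 = [("C", "H")] + [("C", "Cl")] * 3
# C2Cl6 (hexachloroethane): one C-C bond plus six C-Cl bonds.
c2cl6 = [("C", "C")] + [("C", "Cl")] * 6

print(is_organic(ccl4))   # False -> "inorganic" despite similar chemistry
print(is_organic(chcl3))  # True
print(is_organic(c2cl6))  # True
```

The point of the sketch is that the rule is purely syntactic: chemically similar halides fall on opposite sides of the line depending on a single bond.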
crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high-pressure and high-temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically. In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom. Definitions of organic vs inorganic For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and CO2), and cyanides are considered inorganic. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes, and carbon nanotubes, are also excluded because they are simple substances composed of only a single element and therefore are not generally considered to be chemical compounds. History Vitalism Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess. In 1810, Jöns Jacob Berzelius stated that "living things work by means of some mysterious vital force".
Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). Vitalism taught that the formation of these "organic" compounds was fundamentally different from that of the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories. Vitalism survived for a while even after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving Berzelius's type of vitalism. Modern classification and ambiguities Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon, even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure. The organic compound L-isoleucine presents features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, and covalent bonds from carbon to oxygen and to nitrogen.
As described in detail below, any definition of organic compound that uses simple, broadly applicable criteria turns out to be unsatisfactory to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered 'inorganic'. However, the list of substances so excluded varies from author to author. Still, it is generally
Oligopolies in countries with competition laws Oligopolies become "mature" when competing entities realize they can maximize profits through joint efforts designed to maximize price control by minimizing the influence of competition. As a result of operating in countries with enforced antitrust laws, oligopolists will operate under tacit collusion: collusion through a mutual understanding among the competitors of a market, without any direct communication or contact, that by collectively raising prices each participating competitor can achieve economic profits comparable to those achieved by a monopolist while avoiding any explicit breach of market regulations. Hence, the kinked demand curve for a joint profit-maximizing oligopoly industry can model the pricing decisions of oligopolists other than the price leader (the price leader being the entity that all other entities follow in terms of pricing decisions). This is because if an entity unilaterally raises the price of its good/service and competing entities do not follow, the entity that raised its price will lose significant market share as it faces the elastic upper segment of the demand curve. As the joint profit-maximizing efforts achieve greater economic profits for all participating entities, there is an incentive for an individual entity to "cheat" by expanding output to gain greater market share and profit. In the case of oligopolist cheating, when the incumbent entity discovers this breach in collusion, competitors in the market will retaliate by matching or dropping prices lower than the original drop. Hence, the market share originally gained by having dropped the price will be minimized or eliminated. This is why on the kinked demand curve model the lower segment of the demand curve is inelastic. As a result, in such markets price rigidity prevails. Modeling There is no single model describing the operation of an oligopolistic market. The variety and complexity of the models exist because two to ten firms can compete on the basis of price, quantity, technological innovations, marketing, and reputation. 
However, there are a series of simplified models that attempt to describe market behavior by considering certain circumstances. Some of the better-known models are the dominant firm model, the Cournot–Nash model, the Bertrand model and the kinked demand model. Cournot–Nash model The Cournot–Nash model is the simplest oligopoly model. The model assumes that there are two "equally positioned firms"; the firms compete on the basis of quantity rather than price and each firm makes an "output decision assuming that the other firm's behavior is fixed." The market demand curve is assumed to be linear and marginal costs are constant. To find the Nash equilibrium one determines how each firm reacts to a change in the output of the other firm. The path to equilibrium is a series of actions and reactions. The pattern continues until a point is reached where neither firm desires "to change what it is doing, given how it believes the other firm will react to any change." The equilibrium is the intersection of the two firms' reaction functions. The reaction function shows how one firm reacts to the quantity choice of the other firm. For example, assume that firm 1's demand function is P = (M − Q2) − Q1, where Q2 is the quantity produced by the other firm, Q1 is the amount produced by firm 1, and M = 60 is the demand intercept (the market size). Assume that marginal cost is CM = 12. Firm 1 wants to know its profit-maximizing quantity and price, and begins by following the profit-maximization rule of equating marginal revenue to marginal cost. Firm 1's total revenue function is RT = Q1P = Q1(M − Q2 − Q1) = MQ1 − Q1Q2 − Q1². The marginal revenue function is RM = M − Q2 − 2Q1. Setting RM = CM gives M − Q2 − 2Q1 = CM, so 2Q1 = (M − CM) − Q2, and therefore Q1 = (M − CM)/2 − Q2/2 = 24 − 0.5Q2 [1.1]. By symmetry, firm 2's condition is Q2 = (M − CM)/2 − Q1/2 = 24 − 0.5Q1 [1.2]. Equation 1.1 is the reaction function for firm 1. Equation 1.2 is the reaction function for firm 2. To determine the Nash equilibrium, solve the two equations simultaneously, which gives Q1 = Q2 = 16 and hence P = 60 − 16 − 16 = 28. 
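The reaction functions in this example can be checked numerically. The sketch below (plain Python, standard library only, using the example's values M = 60 and CM = 12) iterates each firm's best response until the quantities converge; each update halves the distance to the fixed point, so convergence is rapid:

```python
# Cournot duopoly: iterate each firm's best response until quantities converge.
# Inverse demand: P = M - Q1 - Q2, constant marginal cost CM (example's values).
M = 60   # demand intercept ("market size" in the example)
CM = 12  # marginal cost

def best_response(q_other):
    # From setting marginal revenue equal to CM: Q = (M - CM)/2 - q_other/2
    return (M - CM) / 2 - q_other / 2

q1, q2 = 0.0, 0.0
for _ in range(100):
    # Simultaneous update: both firms respond to last round's quantities.
    q1, q2 = best_response(q2), best_response(q1)

price = M - q1 - q2
print(q1, q2, price)  # converges to Q1 = Q2 = 16, P = 28
```

Solving the fixed point directly, q = 24 − 0.5q gives q = 16 for each firm, matching the iterative result.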
The equilibrium quantities can also be determined graphically. The equilibrium solution would be at the intersection of the two reaction functions. Note that if you graph the functions, the axes represent quantities. The reaction functions are not necessarily symmetric. The firms may face differing cost functions, in which case the reaction functions would not be identical, nor would the equilibrium quantities. Bertrand model The Bertrand model is essentially the Cournot–Nash model, except the strategic variable is price rather than quantity. The model assumptions are: there are two firms in the market; they produce a homogeneous product; they produce at a constant marginal cost; firms choose prices PA and PB simultaneously; firms' outputs are perfect substitutes; and sales are split evenly if PA = PB. The only Nash equilibrium is PA = PB = MC. Neither firm has any reason to change strategy: if a firm raises its price, it will lose all its customers; if a firm lowers its price to P < MC, it will lose money on every unit sold. The Bertrand equilibrium is therefore the same as the competitive result: each firm produces where P = marginal cost and there are zero profits. A generalization of the Bertrand model is the Bertrand–Edgeworth model, which allows for capacity constraints and a more general cost function. Oligopolistic market: Kinked demand curve model According to this model, each firm faces a demand curve kinked at the existing price. The conjectural assumptions of the model are: first, if the firm raises its price above the current existing price, competitors will not follow and the acting firm will lose market share; and second, if a firm lowers its price below the existing price, its competitors will follow to retain their market share and the firm's output will increase only marginally. 
In other words, an oligopolist's pricing logic is that competitors will match and respond to any price cut, retaliating to protect their market share, while they will stick with the current or initial price in the face of any price rise among competitors. If the assumptions hold, then: the firm's marginal revenue curve is discontinuous (or rather, not differentiable) and has a gap at the kink; for prices above the prevailing price the curve is relatively elastic; for prices below the kink the curve is relatively inelastic. The gap in the marginal revenue curve means that marginal costs can fluctuate without changing the equilibrium price and quantity; thus, prices tend to be rigid. Examples Many industries have been cited as oligopolistic, including civil aviation, agricultural pesticides, electricity, and platinum group metal mining. In most countries, the telecommunications sector is characterized by an oligopolistic market structure. Rail freight markets in the European Union have an oligopolistic structure. In the United States, industries that have been identified as oligopolistic include food processing, funeral services, sugar refining, beer making, pulp and paper making, and automobile manufacturing. Market power and market concentration can be estimated or quantified using several different tools and measurements, including the Lerner index, stochastic frontier analysis, and New Empirical Industrial Organization (NEIO) modeling, as well as the Herfindahl–Hirschman index. Demand curve In an oligopoly, firms operate under imperfect competition. With the fierce price competitiveness created by this sticky-upward demand curve, firms use non-price competition in order to accrue greater revenue and market share. "Kinked" demand curves are similar to traditional demand curves in that they are downward-sloping. They are distinguished by a hypothesized convex bend with a discontinuity at the bend, the "kink". 
Thus, the first derivative at that point is undefined and leads to a jump discontinuity in the marginal revenue curve. Classical economic theory assumes that a profit-maximizing producer with some market power (either due to oligopoly or monopolistic competition) will set marginal cost equal to marginal revenue. This idea can be envisioned graphically by the intersection of an upward-sloping marginal cost curve and a downward-sloping marginal revenue curve (because the more one sells, the lower the price must be, so the less a producer earns per unit). In classical theory, any change in the marginal cost structure (how much it costs to make each additional unit) or the marginal revenue structure (how much people will pay for each additional unit) will be immediately reflected in a new price and/or quantity sold of the item. This result does not occur if a "kink" exists. Because of this jump discontinuity in the marginal revenue curve, marginal costs could change without necessarily changing the price or quantity. The motivation behind this kink is the idea that in an oligopolistic or monopolistically competitive market, firms will not raise their prices because even a small price increase will lose many customers. This is because competitors will generally ignore price increases, with the hope of gaining a larger market share as a result of now having comparatively lower prices (price rigidity). However, even a large price decrease will gain only a few customers because such an action will begin a price war with other firms. The curve is, therefore, more price-elastic for price increases and less so for price decreases. Theory predicts that firms will enter the industry in the long run, since the market price for oligopolists is more stable or 'focal' in the long run under this kinked demand curve model.
characteristics such as homogeneous goods, stable demand, and few existing participants, which are prone to cartel formation. A behavioral approach, by contrast, is mainly applied once a cartel formation or agreement has been reached; authorities then examine firms' data to determine whether price variance is low or whether there has been a significant price increase or decrease. 
and retain it in pockets, or on long faulting subsurface ridges or volcanic dikes, water can collect and percolate to the surface. Any incidence of water is then used by migrating birds, which also pass seeds with their droppings; these grow at the water's edge, forming an oasis. The water can also be used to plant crops. Historical significance The location of oases has been of critical importance for trade and transportation routes in desert areas; caravans must travel via oases so that supplies of water and food can be replenished. Thus, political or military control of an oasis has in many cases meant control of trade on a particular route. For example, the oases of Awjila, Ghadames and Kufra, situated in modern-day Libya, have at various times been vital to both north–south and east–west trade in the Sahara Desert. The Silk Road across Central Asia also incorporated several oases. In North American history, oases have been less prominent because the desert regions are smaller; however, several areas in the deep southwestern United States have oases
regions that served as important links through the hot deserts and vast rural areas. While present-day desert cities like Las Vegas, Phoenix, Palm Springs, and Tucson are large modern cities, many of these locations were once small, isolated farming areas at which travelers through the western desert stopped for food and supplies. Even today, there are several roads that go through western deserts like