Hindi
Hindi (Devanagari: हिन्दी, IAST/ISO 15919: "Hindī", historically known as Hindui and Hindavi), or more precisely Modern Standard Hindi (Devanagari: मानक हिन्दी, IAST/ISO 15919: "Mānak Hindī"), is an Indo-Aryan language spoken in India. Modern Hindi is often described as a standardised and Sanskritised register of the Hindustani language, which itself is based primarily on the Khariboli dialect of Delhi and neighbouring areas of Northern India. Hindi, written in the Devanagari script, is one of the two official languages of the Government of India, along with the English language. It is an official language of 9 states and 3 Union Territories and additional official language of 3 states. It is one of the 22 scheduled languages of the Republic of India.
Hindi is the "lingua franca" of the Hindi belt and to a lesser extent other parts of India (usually in a simplified or pidginised variety such as Bazaar Hindustani or Haflong Hindi). Outside India, several other languages are recognised officially as "Hindi" but do not refer to the Standard Hindi language described here and instead descend from other dialects, such as Awadhi and Bhojpuri. Such languages include Fiji Hindi, which is official in Fiji, and Caribbean Hindustani, which is spoken in Trinidad and Tobago, Guyana, and Suriname. Apart from the script and formal vocabulary, standard Hindi is mutually intelligible with standard Urdu, another recognised register of Hindustani as both share a common colloquial base.
As a linguistic variety, Hindi is the fourth most-spoken first language in the world, after Mandarin, Spanish and English. Hindi alongside Urdu as Hindustani is the third most-spoken language in the world, after Mandarin and English.
The term "Hindī" originally was used to refer to inhabitants of the Indo-Gangetic Plain. It was borrowed from Classical Persian "Hindī" (Iranian Persian pronunciation: "Hendi"), meaning "of or belonging to "Hind" (India)" (hence, "Indian").
Another name "Hindavī" (हिंदवी) or "Hinduī" (हिंदुई) (from "of or belonging to the Hindu/Indian people") was often used in the past, for example by Amir Khusrow in his poetry.
The terms "Hindi" and "Hindu" trace back to Old Persian which derived these names from the Sanskrit name "Sindhu" (सिन्धु), referring to the river Indus. The Greek cognates of the same terms are "Indus" (for the river) and "India" (for the land of the river).
Like other Indo-Aryan languages, Hindi is a direct descendant of an early form of Vedic Sanskrit, through Sauraseni Prakrit and Śauraseni Apabhraṃśa (from Sanskrit "apabhraṃśa" "corrupt"), which emerged in the 7th century CE. After the arrival of Islamic administrative rule in northern India, Hindi acquired many loanwords from Persian, as well as Arabic.
Before the standardisation of Hindi on the Delhi dialect, various dialects and languages of the Hindi belt attained prominence through literary standardisation, such as Avadhi and Braj Bhasha. Early Hindi literature came about in the 12th and 13th centuries CE. This body of work included the early epics such as renditions of the "Dhola Maru" in the Marwari of Marwar, the "Prithviraj Raso" in the Braj Bhasha of Braj, and the works of Amir Khusrow in the dialect of Delhi.
Modern Standard Hindi is based on the Delhi dialect, the vernacular of Delhi and the surrounding region, which came to replace earlier prestige dialects such as Awadhi, Maithili (sometimes regarded as separate from the Hindi dialect continuum) and Braj. "Urdu" – considered another form of Hindustani – acquired linguistic prestige in the latter part of the Mughal period (1800s), and underwent significant Persian influence. Modern Hindi and its literary tradition evolved towards the end of the 18th century.
John Gilchrist was principally known for his study of the Hindustani language, which was adopted as the lingua franca of northern India (including what is now present-day Pakistan) by British colonists and indigenous people. He compiled and authored "An English-Hindustani Dictionary", "A Grammar of the Hindoostanee Language", "The Oriental Linguist", and many more. His lexicon of Hindustani was published in the Perso-Arabic script, Nāgarī script, and in Roman transliteration. He is also known for his role in the foundation of University College London and for endowing the Gilchrist Educational Trust.
In the late 19th century, a movement to further develop Hindi as a standardised form of Hindustani separate from Urdu took form. In 1881, Bihar accepted Hindi as its sole official language, replacing Urdu, and thus became the first state of India to adopt Hindi.
After independence, the government of India instituted the following conventions:
On 14 September 1949, the Constituent Assembly of India adopted Hindi written in the Devanagari script as the official language of the Republic of India, replacing Urdu's previous usage in British India. To this end, several stalwarts rallied and lobbied pan-India in favour of Hindi, most notably Beohar Rajendra Simha, along with Hazari Prasad Dwivedi, Kaka Kalelkar, Maithili Sharan Gupt and Seth Govind Das, who even debated in Parliament on this issue. The efforts came to fruition on 14 September 1949, the 50th birthday of Beohar Rajendra Simha, when Hindi was adopted as the official language. The date is now celebrated as Hindi Day.
Part XVII of the Indian Constitution deals with the official language of the Indian Commonwealth. Under Article 343, the official languages of the Union have been prescribed, which include Hindi in Devanagari script and English:
(1) The official language of the Union shall be Hindi in Devanagari script. The form of numerals to be used for the official purposes of the Union shall be the international form of Indian numerals.
(2) Notwithstanding anything in clause (1), for a period of fifteen years from the commencement of this Constitution, the English language shall continue to be used for all the official purposes of the Union for which it was being used immediately before such commencement: Provided that the President may, during the said period, by order authorise the use of the Hindi language in addition to the English language and of the Devanagari form of numerals in addition to the international form of Indian numerals for any of the official purposes of the Union.
It shall be the duty of the Union to promote the spread of the Hindi language, to develop it so that it may serve as a medium of expression for all the elements of the composite culture of India and to secure its enrichment by assimilating without interfering with its genius, the forms, style and expressions used in Hindustani and in the other languages of India specified in the Eighth Schedule, and by drawing, wherever necessary or desirable, for its vocabulary, primarily on Sanskrit and secondarily on other languages.
It was envisioned that Hindi would become the sole working language of the Union Government by 1965 (per directives in Article 344 (2) and Article 351), with state governments being free to function in the language of their own choice. However, widespread resistance to the imposition of Hindi on non-native speakers, especially in South India (such as those in Tamil Nadu), led to the passage of the Official Languages Act of 1963, which provided for the continued use of English indefinitely for all official purposes, although the constitutional directive for the Union Government to encourage the spread of Hindi was retained and has strongly influenced its policies.
At the state level, Hindi is the official language of the following Indian states: Bihar, Chhattisgarh, Haryana, Himachal Pradesh, Jharkhand, Madhya Pradesh, Mizoram, Rajasthan, Uttar Pradesh and Uttarakhand. It is one of the additional official languages of West Bengal. Each may also designate a "co-official language"; in Uttar Pradesh, for instance, depending on the political formation in power, this language is generally Urdu. Similarly, Hindi is accorded the status of official language in the following Union Territories: National Capital Territory, Andaman and Nicobar Islands and Dadra and Nagar Haveli and Daman and Diu.
National language status for Hindi is a long-debated theme. In 2010, the Gujarat High Court clarified that Hindi is not the national language of India because the constitution does not mention it as such.
Hindi is spoken as a first language by 77,569 people in Nepal according to the 2011 Nepal census, and by a further 1,225,950 people as a second language.
Outside Asia, the Awadhi language (an Eastern Hindi dialect) with influence from Bhojpuri, Bihari languages, Fijian and English is spoken in Fiji. It is an official language in Fiji as per the 1997 Constitution of Fiji, which referred to it as "Hindustani"; in the 2013 Constitution of Fiji, it is simply called "Fiji Hindi". It is spoken by 380,000 people in Fiji.
Hindi is the lingua franca of northern India (which contains the Hindi Belt), as well as an official language of the Government of India, along with English.
In Northeast India a pidgin known as Haflong Hindi has developed as a "lingua franca" for the people living in Haflong, Assam who speak other languages natively. In Arunachal Pradesh, Hindi emerged as a lingua franca among locals who speak over 50 dialects natively.
Hindi is quite easy to understand for many Pakistanis, who speak Urdu, which, like Hindi, is a standard register of the Hindustani language; additionally, the Indian media is widely viewed in Pakistan.
A sizeable population in Afghanistan, especially in Kabul, can also speak and understand Hindi-Urdu due to the popularity and influence of Bollywood films and songs in the region.
Hindi is also spoken by a large population of Madheshis (people with roots in northern India who migrated to Nepal over hundreds of years). Apart from this, Hindi is spoken by the large Indian diaspora which hails from, or has its origins in, the "Hindi Belt" of India. A substantially large North Indian diaspora lives in countries like the United States of America, the United Kingdom, the United Arab Emirates, Trinidad and Tobago, Guyana, Suriname, South Africa, Fiji and Mauritius, where it is natively spoken at home and among their own Hindustani-speaking communities. Outside India, Hindi speakers number 8 million in Nepal; 863,077 in the United States of America; 450,170 in Mauritius; 380,000 in Fiji; 250,292 in South Africa; 150,000 in Suriname; 100,000 in Uganda; 45,800 in the United Kingdom; 20,000 in New Zealand; 20,000 in Germany; 26,000 in Trinidad and Tobago; and 3,000 in Singapore.
Linguistically, Hindi and Urdu are two registers of the same language and are mutually intelligible. Hindi is written in the Devanagari script and contains more Sanskrit-derived words than Urdu, whereas Urdu is written in the Perso-Arabic script and uses more Arabic and Persian loanwords than does Hindi. However, both share a core vocabulary of native Prakrit and Sanskrit-derived words, with large numbers of Arabic and Persian loanwords. Because of this, as well as the fact that the two registers share an identical grammar, a consensus of linguists consider them to be two standardised forms of the same language, Hindustani or Hindi-Urdu. Hindi is the most commonly used official language in India. Urdu is the national language and "lingua franca" of Pakistan and is one of 22 official languages of India, also having official status in Uttar Pradesh, Jammu and Kashmir, and Delhi.
The comparison of Hindi and Urdu as separate languages is largely motivated by politics, namely the Indo-Pakistani rivalry.
Hindi is written in the Devanagari script, an abugida. Devanagari consists of 11 vowels and 33 consonants and is written from left to right. Unlike in Sanskrit, Devanagari is not entirely phonetic for Hindi, especially failing to mark the schwa dropping of spoken Standard Hindi.
The Government of India uses Hunterian transliteration as its official system of writing Hindi in the Latin script. Various other systems also exist, such as IAST, ITRANS and ISO 15919.
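A toy letter-by-letter transliterator illustrates how such romanisation schemes work. The sketch below covers only a handful of Devanagari characters and implements no official standard in full; note that a naive mapping retains the inherent schwa (yielding "mānaka" for मानक) that spoken Hindi drops, as noted above.

```python
# Minimal, illustrative Devanagari-to-Latin mapping (IAST-style).
# Only a few characters are covered; this is not ISO 15919 or Hunterian.
CONSONANTS = {"ह": "h", "न": "n", "द": "d", "म": "m", "क": "k"}
VOWEL_SIGNS = {"ि": "i", "ी": "ī", "ा": "ā", "े": "e"}
INDEPENDENT_VOWELS = {"अ": "a", "इ": "i", "आ": "ā"}
VIRAMA = "्"  # suppresses a consonant's inherent vowel

def transliterate(text: str) -> str:
    out = []
    for ch in text:
        if ch in CONSONANTS:
            # Devanagari consonants carry an inherent 'a' (schwa).
            out.append(CONSONANTS[ch] + "a")
        elif ch in VOWEL_SIGNS:
            # A vowel sign replaces the preceding inherent 'a'.
            if out and out[-1].endswith("a"):
                out[-1] = out[-1][:-1]
            out.append(VOWEL_SIGNS[ch])
        elif ch == VIRAMA:
            # Virama deletes the inherent 'a' (consonant cluster).
            if out and out[-1].endswith("a"):
                out[-1] = out[-1][:-1]
        elif ch in INDEPENDENT_VOWELS:
            out.append(INDEPENDENT_VOWELS[ch])
        else:
            out.append(ch)  # pass unknown characters through
    return "".join(out)

print(transliterate("हिन्दी"))  # hindī
print(transliterate("मानक"))   # mānaka (spoken Hindi: mānak)
```

Real systems differ mainly in how they handle conjuncts, nasalisation, and schwa deletion, which a character-by-character scheme like this cannot capture.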
Traditionally, Hindi words are divided into five principal categories according to their etymology: "tatsam" (words borrowed unchanged from Sanskrit), "ardhatatsam" (Sanskrit borrowings that have undergone partial sound change), "tadbhav" (words inherited from Sanskrit via Prakrit), "deshaj" (indigenous words not traceable to Sanskrit), and "videshī" (loanwords from foreign languages such as Persian, Arabic and English).
Hindi also makes extensive use of loan translation (calqueing) and occasionally phono-semantic matching of English.
Hindi has naturally inherited a large portion of its vocabulary from Śaurasenī Prākṛt, in the form of "tadbhava" words. This process usually involves compensatory lengthening of vowels preceding consonant clusters in Prakrit, e.g. Sanskrit "tīkṣṇa" > Prakrit "tikkha" > Hindi "tīkhā".
Much of Modern Standard Hindi's vocabulary is borrowed from Sanskrit as "tatsam" borrowings, especially in technical and academic fields. The formal Hindi standard, from which much of the Persian, Arabic and English vocabulary has been replaced by neologisms compounding "tatsam" words, is called "Śuddh Hindi" (pure Hindi), and is viewed as more prestigious than other, more colloquial forms of Hindi.
Excessive use of "tatsam" words sometimes creates problems for native speakers. They may have Sanskrit consonant clusters which do not exist in native Hindi, causing difficulties in pronunciation.
As a part of the process of Sanskritization, new words are coined using Sanskrit components to be used as replacements for supposedly foreign vocabulary. Usually these neologisms are calques of English words already adopted into spoken Hindi. Some terms such as "dūrbhāṣ" "telephone", literally "far-speech" and "dūrdarśan" "television", literally "far-sight" have even gained some currency in formal Hindi in the place of the English borrowings "(ṭeli)fon" and "ṭīvī".
Hindi also features significant Persian influence, standardised from spoken Hindustani. Early borrowings, beginning in the mid-12th century, were specific to Islam (e.g. "Muhammad", "islām") and so Persian was simply an intermediary for Arabic. Later, under the Delhi Sultanate and Mughal Empire, Persian became the primary administrative language in the Hindi heartland. Persian borrowings reached a heyday in the 17th century, pervading all aspects of life. Even grammatical constructs, namely the izafat, were assimilated into Hindi.
Post-Partition, the Indian government advocated a policy of Sanskritization, leading to a marginalisation of the Persian element in Hindi. However, many Persian words (e.g. "muśkil" "difficult", "bas" "enough", "havā" "air", "x(a)yāl" "thought") have remained entrenched in Modern Standard Hindi, and a larger amount are still used in Urdu poetry written in the Devanagari script.
Arabic also shows influence in Hindi, often via Persian but sometimes directly.
Hindi literature is broadly divided into four prominent forms or styles, being "Bhakti" (devotional – Kabir, Raskhan); "Śṛṇgār" (beauty – Keshav, Bihari); "Vīrgāthā" (epic); and "Ādhunik" (modern).
Medieval Hindi literature is marked by the influence of the Bhakti movement and the composition of long, epic poems. It was primarily written in other varieties of Hindi, particularly Avadhi and Braj Bhasha, but to a degree also in Delhavi, the basis for Modern Standard Hindi. During the British Raj, Hindustani became the prestige dialect.
"Chandrakanta", written by Devaki Nandan Khatri in 1888, is considered the first authentic work of prose in modern Hindi. The person who brought realism into Hindi prose literature was Munshi Premchand, who is considered the most revered figure in the world of Hindi fiction and the progressive movement. Literary, or "Sāhityik", Hindi was popularised by the writings of Swami Dayananda Saraswati, Bhartendu Harishchandra and others. The rising numbers of newspapers and magazines made Hindustani popular with the educated people.
The "Dvivedī Yug" ("Age of Dwivedi") in Hindi literature lasted from 1900 to 1918. It is named after Mahavir Prasad Dwivedi, who played a major role in establishing Modern Standard Hindi in poetry and broadening the acceptable subjects of Hindi poetry from the traditional ones of religion and romantic love.
In the 20th century, Hindi literature saw a romantic upsurge. This is known as "Chāyāvād" ("shadow-ism") and the literary figures belonging to this school are known as "Chāyāvādī". Jaishankar Prasad, Suryakant Tripathi 'Nirala', Mahadevi Varma and Sumitranandan Pant, are the four major "Chāyāvādī" poets.
"Uttar Ādhunik" is the post-modernist period of Hindi literature, marked by a questioning of early trends that copied the West as well as the excessive ornamentation of the "Chāyāvādī" movement, and by a return to simple language and natural themes.
The Hindi Wikipedia was the first Indian-language wiki to reach 100,000 articles. Hindi literature, music, and film have all been disseminated via the internet. In 2015, Google reported a 94% increase in Hindi-content consumption year-on-year, adding that 21% of users in India prefer content in Hindi. Many Hindi newspapers also offer digital editions.
The following is a sample text in High Hindi: Article 1 of the Universal Declaration of Human Rights (by the United Nations).
Huginn and Muninn
In Norse mythology, Huginn (from Old Norse "thought") and Muninn (Old Norse "memory" or "mind") are a pair of ravens that fly all over the world, Midgard, and bring information to the god Odin. Huginn and Muninn are attested in the "Poetic Edda", compiled in the 13th century from earlier traditional sources; in the "Prose Edda" and "Heimskringla", written in the 13th century by Snorri Sturluson; in the "Third Grammatical Treatise", compiled in the 13th century by Óláfr Þórðarson; and in the poetry of skalds. The names of the ravens are sometimes modernly anglicized as Hugin and Munin.
In the "Poetic Edda", a disguised Odin expresses that he fears that they may not return from their daily flights. The "Prose Edda" explains that Odin is referred to as "raven-god" due to his association with Huginn and Muninn. In the "Prose Edda" and the "Third Grammatical Treatise", the two ravens are described as perching on Odin's shoulders. "Heimskringla" details that Odin gave Huginn and Muninn the ability to speak.
Examples of artifacts that may depict Odin with one of the ravens include Migration Period golden bracteates, Vendel era helmet plates, a pair of identical Germanic Iron Age bird-shaped brooches, Viking Age objects depicting a moustached man wearing a helmet, and a portion of a 10th- or 11th-century stone cross. Huginn and Muninn's role as Odin's messengers has been linked to shamanic practices, the Norse raven banner, general raven symbolism among the Germanic peoples, and the Norse concepts of the fylgja and the hamingja.
In the "Poetic Edda" poem "Grímnismál", the god Odin (disguised as "Grímnir") provides the young Agnarr with information about Odin's companions. He tells the prince about Odin's wolves Geri and Freki, and, in the next stanza of the poem, states that Huginn and Muninn fly daily across the entire world, Midgard. Grímnir says that he worries Huginn may not come back, yet more does he fear for Muninn:
In the "Prose Edda" book "Gylfaginning" (chapter 38), the enthroned figure of High tells Gangleri (king Gylfi in disguise) that two ravens named Huginn and Muninn sit on Odin's shoulders. The ravens tell Odin everything they see and hear. Odin sends Huginn and Muninn out at dawn, and the birds fly all over the world before returning at dinner-time. As a result, Odin is kept informed of many events. High adds that it is from this association that Odin is referred to as "raven-god". The above-mentioned stanza from "Grímnismál" is then quoted.
In the "Prose Edda" book "Skáldskaparmál" (chapter 60), Huginn and Muninn appear in a list of poetic names for ravens. In the same chapter, excerpts from a work by the skald Einarr Skúlason are provided. In these excerpts Muninn is referenced in a common noun for 'raven' and Huginn is referenced in a kenning for 'carrion'.
In the "Heimskringla" book "Ynglinga saga", a euhemerized account of the life of Odin is provided. Chapter 7 describes that Odin had two ravens, and upon these ravens he bestowed the gift of speech. These ravens flew all over the land and brought him information, causing Odin to become "very wise in his lore."
In the "Third Grammatical Treatise" an anonymous verse is recorded that mentions the ravens flying from Odin's shoulders; Huginn seeking hanged men, and Muninn slain bodies. The verse reads:
Migration Period (5th and 6th centuries CE) gold bracteates (types A, B, and C) feature a depiction of a human figure above a horse, holding a spear and flanked by one or more often two birds. The presence of the birds has led to the iconographic identification of the human figure as the god Odin, flanked by Huginn and Muninn. Like Snorri's "Prose Edda" description of the ravens, a bird is sometimes depicted at the ear of the human, or at the ear of the horse. Bracteates have been found in Denmark, Sweden, Norway and, in smaller numbers, England and areas south of Denmark. Austrian Germanist Rudolf Simek states that these bracteates may depict Odin and his ravens healing a horse and may indicate that the birds were originally not simply his battlefield companions but also "Odin's helpers in his veterinary function."
Vendel era helmet plates (from the 6th or 7th century) found in a grave in Sweden depict a helmeted figure holding a spear and a shield while riding a horse, flanked by two birds. The plate has been interpreted as Odin accompanied by two birds: his ravens.
A pair of identical Germanic Iron Age bird-shaped brooches from Bejsebakke in northern Denmark may be depictions of Huginn and Muninn. The back of each bird features a mask motif, and the feet of the birds are shaped like the heads of animals. The feathers of the birds are also composed of animal heads. Together, the animal heads on the feathers form a mask on the back of the bird. The birds have powerful beaks and fan-shaped tails, indicating that they are ravens. The brooches were intended to be worn on each shoulder, after Germanic Iron Age fashion. Archaeologist Peter Vang Petersen comments that while the symbolism of the brooches is open to debate, the shape of the beaks and tail feathers confirm that the brooch depictions are ravens. Petersen notes that "raven-shaped ornaments worn as a pair, after the fashion of the day, one on each shoulder, makes one's thoughts turn towards Odin's ravens and the cult of Odin in the Germanic Iron Age." Petersen says that Odin is associated with disguise and that the masks on the ravens may be portraits of Odin.
The Oseberg tapestry fragments, discovered within the Viking Age Oseberg ship burial in Norway, feature a scene containing two black birds hovering over a horse, possibly originally leading a wagon (as a part of a procession of horse-led wagons on the tapestry). In her examination of the tapestry, scholar Anne Stine Ingstad interprets these birds as Huginn and Muninn flying over a covered cart containing an image of Odin, drawing comparison with the images of Nerthus attested by Tacitus in the 1st century CE.
Excavations in Ribe in Denmark have recovered a Viking Age lead metal-caster's mould and 11 identical casting-moulds. These objects depict a moustached man wearing a helmet that features two head-ornaments. Archaeologist Stig Jensen proposes that these ornaments should be interpreted as Huginn and Muninn, and the wearer as Odin. He notes that "similar depictions occur everywhere the Vikings went—from eastern England to Russia and naturally also in the rest of Scandinavia."
A portion of a partly surviving runestone cross erected at Kirk Andreas on the Isle of Man depicts a bearded human holding a spear downward at a wolf, his right foot in its mouth, and a large bird on his shoulder. Andy Orchard comments that this bird may be either Huginn or Muninn. Rundata dates the cross to 940, while Pluskowski dates it to the 11th century. This depiction has been interpreted as Odin, with a raven or eagle at his shoulder, being consumed by the monstrous wolf Fenrir during the events of Ragnarök.
In November 2009, the Roskilde Museum announced the discovery and subsequent display of a niello-inlaid silver figurine found in Lejre, Denmark, which they dubbed "Odin from Lejre". The silver object depicts a person sitting on a throne. The throne features the heads of animals and is flanked by two birds. The Roskilde Museum identifies the figure as Odin sitting on his throne Hliðskjálf, flanked by the ravens Huginn and Muninn.
Scholars have linked Odin's relation to Huginn and Muninn to shamanic practice. John Lindow relates Odin's ability to send his "thought" (Huginn) and "mind" (Muninn) to the trance-state journey of shamans. Lindow says the "Grímnismál" stanza where Odin worries about the return of Huginn and Muninn "would be consistent with the danger that the shaman faces on the trance-state journey."
Rudolf Simek is critical of the approach, stating that "attempts have been made to interpret Odin's ravens as a personification of the god's intellectual powers, but this can only be assumed from the names Huginn and Muninn themselves which were unlikely to have been invented much before the 9th or 10th centuries" yet that the two ravens, as Odin's companions, appear to derive from much earlier times. Instead, Simek connects Huginn and Muninn with wider raven symbolism in the Germanic world, including the raven banner (described in English chronicles and Scandinavian sagas), a banner which was woven in a method that allowed it, when fluttering in the wind, to appear as if the raven depicted upon it was beating its wings.
Anthony Winterbourne connects Huginn and Muninn to the Norse concepts of the fylgja—a concept with three characteristics; shape-shifting abilities, good fortune, and the guardian spirit—and the hamingja—the ghostly double of a person that may appear in the form of an animal. Winterbourne states that "The shaman's journey through the different parts of the cosmos is symbolized by the "hamingja" concept of the shape-shifting soul, and gains another symbolic dimension for the Norse soul in the account of Oðin's ravens, Huginn and Muninn." In response to Simek's criticism of attempts to interpret the ravens "philosophically", Winterbourne says that "such speculations [...] simply strengthen the conceptual significance made plausible by other features of the mythology" and that the names "Huginn" and "Muninn" "demand more explanation than is usually provided."
The "Heliand", an Old Saxon adaptation of the New Testament from the 9th century, differs from the New Testament in that an explicit reference is made to a dove sitting on the shoulder of Christ. Regarding this, G. Ronald Murphy says "In placing the powerful white dove not just above Christ, but right on his shoulder, the "Heliand" author has portrayed Christ, not only as the Son of the All-Ruler, but also as a new Woden. This deliberate image of Christ triumphantly astride the land with the magnificent bird on his shoulders (the author is perhaps a bit embarrassed that the bird is an unwarlike dove!) is an image intended to calm the fears and longings of those who mourn the loss of Woden and who want to return to the old religion's symbols and ways. With this image, Christ becomes a Germanic god, one into whose ears the Spirit of the Almighty whispers".
Bernd Heinrich theorizes that Huginn and Muninn, along with Odin and his wolves Geri and Freki, reflect a symbiosis observed in the natural world among ravens, wolves, and humans on the hunt.
Heat engine
In thermodynamics and engineering, a heat engine is a system that converts heat or thermal energy (and chemical energy) to mechanical energy, which can then be used to do mechanical work. It does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat source generates thermal energy that brings the working substance to the high temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a low temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag.
In general an engine converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications.
Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models.
In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two.
In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 K, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature (kelvins).
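This verbal rule is the Carnot bound, and can be sketched numerically. The function name and temperatures below are illustrative, not taken from the text:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical (Carnot) efficiency between reservoirs at
    absolute temperatures t_hot_k and t_cold_k, both in kelvins."""
    if not (t_hot_k > t_cold_k > 0):
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return (t_hot_k - t_cold_k) / t_hot_k

# With the cold sink pinned near ambient (~300 K), raising the source
# temperature raises the efficiency ceiling, as described above.
print(carnot_efficiency(600.0, 300.0))  # 0.5
print(carnot_efficiency(900.0, 300.0))  # ~0.667
```

Since the cold end is fixed by the environment, the only practical lever is the hot-end temperature, which is why material limits dominate engine design.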
The efficiency of various heat engines proposed or used today has a large range.
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency.
It is important to note that although some cycles have a typical combustion location (internal or external), they often can be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Brayton cycle. In addition, externally heated engines can often be implemented in open or closed cycles.
Everyday examples of heat engines include the thermal power station, internal combustion engine and steam locomotive. All of these heat engines are powered by the expansion of heated gases.
Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, when distributing heat around the globe.
A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy.
In these cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
In these cycles and engines the working fluid is always a liquid:
A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.
Refrigeration cycles include:
The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.
Mesoscopic heat engines are nanoscale devices that process heat fluxes and perform useful work at small scales. Potential applications include, for example, electric cooling devices.
In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise.
There is an exact equality that relates the average of exponentials of the work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality, and is also known as the Carnot cycle equality.
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.
From the laws of thermodynamics, after a completed cycle:
In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and delivering the rest to the cold temperature heat sink.
In general, the efficiency of a given heat transfer process (whether it be a refrigerator, a heat pump or an engine) is defined informally by the ratio of "what is taken out" to "what is put in".
In the case of an engine, one desires to extract work and puts in a heat transfer.
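Over a complete cycle, the bookkeeping above amounts to: work out equals heat absorbed minus heat rejected, and efficiency is the ratio of work out to heat in. A minimal numerical sketch (the figures are made up purely for illustration):

```python
q_hot = 1000.0             # heat absorbed from the hot source (J)
q_cold = 600.0             # heat rejected to the cold sink (J)

work = q_hot - q_cold      # energy balance over one full cycle
efficiency = work / q_hot  # "what is taken out" / "what is put in"

print(work)        # 400.0 J of useful work
print(efficiency)  # 0.4
```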
The "theoretical" maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, this is because in reversible processes, the change in entropy of the cold reservoir is the negative of that of the hot reservoir (i.e., ΔS_c = −ΔS_h), keeping the overall change of entropy zero. Thus:

η_max = 1 − T_c/T_h

where T_h is the absolute temperature of the hot source and T_c that of the cold sink, usually measured in kelvins. Note that ΔS_c is positive while ΔS_h is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy) one, decreasing the entropy of the heat source and increasing that of the heat sink.
The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of "any" thermodynamic cycle.
Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.
Figure 2 and Figure 3 show variations on Carnot cycle efficiency. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired.
A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the cycle is identical to the Carnot cycle except that the two processes of heat transfer are "not" reversible (Callen 1985):

η = 1 − √(T_c/T_h)

This is the Curzon–Ahlborn efficiency, the efficiency at maximum power output.
This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics):
As shown, the endoreversible efficiency models the observed efficiencies much more closely.
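The gap between the Carnot bound and the endoreversible (Curzon–Ahlborn) estimate can be made concrete numerically; the plant temperatures below are assumptions chosen only for the sake of the comparison:

```python
from math import sqrt

def carnot(t_hot: float, t_cold: float) -> float:
    # Reversible (Carnot) upper bound
    return 1.0 - t_cold / t_hot

def endoreversible(t_hot: float, t_cold: float) -> float:
    # Curzon–Ahlborn efficiency: Carnot cycle with irreversible
    # heat transfer at the two reservoirs (Callen 1985)
    return 1.0 - sqrt(t_cold / t_hot)

t_cold, t_hot = 298.0, 838.0   # assumed steam-plant reservoirs (K)
print(f"Carnot:         {carnot(t_hot, t_cold):.1%}")
print(f"Endoreversible: {endoreversible(t_hot, t_cold):.1%}")
```

With these numbers the Carnot bound comes out around 64% while the endoreversible estimate is near 40%, much closer to the efficiencies in the mid-30s typical of real steam plants.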
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.
Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Each process is one of the following:
Heimdallr
In Norse mythology, Heimdallr is a god who possesses the resounding horn Gjallarhorn, owns the golden-maned horse Gulltoppr, is called the shining god and the whitest of the gods, has gold teeth, and is the son of Nine Mothers (who may represent personified waves). Heimdallr is attested as possessing foreknowledge, keen eyesight and hearing, and keeps watch for invaders and the onset of Ragnarök while drinking fine mead in his dwelling Himinbjörg, located where the burning rainbow bridge Bifröst meets the sky. Heimdallr is said to be the originator of social classes among humanity and once regained Freyja's treasured possession Brísingamen while doing battle in the shape of a seal with Loki. Heimdallr and Loki are foretold to kill one another during the events of Ragnarök. Heimdallr is additionally referred to as Rig, Hallinskiði, Gullintanni, and Vindlér or Vindhlér.
Heimdallr is attested in the "Poetic Edda", compiled in the 13th century from earlier traditional material; in the "Prose Edda" and "Heimskringla", both written in the 13th century by Snorri Sturluson; in the poetry of skalds; and on an Old Norse runic inscription found in England. Two lines of an otherwise lost poem about the god, "Heimdalargaldr", survive. Due to the problematic and enigmatic nature of these attestations, scholars have produced various theories about the nature of the god, including his apparent relation to rams, that he may be a personification of or connected to the world tree Yggdrasil, and potential Indo-European cognates.
The etymology of the name is obscure, but 'the one who illuminates the world' has been proposed. "Heimdallr" may be connected to "Mardöll", one of Freyja's names. "Heimdallr" and its variants are sometimes modernly anglicized as Heimdall (; with the nominative "-r" dropped).
Heimdallr is attested as having three other names; "Hallinskiði", "Gullintanni", and "Vindlér" or "Vindhlér". The name "Hallinskiði" is obscure, but has resulted in a series of attempts at deciphering it. "Gullintanni" literally means 'the one with the golden teeth'. "Vindlér" (or "Vindhlér") translates as either 'the one protecting against the wind' or 'wind-sea'. All three have resulted in numerous theories about the god.
A lead spindle whorl bearing an Old Norse Younger Futhark inscription that mentions Heimdallr was discovered in Saltfleetby, England on September 1, 2010. The spindle whorl itself is dated from the year 1000 to 1100 AD. On the inscription, the god Heimdallr is mentioned alongside the god Odin and Þjálfi, a name of one of the god Thor's servants. Regarding the inscription reading, John Hines of Cardiff University comments that there is "quite an essay to be written over the uncertainties of translation and identification here; what are clear, and very important, are the names of two of the Norse gods on the side, Odin and Heimdallr, while Þjalfi (masculine, not the feminine in -a) is the recorded name of a servant of the god Thor."
In the "Poetic Edda", Heimdallr is attested in six poems; "Völuspá", "Grímnismál", "Lokasenna", "Þrymskviða", "Rígsþula", and "Hrafnagaldr Óðins".
Heimdallr is mentioned thrice in "Völuspá". In the first stanza of the poem, the undead völva reciting the poem calls out for listeners to be silent and refers to Heimdallr:
This stanza has led to various scholarly interpretations. The "holy races" have been considered variously as either humanity or the gods. The notion of humanity as "Heimdallr's sons" is otherwise unattested and has also resulted in various interpretations. Some scholars have pointed to the prose introduction to the poem "Rígsþula", where Heimdallr is said to have once gone about people, slept between couples, and so doled out classes among them (see "Rígsthula" section below).
Later in "Völuspá", the völva foresees the events of Ragnarök and the role in which Heimdallr and Gjallarhorn will play at its onset; Heimdallr will raise his horn and blow loudly. Due to manuscript differences, translations of the stanza vary:
Regarding this stanza, scholar Andy Orchard comments that the name "Gjallarhorn" may here mean "horn of the river Gjöll" as "Gjöll is the name of one of the rivers of the Underworld, whence much wisdom is held to derive", but notes that in the poem "Grímnismál" Heimdallr is said to drink fine mead in his heavenly home Himinbjörg.
Earlier in the same poem, the völva mentions a scenario involving the hearing or horn (depending on translation of the Old Norse noun "hljóð"—translations bolded below for the purpose of illustration) of the god Heimdallr:
Scholar Paul Schach comments that the stanzas in this section of "Völuspá" are "all very mysterious and obscure, as it was perhaps meant to be". Schach details that "Heimdallar hljóð" has aroused much speculation. Snorri [in the "Prose Edda"] seems to have confused this word with "gjallarhorn", but there is otherwise no attestation of the use of "hljóð" in the sense of 'horn' in Icelandic. Various scholars have read this as "hearing" rather than "horn".
Scholar Carolyne Larrington comments that if "hearing" rather than "horn" is understood to appear in this stanza, the stanza indicates that Heimdallr, like Odin, has left a body part in the well; his ear. Larrington says that "Odin exchanged one of his eyes for wisdom from Mimir, guardian of the well, while Heimdall seems to have forfeited his ear."
In the poem "Grímnismál", Odin (disguised as "Grímnir"), tortured, starved and thirsty, tells the young Agnar of a number of mythological locations. The eighth location he mentions is Himinbjörg, where he says that Heimdallr drinks fine mead:
Regarding the above stanza, Henry Adams Bellows comments that "in this stanza the two functions of Heimdall—as father of humanity [ . . . ] and as warder of the gods—seem both to be mentioned, but the second line in the manuscripts is apparently in bad shape, and in the editions it is more or less conjecture".
In the poem "Lokasenna", Loki flyts with various gods who have met together to feast. At one point during the exchanges, the god Heimdallr says that Loki is drunk and witless, and asks Loki why he won't stop speaking. Loki tells Heimdallr to be silent, that he was fated a "hateful life", that Heimdallr must always have a muddy back, and that he must serve as watchman of the gods. The goddess Skaði interjects and the flyting continues in turn.
The poem "Þrymskviða" tells of Thor's loss of his hammer, Mjöllnir, to the jötnar and quest to get it back. At one point in the tale, the gods gather at the thing and debate how to get Thor's hammer back from the jötnar, who demand the beautiful goddess Freyja in return for it. Heimdallr advises that they simply dress Thor up as Freyja, during which he is described as "hvítastr ása" (translations of the phrase vary below) and is said to have foresight like the Vanir, a group of gods:
Regarding Heimdallr's status as "hvítastr ása" (variously translated above as "brightest" (Thorpe), "whitest" (Bellows), and "most glittering" (Dodds)) and the comparison to the Vanir, scholar John Lindow comments that there are no other indications of Heimdallr being considered among the Vanir, and that Heimdallr's status as "hvítastr ása" has not been explained.
The introductory prose to the poem "Rígsþula" says that "people say in the old stories" that Heimdallr, described as a god among the Æsir, once fared on a journey. Heimdallr wandered along a seashore, and referred to himself as "Rígr". In the poem, Rígr, who is described as a wise and powerful god, walks in the middle of roads on his way to steads, where he meets a variety of couples and dines with them, giving them advice and spending three nights at a time between them in their bed. The wives of the couples become pregnant, and from them come the various classes of humanity. Eventually a warrior home produces a promising boy, and as the boy grows older, Rígr comes out of a thicket, teaches the boy runes, gives him a name, and proclaims him to be his son. Rígr tells him to strike out and get land for himself. The boy does so, and so becomes a great war leader with many estates. He marries a beautiful woman and the two have many children and are happy. One of the children eventually becomes so skilled that he is able to share in runic knowledge with Heimdallr, and so earns the title of "Rígr" himself. The poem continues without further mention of the god.
In the "Prose Edda", Heimdallr is mentioned in the books "Gylfaginning", "Skáldskaparmál", and "Háttatal". In "Gylfaginning", the enthroned figure of High tells the disguised mythical king Gangleri of various gods, and, in chapter 25, mentions Heimdallr. High says that Heimdallr is known as "the white As", is "great and holy", and that nine maidens, all sisters, gave birth to him. Heimdallr is called "Hallinskiði" and "Gullintanni", and he has gold teeth. High continues that Heimdallr lives in "a place" called Himinbjörg and that it is near Bifröst. Heimdallr is the watchman of the gods, and he sits on the edge of heaven to guard the Bifröst bridge from the berg jötnar. Heimdallr requires less sleep than a bird, can see at night just as well as if it were day, and for over a hundred leagues. Heimdallr's hearing is also quite keen; he can hear grass as it grows on the earth, wool as it grows on sheep, and anything louder. Heimdallr possesses a trumpet, Gjallarhorn, that, when blown, can be heard in all worlds, and "the head is referred to as Heimdall's sword". High then quotes the above-mentioned "Grímnismál" stanza about Himinbjörg and provides two lines from the otherwise lost poem about Heimdallr, "Heimdalargaldr", in which Heimdallr proclaims himself to be the son of Nine Mothers.
In chapter 49, High tells of the god Baldr's funeral procession. Various deities are mentioned as having attended, including Heimdallr, who there rode his horse Gulltopr.
In chapter 51, High foretells the events of Ragnarök. After the enemies of the gods will gather at the plain Vígríðr, Heimdallr will stand and mightily blow into Gjallarhorn. The gods will awake and assemble together at the thing. At the end of the battle between various gods and their enemies, Heimdallr will face Loki and they will kill one another. After, the world will be engulfed in flames. High then quotes the above-mentioned stanza regarding Heimdallr raising his horn in "Völuspá".
At the beginning of "Skáldskaparmál", Heimdallr is mentioned as having attended a banquet in Asgard with various other deities. Later in the book, "Húsdrápa", a poem by 10th century skald Úlfr Uggason, is cited, during which Heimdallr is described as having ridden to Baldr's funeral pyre.
In chapter 8, means of referring to Heimdallr are provided; "son of nine mothers", "guardian of the gods", "the white As" (see "Poetic Edda" discussion regarding "hvítastr ása" above), "Loki's enemy", and "recoverer of Freyja's necklace". The section adds that the poem "Heimdalargaldr" is about him, and that, since the poem, "the head has been called Heimdall's doom: man's doom is an expression for sword". Heimdallr is the owner of Gulltoppr, is also known as Vindhlér, and is a son of Odin. Heimdallr visits Vágasker and Singasteinn and there vied with Loki for Brísingamen. According to the chapter, the skald Úlfr Uggason composed a large section of his "Húsdrápa" about these events and that "Húsdrápa" says that the two were in the shape of seals. A few chapters later, ways of referring to Loki are provided, including "wrangler with Heimdall and Skadi", and a section of Úlfr Uggason's "Húsdrápa" is then provided in reference:
The chapter points out that in the above "Húsdrápa" section Heimdallr is said to be the son of nine mothers.
Heimdallr is mentioned once in "Háttatal". There, in a composition by Snorri Sturluson, a sword is referred to as "Vindhlér's helmet-filler", meaning "Heimdallr's head".
In "Ynglinga saga" compiled in "Heimskringla", Snorri presents a euhemerized origin of the Norse gods and rulers descending from them. In chapter 5, Snorri asserts that the Æsir settled in what is now Sweden and built various temples. Snorri writes that Odin settled in Lake Logrin "at a place which formerly was called Sigtúnir. There he erected a large temple and made sacrifices according to the custom of the Æsir. He took possession of the land as far as he had called it Sigtúnir. He gave dwelling places to the temple priests." Snorri adds that, after this, Njörðr dwelt in Nóatún, Freyr dwelt in Uppsala, Heimdall at Himinbjörg, Thor at Þrúðvangr, Baldr at Breiðablik and that to everyone Odin gave fine estates.
A figure holding a large horn to his lips and clasping a sword on his hip appears on a stone cross from the Isle of Man. Some scholars have theorized that this figure is a depiction of Heimdallr with Gjallarhorn.
A 9th or 10th century Gosforth Cross in Cumbria, England depicts a figure holding a horn and a sword standing defiantly before two open-mouthed beasts. This figure has been often theorized as depicting Heimdallr with Gjallarhorn.
Heimdallr's attestations have proven troublesome and enigmatic to interpret for scholars. Scholar Georges Dumézil summarizes the difficulties as follows:
House of Lords
The House of Lords, also known as the House of Peers and domestically usually referred to simply as the Lords, is the upper house of the Parliament of the United Kingdom. Membership is granted by appointment or else by heredity or official function. Like the House of Commons, it meets in the Palace of Westminster.
Unlike the elected House of Commons, members of the House of Lords (excluding 90 hereditary peers elected among themselves and two peers who are "ex officio" members) are appointed. The membership of the House of Lords is drawn from the peerage and is made up of Lords Spiritual and Lords Temporal. The Lords Spiritual are 26 bishops in the established Church of England. Of the Lords Temporal, the majority are life peers who are appointed by the monarch on the advice of the Prime Minister, or on the advice of the House of Lords Appointments Commission. However, they also include some hereditary peers including four dukes.
Membership was once an entitlement of all hereditary peers, other than those in the peerage of Ireland, but under the House of Lords Act 1999, the right to membership was restricted to 92 hereditary peers. From 2008 to 2020, only one of them was female (Countess of Mar); most hereditary peerages can be inherited only by men.
While the House of Commons has a defined number of members, the number of members in the House of Lords is not fixed. The House of Lords is the only upper house of any bicameral parliament in the world to be larger than its lower house.
The House of Lords scrutinises bills that have been approved by the House of Commons. It regularly reviews and amends Bills from the Commons. While it is unable to prevent Bills passing into law, except in certain limited circumstances, it can delay Bills and force the Commons to reconsider their decisions. In this capacity, the House of Lords acts as a check on the House of Commons that is independent from the electoral process. Bills can be introduced into either the House of Lords or the House of Commons. While members of the Lords may also take on roles as government ministers, high-ranking officials such as cabinet ministers are usually drawn from the Commons. The House of Lords has its own support services, separate from the Commons, including the House of Lords Library.
The Queen's Speech is delivered in the House of Lords during the State Opening of Parliament. In addition to its role as the upper house, until the establishment of the Supreme Court in 2009, the House of Lords, through the Law Lords, acted as the final court of appeal in the United Kingdom judicial system. The House also has a Church of England role, in that Church Measures must be tabled within the House by the Lords Spiritual.
Today's Parliament of the United Kingdom largely descends, in practice, from the Parliament of England, through the Treaty of Union of 1706 and the Acts of Union that ratified the Treaty in 1707 and created a new Parliament of Great Britain to replace the Parliament of England and the Parliament of Scotland. This new parliament was, in effect, the continuation of the Parliament of England with the addition of 45 MPs and 16 Peers to represent Scotland.
The House of Lords developed from the "Great Council" ("Magnum Concilium") that advised the King during medieval times. This royal council came to be composed of ecclesiastics, noblemen, and representatives of the counties of England and Wales (afterwards, representatives of the boroughs as well). The first English Parliament is often considered to be the "Model Parliament" (held in 1295), which included archbishops, bishops, abbots, earls, barons, and representatives of the shires and boroughs.
The power of Parliament grew slowly, fluctuating as the strength of the monarchy grew or declined. For example, during much of the reign of Edward II (1307–1327), the nobility was supreme, the Crown weak, and the shire and borough representatives entirely powerless. In 1569, the authority of Parliament was for the first time recognised not simply by custom or royal charter, but by an authoritative statute, passed by Parliament itself.
During the reign of Edward II's successor, Edward III, Parliament clearly separated into two distinct chambers: the House of Commons (consisting of the shire and borough representatives) and the House of Lords (consisting of the bishops, abbots and peers). The authority of Parliament continued to grow, and during the early 15th century both Houses exercised powers to an extent not seen before. The Lords were far more powerful than the Commons because of the great influence of the great landowners and the prelates of the realm.
The power of the nobility declined during the civil wars of the late 15th century, known as the Wars of the Roses. Much of the nobility was killed on the battlefield or executed for participation in the war, and many aristocratic estates were lost to the Crown. Moreover, feudalism was dying, and the feudal armies controlled by the barons became obsolete. Henry VII (1485–1509) clearly established the supremacy of the monarch, symbolised by the "Crown Imperial". The domination of the Sovereign continued to grow during the reigns of the Tudor monarchs in the 16th century. The Crown was at the height of its power during the reign of Henry VIII (1509–1547).
The House of Lords remained more powerful than the House of Commons, but the Lower House continued to grow in influence, reaching a zenith in relation to the House of Lords during the middle 17th century. Conflicts between the King and the Parliament (for the most part, the House of Commons) ultimately led to the English Civil War during the 1640s. In 1649, after the defeat and execution of King Charles I, the Commonwealth of England was declared, but the nation was effectively under the overall control of Oliver Cromwell, Lord Protector of England, Scotland and Ireland.
The House of Lords was reduced to a largely powerless body, with Cromwell and his supporters in the Commons dominating the Government. On 19 March 1649, the House of Lords was abolished by an Act of Parliament, which declared that "The Commons of England [find] by too long experience that the House of Lords is useless and dangerous to the people of England." The House of Lords did not assemble again until the Convention Parliament met in 1660 and the monarchy was restored. It returned to its former position as the more powerful chamber of Parliament—a position it would occupy until the 19th century.
The 19th century was marked by several changes to the House of Lords. The House, once a body of only about 50 members, had been greatly enlarged by the liberality of George III and his successors in creating peerages. The individual influence of a Lord of Parliament was thus diminished.
Moreover, the power of the House as a whole decreased, whilst that of the House of Commons grew. Particularly notable in the development of the Lower House's superiority was the Reform Bill of 1832. The electoral system of the House of Commons was far from democratic: property qualifications greatly restricted the size of the electorate, and the boundaries of many constituencies had not been changed for centuries.
Entire cities such as Manchester had not even one representative in the House of Commons, while the 11 voters living in Old Sarum retained their ancient right to elect two MPs. A small borough was susceptible to bribery, and was often under the control of a patron, whose nominee was guaranteed to win an election. Some aristocrats were patrons of numerous "pocket boroughs", and therefore controlled a considerable part of the membership of the House of Commons.
When the House of Commons passed a Reform Bill to correct some of these anomalies in 1831, the House of Lords rejected the proposal. The popular cause of reform, however, was not abandoned by the ministry, despite a second rejection of the bill in 1832. Prime Minister Charles Grey, 2nd Earl Grey, advised the King to overwhelm opposition to the bill in the House of Lords by creating about 80 new pro-Reform peers. William IV originally balked at the proposal, which effectively threatened the opposition of the House of Lords, but at length relented.
Before the new peers were created, however, the Lords who opposed the bill admitted defeat and abstained from the vote, allowing the passage of the bill. The crisis damaged the political influence of the House of Lords but did not altogether end it. A vital reform was effected by the Lords themselves in 1868, when they changed their standing orders to abolish proxy voting, preventing Lords from voting without taking the trouble to attend. Over the course of the century the powers of the upper house were further reduced stepwise, culminating in the 20th century with the Parliament Act 1911; the Commons gradually became the stronger House of Parliament.
The status of the House of Lords returned to the forefront of debate after the election of a Liberal Government in 1906. In 1909 the Chancellor of the Exchequer, David Lloyd George, introduced into the House of Commons the "People's Budget", which proposed a land tax targeting wealthy landowners. The popular measure, however, was defeated in the heavily Conservative House of Lords.
Having made the powers of the House of Lords a primary campaign issue, the Liberals were narrowly re-elected in January 1910. Prime Minister H. H. Asquith then proposed that the powers of the House of Lords be severely curtailed. After a further general election in December 1910, and with an undertaking by King George V to create sufficient new Liberal peers to overcome Lords' opposition to the measure if necessary, the Asquith Government secured the passage of a bill to curtail the powers of the House of Lords.
The Parliament Act 1911 effectively abolished the power of the House of Lords to reject legislation, or to amend it in a way unacceptable to the House of Commons: most bills could be delayed for no more than three parliamentary sessions or two calendar years. It was not meant to be a permanent solution; more comprehensive reforms were planned. Neither party, however, pursued the matter with much enthusiasm, and the House of Lords remained primarily hereditary. The Parliament Act 1949 reduced the delaying power of the House of Lords further to two sessions or one year.
In 1958 the predominantly hereditary nature of the House of Lords was changed by the Life Peerages Act 1958, which authorised the creation of life baronies, with no numerical limits. The number of Life Peers then gradually increased, though not at a constant rate.
The Labour Party had, for most of the 20th century, a commitment, based on the party's historic opposition to class privilege, to abolish the House of Lords, or at least expel the hereditary element. In 1968 the Labour Government of Harold Wilson attempted to reform the House of Lords by introducing a system under which hereditary peers would be allowed to remain in the House and take part in debate, but would be unable to vote. This plan, however, was defeated in the House of Commons by a coalition of traditionalist Conservatives (such as Enoch Powell), and Labour members who continued to advocate the outright abolition of the Upper House (such as Michael Foot).
When Michael Foot became leader of the Labour Party in 1980, abolition of the House of Lords became a part of the party's agenda; under his successor, Neil Kinnock, however, a reformed Upper House was proposed instead. In the meantime, the creation of hereditary peerages (except for members of the Royal Family) had been halted, with the exception of three creations during the administration of the Conservative Margaret Thatcher in the 1980s.
Whilst some hereditary peers were at best apathetic, the Labour Party's clear commitments were not lost on Merlin Hanbury-Tracy, 7th Baron Sudeley, who for decades was considered an expert on the House of Lords. In December 1979 the Conservative Monday Club published his extensive paper entitled "Lords Reform – Why tamper with the House of Lords?" and in July 1980 "The Monarchist" carried another article by Sudeley entitled "Why Reform or Abolish the House of Lords?". In 1990 he wrote a further booklet for the Monday Club entitled "The Preservation of the House of Lords".
There were no women sitting in the House of Lords until 1958, when a small number came into the chamber as a result of the Life Peerages Act 1958. One of these was Irene Curzon, 2nd Baroness Ravensdale, who had inherited her father's peerage in 1925 and was made a life peer to enable her to sit. After a campaign stretching back in some cases to the 1920s, another twelve women who held hereditary peerages in their own right were admitted by the Peerage Act 1963.
The Labour Party included in its 1997 general election manifesto a commitment to remove the hereditary peerage from the House of Lords. Their subsequent election victory in 1997 under Tony Blair led to the denouement of the traditional House of Lords. The Labour Government introduced legislation to expel all hereditary peers from the Upper House as a first step in Lords reform. As a part of a compromise, however, it agreed to permit 92 hereditary peers to remain until the reforms were complete. Thus all but 92 hereditary peers were expelled under the House of Lords Act 1999 (see below for its provisions), making the House of Lords predominantly an appointed house.
Since 1999, however, no further reform has taken place. The Wakeham Commission proposed introducing a 20% elected element to the Lords, but this plan was widely criticised. A parliamentary Joint Committee was established in 2001 to resolve the issue, but it reached no conclusion and instead gave Parliament seven options to choose from (fully appointed, 20% elected, 40% elected, 50% elected, 60% elected, 80% elected, and fully elected). In a confusing series of votes in February 2003, all of these options were defeated, although the 80% elected option fell by just three votes in the Commons. Socialist MPs favouring outright abolition voted against all the options.
In 2005, a cross-party group of senior MPs (Kenneth Clarke, Paul Tyler, Tony Wright, George Young and Robin Cook) published a report proposing that 70% of members of the House of Lords should be elected—each member for a single long term—by the single transferable vote system. Most of the remainder were to be appointed by a Commission to ensure a mix of "skills, knowledge and experience". This proposal was also not implemented. A cross-party campaign initiative called "Elect the Lords" was set up to make the case for a predominantly elected Second Chamber in the run up to the 2005 general election.
At the 2005 election, the Labour Party proposed further reform of the Lords, but without specific details. The Conservative Party, which had, prior to 1997, opposed any tampering with the House of Lords, favoured an 80% elected Second Chamber, while the Liberal Democrats called for a fully elected Senate. During 2006, a cross-party committee discussed Lords reform, with the aim of reaching a consensus: its findings were published in early 2007.
On 7 March 2007, members of the House of Commons voted ten times on a variety of alternative compositions for the upper chamber. Outright abolition, a wholly appointed house, a 20% elected house, a 40% elected house, a 50% elected house and a 60% elected house were all defeated in turn. Finally the vote for an 80% elected chamber was won by 305 votes to 267, and the vote for a wholly elected chamber was won by an even greater margin: 337 to 224. Significantly this last vote represented an overall majority of MPs.
Furthermore, examination of the names of MPs voting at each division shows that, of the 305 who voted for the 80% elected option, 211 went on to vote for the 100% elected option. Given that this vote took place after the vote on 80% – whose result was already known when the vote on 100% took place – this showed a clear preference for a fully elected upper house among those who voted for the only other option that passed. But this was nevertheless only an indicative vote and many political and legislative hurdles remained to be overcome for supporters of an elected second chamber. The House of Lords, soon after, rejected this proposal and voted for an entirely appointed House of Lords.
In July 2008, Jack Straw, the Secretary of State for Justice and Lord Chancellor, introduced a white paper to the House of Commons proposing to replace the House of Lords with an 80–100% elected chamber, with one third being elected at each general election, for a term of approximately 12–15 years. The white paper stated that as the peerage would be totally separated from membership of the upper house, the name "House of Lords" would no longer be appropriate: it went on to explain that there is cross-party consensus for the new chamber to be titled the "Senate of the United Kingdom"; however, to ensure the debate remains on the role of the upper house rather than its title, the white paper was neutral on the title of the new house.
On 30 November 2009, the House agreed a "Code of Conduct for Members of the House of Lords"; amendments to it were agreed on 30 March 2010 and on 12 June 2014. The scandal over expenses in the Commons had been at its highest pitch only six months before, and the Labour leadership under Baroness Royall of Blaisdon determined that the Lords should be seen to respond.
In her article "Is the House of Lords already reformed?", Meg Russell states three essential features of a legitimate House of Lords. The first is that it must have adequate powers over legislation to make the government think twice before making a decision. The House of Lords, she argues, currently has enough power to make it relevant: during Tony Blair's first year, he was defeated 38 times in the Lords (although this was before the major reform of the House of Lords Act 1999). Secondly, as to the composition of the Lords, Russell suggests that it must be distinct from the Commons, otherwise the Lords would be rendered useless. The third feature is the perceived legitimacy of the Lords. She writes, "In general legitimacy comes with election."
The Conservative–Liberal Democrat coalition agreed, after the 2010 general election, to outline clearly a provision for a wholly or mainly elected second chamber, elected by proportional representation. These proposals sparked a debate on 29 June 2010. As an interim measure, appointment of new peers would reflect the shares of the vote secured by the political parties in the last general election.
Detailed proposals for Lords reform, including a draft House of Lords Reform Bill, were published on 17 May 2011. These included a 300-member hybrid house, of whom 80% would be elected. A further 20% would be appointed, and reserve space would be included for some Church of England bishops. Under the proposals, members would also serve single non-renewable terms of 15 years. Former MPs would be allowed to stand for election to the Upper House, but members of the Upper House would not be immediately allowed to become MPs.
The details of the proposal were:
The proposals were considered by a Joint Committee on House of Lords Reform made up of both MPs and Peers, which issued its final report on 23 April 2012, making the following suggestions:
Deputy Prime Minister Nick Clegg introduced the House of Lords Reform Bill 2012 on 27 June 2012 which built on proposals published on 17 May 2011. However, this Bill was abandoned by the Government on 6 August 2012 following opposition from within the Conservative Party.
A private members bill to introduce some reforms was introduced by Dan Byles in 2013. The House of Lords Reform Act 2014 received the Royal Assent in 2014. Under the new law:
The House of Lords (Expulsion and Suspension) Act 2015 authorised the House to expel or suspend members.
The Lords Spiritual (Women) Act 2015 makes provision to preferentially admit bishops of the Church of England who are women to the Lords Spiritual in the 10 years following its commencement.
In 2015, Rachel Treweek, Bishop of Gloucester, became the first woman to sit as a Lord Spiritual in the House of Lords. As of 2019, five women bishops sit as Lords Spiritual, four of them due to this act.
In 2019, a seven-month inquiry by Naomi Ellenbogen QC found that one in five staff of the house had experienced bullying or harassment which they did not report for fear of reprisals. This was preceded by several cases, including that of Liberal Democrat Lord Lester, of lords who used their position to sexually harass or abuse women.
On 19 January 2020, it was announced that the House of Lords might be moved from London to a city in Northern England, most likely York, or to Birmingham in the Midlands, in an attempt to "reconnect" with areas outside the capital. It is unclear how the Queen's Speech would be conducted in the event of a move. The idea was received negatively by many peers.
The size of the House of Lords has varied greatly throughout its history. In 1707, the English House of Lords, then comprising 168 members, was joined at Westminster by 16 Scottish peers representing the peerage of Scotland, a total of 184 nobles, in the first Parliament of Great Britain. A further 28 Irish members representing the peerage of Ireland were added in 1801 to the first Parliament of the United Kingdom. From about 220 peers in the eighteenth century, the house saw continued expansion; with the increasing numbers of life peers after the Life Peerages Act 1958 and the inclusion of all Scottish peers and the first female peers in the Peerage Act 1963, it increased to a record size of 1,330 in October 1999, before Lords reform reduced it to 669, mostly life peers, by March 2000. The chamber's membership again expanded in the following decades, increasing to above eight hundred active members in 2014 and prompting further reforms in the House of Lords Reform Act that year. A cap of 600 members was subsequently proposed by the Lords, though the current figure is higher.
In April 2011, a cross-party group of former leading politicians, including many senior members of the House of Lords, called on the Prime Minister David Cameron to stop creating new peers. He had created 117 new peers since becoming prime minister in May 2010, a faster rate of elevation than any PM in British history. The expansion occurred while his government had tried (in vain) to reduce the size of the House of Commons by 50 members, from 650 to 600.
In August 2014, despite there being a seating capacity of only around 230 to 400 on the benches in the Lords chamber, the House had 774 active members (plus 54 who were not entitled to attend or vote, having been suspended or granted leave of absence). This made the House of Lords the largest parliamentary chamber in any democracy. In August 2014, former Speaker of the House of Commons Baroness Betty Boothroyd requested that "older peers should retire gracefully" to ease the overcrowding in the House of Lords. She also criticised successive prime ministers for filling the second chamber with "lobby fodder" in an attempt to help their policies become law. She made her remarks days before a new batch of peers were due to be created and several months after the passage of the House of Lords Reform Act 2014 which enabled peers to retire or resign their seats in the House, which had previously been impossible.
In August 2015, following the creation of a further 45 peers in the Dissolution Honours, the total number of eligible members of the Lords increased to 826. In a report entitled "Does size matter?" the BBC said: "Increasingly, yes. Critics argue the House of Lords is the second largest legislature after the Chinese National People's Congress and dwarfs upper houses in other bicameral democracies such as the United States (100 senators), France (348 senators), Australia (76 senators), Canada (105 appointed senators) and India (250 members). The Lords is also larger than the Supreme People's Assembly of North Korea (687 members). [...] Peers grumble that there is not enough room to accommodate all of their colleagues in the Chamber, where there are only about 400 seats, and say they are constantly jostling for space – particularly during high-profile sittings", but added, "On the other hand, defenders of the Lords say that it does a vital job scrutinising legislation, a lot of which has come its way from the Commons in recent years". In late 2016, a Lord Speaker's committee formed to examine the issue of overcrowding, with fears membership could swell to above 1,000, and in October 2017 the committee presented its findings. In December 2017, the Lords debated and broadly approved its report, which proposed a cap on membership at 600 peers, with a fifteen-year term limit for new peers and a "two-out, one-in" limit on new appointments. By October 2018, the Lord Speaker's committee commended the reduction in peers' numbers, noting that the rate of departures had been greater than expected, with the House of Commons's Public Administration and Constitutional Affairs Select Committee approving the progress achieved without legislation. By April 2019, with the retirement of nearly one hundred peers since the passage of the House of Lords Reform Act 2014, the number of active peers had been reduced to a total of 782, of whom 665 were life peers. 
This total, however, remains greater than the membership of 669 peers in March 2000, after the House of Lords Act 1999 removed the bulk of the hereditary peers from their seats; it also remains well above the proposed 600-member cap, and is still larger than the House of Commons's 650 members.
Legislation, with the exception of money bills, may be introduced in either House.
The House of Lords debates legislation, and has power to amend or reject bills. However, the power of the Lords to reject a bill passed by the House of Commons is severely restricted by the Parliament Acts. Under those Acts, certain types of bills may be presented for the Royal Assent without the consent of the House of Lords (i.e. the Commons can override the Lords' veto). The House of Lords cannot delay a money bill (a bill that, in the view of the Speaker of the House of Commons, solely concerns national taxation or public funds) for more than one month.
Other public bills cannot be delayed by the House of Lords for more than two parliamentary sessions, or one calendar year. These provisions, however, only apply to public bills that originate in the House of Commons, and cannot have the effect of extending a parliamentary term beyond five years. A further restriction is a constitutional convention known as the Salisbury Convention, which means that the House of Lords does not oppose legislation promised in the Government's election manifesto.
By a custom that prevailed even before the Parliament Acts, the House of Lords is further restrained insofar as financial bills are concerned. The House of Lords may neither originate a bill concerning taxation or Supply (supply of treasury or exchequer funds), nor amend a bill so as to insert a taxation or Supply-related provision. (The House of Commons, however, often waives its privileges and allows the Upper House to make amendments with financial implications.) Moreover, the Upper House may not amend any Supply Bill. The House of Lords formerly maintained the absolute power to reject a bill relating to revenue or Supply, but this power was curtailed by the Parliament Acts.
The House of Lords does not control the term of the prime minister or of the government. Only the lower house may force the prime minister to resign or call elections by passing a motion of no-confidence or by withdrawing supply. Thus, the House of Lords' oversight of the government is limited.
Most Cabinet ministers are from the House of Commons rather than the House of Lords. In particular, all prime ministers since 1902 have been members of the lower house. (Alec Douglas-Home, who became prime minister in 1963 whilst still an earl, disclaimed his peerage and was elected to the Commons soon after his term began.) In recent history, it has been very rare for major cabinet positions (except Lord Chancellor and Leader of the House of Lords) to have been filled by peers.
Exceptions include Lord Carrington, who was Secretary of State for Defence from 1970 to 1974, Secretary of State for Energy briefly for two months in early 1974, and Secretary of State for Foreign and Commonwealth Affairs between 1979 and 1982; Lord Cockfield, who served as Secretary of State for Trade and President of the Board of Trade; Lord Young of Graffham (Minister without Portfolio, then Secretary of State for Employment, then Secretary of State for Trade and Industry and President of the Board of Trade, from 1984 to 1989); Baroness Amos, who served as Secretary of State for International Development; Lord Adonis, who served as Secretary of State for Transport; and Lord Mandelson, who served as First Secretary of State, Secretary of State for Business, Innovation and Skills and President of the Board of Trade. Lord Robertson of Port Ellen was briefly a peer whilst serving as Secretary of State for Defence before resigning to take up the post of Secretary General of NATO. From 1999 to 2010 the Attorney General for England and Wales was a member of the House of Lords; the most recent was Patricia Scotland.
The House of Lords remains a source for junior ministers and members of government. Like the House of Commons, the Lords also has a Government Chief Whip as well as several Junior Whips. Where a government department is not represented by a minister in the Lords or one is not available, government whips will act as spokesmen for them.
Historically, the House of Lords held several judicial functions. Most notably, until 2009 the House of Lords served as the court of last resort for most instances of UK law. Since 1 October 2009, this role has been held by the Supreme Court of the United Kingdom.
The Lords' judicial functions originated from the ancient role of the Curia Regis as a body that addressed the petitions of the King's subjects. The functions were exercised not by the whole House, but by a committee of "Law Lords". The bulk of the House's judicial business was conducted by the twelve Lords of Appeal in Ordinary, who were specifically appointed for this purpose under the Appellate Jurisdiction Act 1876.
The judicial functions could also be exercised by Lords of Appeal (other members of the House who happened to have held high judicial office). No Lord of Appeal in Ordinary or Lord of Appeal could sit judicially beyond the age of seventy-five. The judicial business of the Lords was supervised by the Senior Lord of Appeal in Ordinary and their deputy, the Second Senior Lord of Appeal in Ordinary.
The jurisdiction of the House of Lords extended, in civil and in criminal cases, to appeals from the courts of England and Wales, and of Northern Ireland. From Scotland, appeals were possible only in civil cases; Scotland's High Court of Justiciary is the highest court in criminal matters. The House of Lords was not the United Kingdom's only court of last resort; in some cases, the Judicial Committee of the Privy Council performs such a function. The jurisdiction of the Privy Council in the United Kingdom, however, is relatively restricted; it encompasses appeals from ecclesiastical courts, disputes under the House of Commons Disqualification Act 1975, and a few other minor matters. Issues related to devolution were transferred from the Privy Council to the Supreme Court in 2009.
The twelve Law Lords did not all hear every case; rather, after World War II cases were heard by panels known as Appellate Committees, each of which normally consisted of five members (selected by the Senior Lord). An Appellate Committee hearing an important case could consist of more than five members. Though Appellate Committees met in separate committee rooms, judgement was given in the Lords Chamber itself. No further appeal lay from the House of Lords, although the House of Lords could refer a "preliminary question" to the European Court of Justice in cases involving an element of European Union law, and a case could be brought at the European Court of Human Rights if the House of Lords did not provide a satisfactory remedy in cases where the European Convention on Human Rights was relevant.
A distinct judicial function—one in which the whole House used to participate—is that of trying impeachments. Impeachments were brought by the House of Commons, and tried in the House of Lords; a conviction required only a majority of the Lords voting. Impeachments, however, are to all intents and purposes obsolete; the last impeachment was that of Henry Dundas, 1st Viscount Melville, in 1806.
Similarly, the House of Lords was once the court that tried peers charged with high treason or felony. The House would be presided over not by the Lord Chancellor, but by the Lord High Steward, an official especially appointed for the occasion of the trial. If Parliament was not in session, then peers could be tried in a separate court, known as the Lord High Steward's Court. Only peers, their wives, and their widows (unless remarried) were entitled to such trials; the Lords Spiritual were tried in ecclesiastical courts. In 1948, the right of peers to be tried in such special courts was abolished; now, they are tried in the regular courts. The last such trial in the House was of Edward Russell, 26th Baron de Clifford, in 1935. An illustrative dramatisation, set circa 1928, of the trial of a peer (the fictional Duke of Denver) on a charge of murder (a felony) appears in the 1972 BBC Television adaptation of Dorothy L. Sayers' Lord Peter Wimsey mystery "Clouds of Witness".
The Constitutional Reform Act 2005 resulted in the creation of a separate Supreme Court of the United Kingdom, to which the judicial function of the House of Lords, and some of the judicial functions of the Judicial Committee of the Privy Council, were transferred. In addition, the office of Lord Chancellor was reformed by the act, removing his ability to act as both a government minister and a judge. This was motivated in part by concerns about the historical admixture of legislative, judicial, and executive power. The new Supreme Court is located at Middlesex Guildhall.
Members of the House of Lords who sit by virtue of their ecclesiastical offices are known as Lords Spiritual. Formerly, the Lords Spiritual were the majority in the English House of Lords, comprising the church's archbishops, (diocesan) bishops, abbots, and those priors who were entitled to wear a mitre. After the highpoint of the English Reformation in 1539, only the archbishops and bishops continued to attend, as the Dissolution of the Monasteries had abolished the positions of abbot and prior. In 1642, during the conflicts that led into the Civil War and the Interregnum, the Lords Spiritual were excluded altogether, but they returned under the Clergy Act 1661.
The number of Lords Spiritual was further restricted by the Bishopric of Manchester Act 1847, and by later Acts. The Lords Spiritual can now number no more than 26; these are the Archbishop of Canterbury, the Archbishop of York, the Bishop of London, the Bishop of Durham, the Bishop of Winchester (who sit by right regardless of seniority) and the 21 longest-serving bishops from other dioceses in the Church of England (excluding the dioceses of Sodor and Man and Gibraltar in Europe, as these lie entirely outside the United Kingdom). Following a change to the law in 2014 to allow women to be ordained bishops, the Lords Spiritual (Women) Act 2015 was passed, which provides that whenever a vacancy arises among the Lords Spiritual during the ten years following the Act coming into force, the vacancy has to be filled by a woman, if one is eligible. This does not apply to the five bishops who sit by right.
The current Lords Spiritual represent only the Church of England. Bishops of the Church of Scotland historically sat in the Parliament of Scotland but were finally excluded in 1689 (after a number of previous exclusions) when the Church of Scotland became permanently Presbyterian. There are no longer bishops in the Church of Scotland in the traditional sense of the word, and that Church has never sent members to sit in the Westminster House of Lords. The Church of Ireland did obtain representation in the House of Lords after the union of Ireland and Great Britain in 1801.
Of the Church of Ireland's ecclesiastics, four (one archbishop and three bishops) were to sit at any one time, with the members rotating at the end of every parliamentary session (which normally lasted about one year). The Church of Ireland, however, was disestablished in 1871, and thereafter ceased to be represented by Lords Spiritual. Bishops of Welsh sees in the Church of England originally sat in the House of Lords (after 1847, only if their seniority within the church entitled them to), but the Church in Wales ceased to be a part of the Church of England in 1920 and was simultaneously disestablished in Wales. Accordingly, bishops of the Church in Wales were no longer eligible to be appointed to the House as bishops of the Church of England, but those already appointed remained.
Other ecclesiastics have sat in the House of Lords as Lords Temporal in recent times: Chief Rabbi Immanuel Jakobovits was appointed to the House of Lords (with the consent of the Queen, who acted on the advice of Prime Minister Margaret Thatcher), as was his successor Chief Rabbi Jonathan Sacks. Julia Neuberger is the senior rabbi to the West London Synagogue. In recognition of his work towards reconciliation and the peace process in Northern Ireland, the Archbishop of Armagh (the senior Anglican bishop in Northern Ireland), Robin Eames, was appointed to the Lords by John Major. Other clergy appointed include Donald Soper, Timothy Beaumont, and some Scottish clerics.
There have been no Roman Catholic clergy appointed, though it was rumoured that Cardinal Basil Hume and his successor Cormac Murphy-O'Connor were offered peerages, by James Callaghan, Margaret Thatcher and Tony Blair, but declined. Hume later accepted the Order of Merit, a personal appointment of the Queen, shortly before his death. Murphy-O'Connor said he had his maiden speech ready, but Roman Catholics who have received holy orders are prohibited by canon law from holding major offices connected with any government other than the Holy See.
Former Archbishops of Canterbury, having reverted to the status of regular bishops and being no longer diocesans, are invariably given life peerages and sit as Lords Temporal.
By custom, at least one of the bishops reads prayers on each legislative day (a role taken by the chaplain in the Commons). They often speak in debates; in 2004 Rowan Williams, the Archbishop of Canterbury, opened a debate on sentencing legislation. Measures (proposed laws of the Church of England) must be put before the Lords, and the Lords Spiritual have a role in ensuring that this takes place.
Since the Dissolution of the Monasteries, the Lords Temporal have been the most numerous group in the House of Lords. Unlike the Lords Spiritual, they may be publicly partisan, aligning themselves with one or another of the political parties that dominate the House of Commons. Publicly non-partisan Lords are called crossbenchers. Originally, the Lords Temporal included several hundred hereditary peers (that is, those whose peerages may be inherited), who ranked variously as dukes, marquesses, earls, viscounts, and barons (as well as Scottish Lords of Parliament). Such hereditary dignities can be created by the Crown; in modern times this is done on the advice of the Prime Minister of the day (except in the case of members of the Royal Family).
Holders of Scottish and Irish peerages were not always permitted to sit in the Lords. When Scotland united with England to form Great Britain in 1707, it was provided that the Scottish hereditary peers would only be able to elect 16 representative peers to sit in the House of Lords; the term of a representative was to extend until the next general election. A similar provision was enacted when Ireland merged with Great Britain in 1801 to form the United Kingdom; the Irish peers were allowed to elect 28 representatives, who were to retain office for life. Elections for Irish representatives ended in 1922, when most of Ireland became an independent state; elections for Scottish representatives ended with the passage of the Peerage Act 1963, under which all Scottish peers obtained seats in the Upper House.
In 1999, the Labour government brought forward the House of Lords Act removing the right of several hundred hereditary peers to sit in the House. The Act provided, as a measure intended to be temporary, that 92 people would continue to sit in the Lords by virtue of hereditary peerages, and this is still in effect.
Of the 92, two remain in the House of Lords because they hold royal offices connected with Parliament: the Earl Marshal and the Lord Great Chamberlain. Of the remaining ninety peers sitting in the Lords by virtue of a hereditary peerage, 15 are elected by the whole House and 75 are chosen by fellow hereditary peers in the House of Lords, grouped by party. (If a hereditary peerage holder is given a life peerage, he or she becomes a member of the House of Lords without a need for a by-election.) The exclusion of other hereditary peers removed Charles, Prince of Wales (who is also Earl of Chester) and all other Royal Peers, including Prince Philip, Duke of Edinburgh; Prince Andrew, Duke of York; Prince Edward, Earl of Wessex; Prince Richard, Duke of Gloucester; and Prince Edward, Duke of Kent.
The number of hereditary peers to be chosen by a political group reflects the proportion of hereditary peers that belonged to that group (see current composition below) in 1999. When an elected hereditary peer dies, a by-election is held, with a variant of the Alternative Vote system being used. If the recently deceased hereditary peer had been elected by the whole House, then so is his or her replacement; a hereditary peer elected by a specific political group (including the non-aligned crossbenchers) is replaced by a vote of the hereditary peers already elected to the Lords belonging to that political group (whether elected by that group or by the whole house).
Until 2009, the Lords Temporal also included the Lords of Appeal in Ordinary, more commonly known as Law Lords, a group of individuals appointed to the House of Lords so that they could exercise its judicial functions. Lords of Appeal in Ordinary were first appointed under the Appellate Jurisdiction Act 1876. They were selected by the Prime Minister of the day, but were formally appointed by the Sovereign. A Lord of Appeal in Ordinary had to retire at the age of 70, or, if his or her term was extended by the government, at the age of 75; after reaching such an age, the Law Lord could not hear any further cases in the House of Lords.
The number of Lords of Appeal in Ordinary (excluding those who were no longer able to hear cases because of age restrictions) was limited to twelve, but could be changed by statutory instrument. By a convention of the House, Lords of Appeal in Ordinary did not take part in debates on new legislation, so as to maintain judicial independence. Lords of Appeal in Ordinary held their seats in the House of Lords for life, remaining as members even after reaching the judicial retirement age of 70 or 75. Former Lord Chancellors and holders of other high judicial office could also sit as Law Lords under the Appellate Jurisdiction Act, although in practice this right was only rarely exercised.
Under the Constitutional Reform Act 2005, the Lords of Appeal in Ordinary when the Act came into effect in 2009 became judges of the new Supreme Court of the United Kingdom and were then barred from sitting or voting in the House of Lords until they had retired as judges. One of the main justifications for the new Supreme Court was to establish a separation of powers between the judiciary and the legislature. It is therefore unlikely that future appointees to the Supreme Court of the United Kingdom will be made Lords of Appeal in Ordinary.
The largest group of Lords Temporal, and indeed of the whole House, are life peers. As of June 2019 there are 661 life peers. Life peerages rank only as barons or baronesses, and are created under the Life Peerages Act 1958. Like all other peers, life peers are created by the Sovereign, who acts on the advice of the Prime Minister or the House of Lords Appointments Commission. By convention, however, the Prime Minister allows leaders of other parties to nominate some life peers, so as to maintain a political balance in the House of Lords. Moreover, some non-party life peers (the number being determined by the Prime Minister) are nominated by the independent House of Lords Appointments Commission.
In 2000, the government announced it would set up an Independent Appointments Commission, under Lord Stevenson of Coddenham, to select fifteen so-called "people's peers" for life peerages. However, when the choices were announced in April 2001, from a list of 3,000 applicants, they were criticised in the media, as all were distinguished in their fields and none were "ordinary people" as some had originally hoped.
Several different qualifications apply for membership of the House of Lords. No person may sit in the House of Lords if under the age of 21. Furthermore, only United Kingdom, Irish and Commonwealth citizens may sit in the House of Lords. The nationality restrictions were previously more stringent: under the Act of Settlement 1701, and prior to the British Nationality Act 1948, only natural-born subjects qualified.
Additionally, some bankruptcy-related restrictions apply to members of the Upper House. A person may not sit in the House of Lords if he or she is the subject of a Bankruptcy Restrictions Order (applicable in England and Wales only), if he or she is adjudged bankrupt (in Northern Ireland), or if his or her estate is sequestered (in Scotland). A final restriction bars an individual convicted of high treason from sitting in the House of Lords until completing his or her full term of imprisonment. An exception applies, however, if the individual convicted of high treason receives a full pardon. Note that an individual serving a prison sentence for an offence other than high treason is "not" automatically disqualified.
Women were excluded from the House of Lords until the Life Peerages Act 1958, passed to address the declining number of active members, made possible the creation of peerages for life. Women were immediately eligible and four were among the first life peers appointed. However, hereditary peeresses continued to be excluded until the passage of the Peerage Act 1963. Since the passage of the House of Lords Act 1999, hereditary peeresses remain eligible for election to the Upper House; until her resignation on 1 May 2020, there was one (Margaret of Mar, 31st Countess of Mar) among the 90 hereditary peers who continue to sit.
The Honours (Prevention of Abuses) Act 1925 made it illegal for a peerage, or other honour, to be bought or sold. Nonetheless, there have been repeated allegations that life peerages (and thus membership of the House of Lords) have been made available to major political donors in exchange for donations. The most prominent case, the 2006 Cash for Honours scandal, saw a police investigation, with no charges being brought. A 2015 study found that of 303 people nominated for peerages in the period 2005–14, a total of 211 were former senior figures within politics (including former MPs), or were non-political appointments. Of the remaining 92 political appointments from outside public life, 27 had made significant donations to political parties. The authors concluded firstly that nominees from outside public life were much more likely to have made large gifts than peers nominated after prior political or public service. They also found that significant donors to parties were far more likely to be nominated for peerages than other party members.
Traditionally there was no mechanism by which members could resign or be removed from the House of Lords (compare the situation as regards resignation from the House of Commons). The Peerage Act 1963 permitted a person to disclaim their newly inherited peerage (within certain time limits); this meant that such a person could effectively renounce their membership of the Lords. This might be done in order to remain or become qualified to sit in the House of Commons, as in the case of Tony Benn (formerly the second Viscount Stansgate), who had campaigned for such a change.
The House of Lords Reform Act 2014 made provision for members' resignation from the House, removal for non-attendance, and automatic expulsion upon conviction for a serious criminal offence (if resulting in a jail sentence of at least one year). Under the House of Lords (Expulsion and Suspension) Act 2015, in force since June 2015, the House's Standing Orders may provide for the expulsion or suspension of a member upon a resolution of the House.
Traditionally the House of Lords did not elect its own speaker, unlike the House of Commons; rather, the "ex officio" presiding officer was the Lord Chancellor. With the passage of the Constitutional Reform Act 2005, the post of Lord Speaker was created, a position to which a peer is elected by the House and subsequently appointed by the Crown. The first Lord Speaker, elected on 4 May 2006, was Baroness Hayman, a former Labour peer. As the Speaker is expected to be an impartial presiding officer, Hayman resigned from the Labour Party. Baroness D'Souza was elected as the second Lord Speaker, replacing Hayman in September 2011. D'Souza was in turn succeeded by Lord Fowler, the incumbent Lord Speaker, in September 2016.
This reform of the post of Lord Chancellor was made because of the perceived constitutional anomalies inherent in the role. The Lord Chancellor was not only the Speaker of the House of Lords, but also a member of the Cabinet; his or her department, formerly the Lord Chancellor's Department, is now called the Ministry of Justice. The Lord Chancellor is no longer the head of the judiciary of England and Wales. Hitherto, the Lord Chancellor was part of all three branches of government: the legislative, the executive, and the judicial.
The overlap of the legislative and executive roles is a characteristic of the Westminster system, as the entire cabinet consists of members of the House of Commons or the House of Lords; however, in June 2003, the Blair Government announced its intention to abolish the post of Lord Chancellor because of the office's mixed executive and judicial responsibilities. The abolition of the office was rejected by the House of Lords, and the Constitutional Reform Act 2005 was thus amended to preserve the office of Lord Chancellor. The Act no longer guarantees that the office holder of Lord Chancellor is the presiding officer of the House of Lords, and therefore allows the House of Lords to elect a speaker of their own.
The Lord Speaker may be replaced as presiding officer by one of his or her deputies. The Chairman of Committees, the Principal Deputy Chairman of Committees, and several Deputy Chairmen are all deputies to the Lord Speaker, and are all appointed by the House of Lords itself at the beginning of each session. By custom, the Crown appoints each Chairman, Principal Deputy Chairman and Deputy Chairman to the additional office of Deputy Speaker of the House of Lords. There was previously no legal requirement that the Lord Chancellor or a Deputy Speaker be a member of the House of Lords (though the same has long been customary).
Whilst presiding over the House of Lords, the Lord Chancellor traditionally wore ceremonial black and gold robes. Robes of black and gold are now worn by the Lord Chancellor and Secretary of State for Justice in the House of Commons, on ceremonial occasions. This is no longer a requirement for the Lord Speaker except for State occasions outside of the chamber. The Speaker or Deputy Speaker sits on the Woolsack, a large red seat stuffed with wool, at the front of the Lords Chamber.
When the House of Lords resolves itself into committee (see below), the Chairman of Committees or a Deputy Chairman of Committees presides, not from the Woolsack, but from a chair at the Table of the House. The presiding officer has little power compared to the Speaker of the House of Commons. He or she only acts as the mouthpiece of the House, performing duties such as announcing the results of votes. This is because, unlike in the House of Commons where all statements are directed to "Mr/Madam Speaker", in the House of Lords they are directed to "My Lords"; i.e., the entire body of the House.
The Lord Speaker or Deputy Speaker cannot determine which members may speak, or discipline members for violating the rules of the House; these measures may be taken only by the House itself. Unlike the politically neutral Speaker of the House of Commons, the Lord Chancellor and Deputy Speakers originally remained members of their respective parties, and were permitted to participate in debate; however, this is no longer true of the new role of Lord Speaker.
Another officer of the body is the Leader of the House of Lords, a peer selected by the Prime Minister. The Leader of the House is responsible for steering Government bills through the House of Lords, and is a member of the Cabinet. The Leader also advises the House on proper procedure when necessary, but such advice is merely informal, rather than official and binding. A Deputy Leader is also appointed by the Prime Minister, and takes the place of an absent or unavailable leader.
The Clerk of the Parliaments is the chief clerk and officer of the House of Lords (but is not a member of the House itself). The Clerk, who is appointed by the Crown, advises the presiding officer on the rules of the House, signs orders and official communications, endorses bills, and is the keeper of the official records of both Houses of Parliament. Moreover, the Clerk of the Parliaments is responsible for arranging by-elections of hereditary peers when necessary. The deputies of the Clerk of the Parliaments (the Clerk Assistant and the Reading Clerk) are appointed by the Lord Speaker, subject to the House's approval.
The Gentleman Usher of the Black Rod is also an officer of the House; he takes his title from the symbol of his office, a black rod. Black Rod (as the Gentleman Usher is normally known) is responsible for ceremonial arrangements, is in charge of the House's doorkeepers, and may (upon the order of the House) take action to end disorder or disturbance in the Chamber. Black Rod also holds the office of Serjeant-at-Arms of the House of Lords, and in this capacity attends upon the Lord Speaker. The Gentleman Usher of the Black Rod's duties may be delegated to the Yeoman Usher of the Black Rod or to the Assistant Serjeant-at-Arms.
The House of Lords and the House of Commons assemble in the Palace of Westminster. The Lords Chamber is lavishly decorated, in contrast with the more modestly furnished Commons Chamber. Benches in the Lords Chamber are coloured red. The Woolsack is at the front of the Chamber; the Government sit on benches on the right of the Woolsack, while members of the Opposition sit on the left. Crossbenchers sit on the benches immediately opposite the Woolsack.
The Lords Chamber is the site of many formal ceremonies, the most famous of which is the State Opening of Parliament, held at the beginning of each new parliamentary session. During the State Opening, the Sovereign, seated on the Throne in the Lords Chamber and in the presence of both Houses of Parliament, delivers a speech outlining the Government's agenda for the upcoming parliamentary session.
In the House of Lords, members need not seek the recognition of the presiding officer before speaking, as is done in the House of Commons. If two or more Lords simultaneously rise to speak, the House decides which one is to be heard by acclamation, or, if necessary, by voting on a motion. Often, however, the Leader of the House will suggest an order, which is thereafter generally followed. Speeches in the House of Lords are addressed to the House as a whole ("My Lords") rather than to the presiding officer alone (as is the custom in the Lower House). Members may not refer to each other in the second person (as "you"), but rather use third person forms such as "the noble Duke", "the noble Earl", "the noble Lord", "my noble friend", "The most Reverend Primate", etc.
Each member may make no more than one speech on a motion, except that the mover of the motion may make one speech at the beginning of the debate and another at the end. Speeches are not subject to any time limits in the House; however, the House may put an end to a speech by approving a motion "that the noble Lord be no longer heard". It is also possible for the House to end the debate entirely, by approving a motion "that the Question be now put". This procedure is known as Closure, and is extremely rare. Six closure motions were passed on 4 April 2019 to significant media attention as part of consideration of a private member's bill concerning the United Kingdom's withdrawal from the European Union.
Once all speeches on a motion have concluded, or Closure has been invoked, the motion may be put to a vote. The House first votes by voice vote; the Lord Speaker or Deputy Speaker puts the question, and the Lords respond either "content" (in favour of the motion) or "not content" (against the motion). The presiding officer then announces the result of the voice vote, but if his or her assessment is challenged by any Lord, a recorded vote known as a division follows.
Members of the House enter one of two lobbies (the "content" lobby or the "not-content" lobby) on either side of the Chamber, where their names are recorded by clerks. At each lobby are two Tellers (themselves members of the House) who count the votes of the Lords. The Lord Speaker may not take part in the vote. Once the division concludes, the Tellers provide the results thereof to the presiding officer, who then announces them to the House.
If there is an equality of votes, the motion is decided according to the following principles: legislation may proceed in its present form, unless there is a majority in favour of amending or rejecting it; any other motion is rejected, unless there is a majority in favour of approving it. The quorum of the House of Lords is just three members for a general or procedural vote, and 30 members for a vote on legislation. If fewer than three or 30 members (as appropriate) are present, the division is invalid.
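The quorum and tie-breaking rules above can be sketched as a small decision function. This is an illustrative simplification of the procedure as described here, not a statement of official parliamentary rules; the function name and argument names are invented for the example.

```python
def division_result(contents, not_contents, present, on_legislation):
    """Simplified sketch of a House of Lords division outcome:
    quorum of 30 for votes on legislation (3 otherwise), and a
    tie means there is no majority, so the motion fails."""
    quorum = 30 if on_legislation else 3
    if present < quorum:
        return "invalid"        # too few members present: division is void
    if contents > not_contents:
        return "agreed"
    # On a tie the motion is not carried; for a motion to amend or
    # reject legislation, this means the bill proceeds in its present form.
    return "not agreed"

print(division_result(20, 20, 45, on_legislation=True))   # not agreed (tie)
print(division_result(5, 3, 10, on_legislation=True))     # invalid (below quorum)
```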
By contrast with the House of Commons, the House of Lords has not until recently had an established procedure for imposing sanctions on its members. When a cash for influence scandal was referred to the Committee of Privileges in January 2009, the Leader of the House of Lords also asked the Privileges Committee to report on what sanctions the House had against its members. After seeking advice from the Attorney General for England and Wales and the former Lord Chancellor Lord Mackay of Clashfern, the committee decided that the House "possessed an inherent power" to suspend errant members, although not to withhold a writ of summons nor to expel a member permanently. When the House subsequently suspended Lord Truscott and Lord Taylor of Blackburn for their role in the scandal, they were the first to meet this fate since 1642.
Recent changes have expanded the disciplinary powers of the House. Section 3 of the House of Lords Reform Act 2014 now provides that any member of the House of Lords convicted of a crime and sentenced to imprisonment for more than one year loses their seat. The House of Lords (Expulsion and Suspension) Act 2015 allows the House to set up procedures to suspend, and to expel, its members.
There are two motions which have grown up through custom and practice and which govern questionable conduct within the House. They are brought into play by a member standing up, possibly intervening on another member, and moving the motion without notice. When the debate is getting excessively heated, it is open to a member to move "that the Standing Order on Asperity of Speech be read by the Clerk". The motion can be debated, but if agreed by the House, the Clerk of the Parliaments will read Standing Order 33 which provides "That all personal, sharp, or taxing speeches be forborn". The Journals of the House of Lords record only four instances on which the House has ordered the Standing Order to be read since the procedure was invented in 1871.
For more serious problems with an individual Lord, the option is available to move "That the noble Lord be no longer heard". This motion also is debatable, and the debate which ensues has sometimes offered a chance for the member whose conduct has brought it about to come to order so that the motion can be withdrawn. If the motion is passed, its effect is to prevent the member from continuing their speech on the motion then under debate. The Journals identify eleven occasions on which this motion has been moved since 1884; four were eventually withdrawn, one was voted down, and six were passed.
In 1958, to counter criticism that some peers appeared only for major decisions and thereby swayed particular votes, the Standing Orders of the House of Lords were enhanced. Peers who did not wish to attend sittings regularly, or who were prevented from doing so by ill health, age or other reasons, could now request Leave of Absence. A peer on Leave of Absence is expected not to attend sittings until the leave expires or is terminated, the latter requiring at least one month's notice before the peer's return.
Members of the House of Lords can, since 2010, opt to receive a £300 per day attendance allowance (increased in 2017 to £310), plus limited travel expenses. Peers can elect to receive a reduced attendance allowance of £150 per day instead. Prior to 2010 peers from outside London could claim an overnight allowance of £174.
Unlike in the House of Commons, when the term committee is used to describe a stage of a bill, this committee does not take the form of a public bill committee, but of a Committee of the Whole House. It is made up of all Members of the House of Lords, allowing any Member to contribute to debates if he or she chooses to do so, and allows for more flexible rules of procedure. It is presided over by the Chairman of Committees.
The term committee is also used to describe Grand Committee, where the same rules of procedure apply as in the main chamber, except that no divisions may take place. For this reason, business that is discussed in Grand Committee is usually uncontroversial and likely to be agreed unanimously.
Public bills may also be committed to pre-legislative committees. A pre-legislative Committee is specifically constituted for a particular bill. These committees are established in advance of the bill being laid before either the House of Lords or the House of Commons and can take evidence from the public. Such committees are rare and do not replace any of the usual stages of a bill, including committee stage.
The House of Lords also has 15 Select committees. Typically, these are "sessional committees", meaning that their members are appointed by the House at the beginning of each session, and continue to serve until the next parliamentary session begins. In practice, these are often permanent committees, which are re-established during every session. These committees are typically empowered to make reports to the House "from time to time", that is, whenever they wish. Other committees are "ad-hoc committees", which are set up to investigate a specific issue. When they are set up by a motion in the House, the motion will set a deadline by which the Committee must report. After this date, the Committee will cease to exist unless it is granted an extension. One example of this is the Committee on Public Service and Demographic Change. The House of Lords may appoint a chairman for a committee; if it does not do so, the Chairman of Committees or a Deputy Chairman of Committees may preside instead. Most of the Select Committees are also granted the power to co-opt members, such as the European Union Committee. The primary function of Select Committees is to scrutinise and investigate Government activities; to fulfil these aims, they are permitted to hold hearings and collect evidence. Bills may be referred to Select Committees, but are more often sent to the Committee of the Whole House and Grand Committees.
The committee system of the House of Lords also includes several Domestic Committees, which supervise or consider the House's procedures and administration. One of the Domestic Committees is the Committee of Selection, which is responsible for assigning members to many of the House's other committees.
There are currently sitting members of the House of Lords. As of June 2019, 661 are life peers. An additional Lords are ineligible to participate, including eight peers who are constitutionally disqualified as members of the Judiciary.
The House of Lords Act 1999 allocated 75 of the 92 hereditary peers to the parties based on the proportion of hereditary peers that belonged to that party in 1999:
Of the initial 42 hereditary peers elected as Conservatives, one, Lord Willoughby de Broke, defected to UKIP, though he left the party in 2018.
Fifteen hereditary peers are elected by the whole House, and the remaining hereditary peers are the two royal office-holders, the Earl Marshal and the Lord Great Chamberlain, both of whom are currently on leave of absence.
A report in 2007 stated that many members of the Lords (particularly the life peers) do not attend regularly; the average daily attendance was around 408.
While the number of hereditary peers is limited to 92, and that of Lords spiritual to 26, there is no maximum limit to the number of life peers who may be members of the House of Lords at any time.
Special arrangements were made during the Coronavirus pandemic in 2020 to allow some duties to be performed online.
Baroness Vere of Norbiton – Parliamentary Under-Secretary of State for Transport
Homeomorphism
In the mathematical field of topology, a homeomorphism, topological isomorphism, or bicontinuous function is a continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same. The word "homeomorphism" comes from the Greek words "ὅμοιος" ("homoios") = similar or same and "μορφή" ("morphē") = shape, form, introduced to mathematics by Henri Poincaré in 1895.
Very roughly speaking, a topological space is a geometric object, and the homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. However, this description can be misleading. Some continuous deformations are not homeomorphisms, such as the deformation of a line into a point. Some homeomorphisms are not continuous deformations, such as the homeomorphism between a trefoil knot and a circle.
An often-repeated mathematical joke is that topologists can't tell the difference between a coffee cup and a donut, since a sufficiently pliable donut could be reshaped to the form of a coffee cup by creating a dimple and progressively enlarging it, while preserving the donut hole in the cup's handle.
A function f : X → Y between two topological spaces is a homeomorphism if it has the following properties: f is a bijection (one-to-one and onto), f is continuous, and the inverse function f⁻¹ is continuous.
A homeomorphism is sometimes called a bicontinuous function. If such a function exists, X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space onto itself. "Being homeomorphic" is an equivalence relation on topological spaces. Its equivalence classes are called homeomorphism classes.
The third requirement, that f⁻¹ be continuous, is essential. Consider for instance the function f : [0, 2π) → S¹ (the unit circle in ℝ²) defined by f(φ) = (cos φ, sin φ). This function is bijective and continuous, but not a homeomorphism (S¹ is compact but [0, 2π) is not). The function f⁻¹ is not continuous at the point (1, 0), because although f⁻¹ maps (1, 0) to 0, any neighbourhood of this point also includes points that the function maps close to 2π, but the points it maps to numbers in between lie outside the neighbourhood.
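The failure of continuity of the inverse can be checked numerically: two points on either side of (1, 0) on the circle are very close together, yet their preimages in [0, 2π) sit at opposite ends of the interval. A small sketch (the function names `f` and `f_inv` are chosen here for illustration):

```python
import math

def f(phi):
    """Map the half-open interval [0, 2*pi) onto the unit circle."""
    return (math.cos(phi), math.sin(phi))

def f_inv(point):
    """Inverse map: recover the angle in [0, 2*pi) from a circle point."""
    angle = math.atan2(point[1], point[0])   # atan2 returns values in (-pi, pi]
    return angle % (2 * math.pi)             # shift into [0, 2*pi)

eps = 1e-6
p_plus = f(eps)                   # a point just "after" (1, 0) on the circle
p_minus = f(2 * math.pi - eps)    # a point just "before" (1, 0) on the circle

close_on_circle = math.dist(p_plus, p_minus)            # about 2e-6
far_in_interval = abs(f_inv(p_plus) - f_inv(p_minus))   # about 2*pi

print(close_on_circle < 1e-5)   # True: the circle points are nearly identical
print(far_in_interval > 6.0)    # True: their preimages are nearly 2*pi apart
```

Nearby outputs of f can thus have wildly separated inputs, which is exactly what continuity of f⁻¹ forbids.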
Homeomorphisms are the isomorphisms in the category of topological spaces. As such, the composition of two homeomorphisms is again a homeomorphism, and the set of all self-homeomorphisms X → X forms a group, called the homeomorphism group of "X", often denoted Homeo(X). This group can be given a topology, such as the compact-open topology, which under certain assumptions makes it a topological group.
For some purposes, the homeomorphism group happens to be too big, but by means of the isotopy relation, one can reduce this group to the mapping class group.
Similarly, as usual in category theory, given two spaces that are homeomorphic, the space of homeomorphisms between them, Homeo(X, Y), is a torsor for the homeomorphism groups Homeo(X) and Homeo(Y), and, given a specific homeomorphism between X and Y, all three sets are identified.
The intuitive criterion of stretching, bending, cutting and gluing back together takes a certain amount of practice to apply correctly—it may not be obvious from the description above that deforming a line segment to a point is impermissible, for instance. It is thus important to realize that it is the formal definition given above that counts. In this case, for example, the line segment possesses infinitely many points, and therefore cannot be put into a bijection with a set containing only a finite number of points, including a single point.
This characterization of a homeomorphism often leads to a confusion with the concept of homotopy, which is actually "defined" as a continuous deformation, but from one "function" to another, rather than one space to another. In the case of a homeomorphism, envisioning a continuous deformation is a mental tool for keeping track of which points on space "X" correspond to which points on "Y"—one just follows them as "X" deforms. In the case of homotopy, the continuous deformation from one map to the other is of the essence, and it is also less restrictive, since none of the maps involved need to be one-to-one or onto. Homotopy does lead to a relation on spaces: homotopy equivalence.
There is a name for the kind of deformation involved in visualizing a homeomorphism. It is (except when cutting and regluing are required) an isotopy between the identity map on "X" and the homeomorphism from "X" to "Y".
Hausdorff maximal principle
In mathematics, the Hausdorff maximal principle is an alternate and earlier formulation of Zorn's lemma proved by Felix Hausdorff in 1914 (Moore 1982:168). It states that in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset.
The Hausdorff maximal principle is one of many statements equivalent to the axiom of choice over ZF (Zermelo–Fraenkel set theory without the axiom of choice). The principle is also called the Hausdorff maximality theorem or the Kuratowski lemma (Kelley 1955:33).
The Hausdorff maximal principle states that, in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset. Here a maximal totally ordered subset is one that, if enlarged in any way, does not remain totally ordered. The maximal set produced by the principle is not unique, in general; there may be many maximal totally ordered subsets containing a given totally ordered subset.
An equivalent form of the principle is that in every partially ordered set there exists a maximal totally ordered subset.
To prove that it follows from the original form, let "A" be a poset. Then the empty set ∅ is a totally ordered subset of "A", hence there exists a maximal totally ordered subset containing ∅; in particular, "A" contains a maximal totally ordered subset.
For the converse direction, let "A" be a partially ordered set and "T" a totally ordered subset of "A". Then the collection of all totally ordered subsets of "A" containing "T" is partially ordered by set inclusion ⊆, and therefore contains a maximal totally ordered subset "P". The set ⋃P then satisfies the desired properties.
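In a finite poset no choice axiom is needed, and a maximal chain containing a given chain can be built by a direct greedy construction. The sketch below illustrates the principle on the divisibility order; the function name and the example poset are illustrative choices, not part of the original text.

```python
def extend_to_maximal_chain(elements, leq, chain):
    """Extend a totally ordered subset (chain) of a finite poset until no
    further element can be added -- a finite illustration of the
    Hausdorff maximal principle (the infinite case requires choice)."""
    chain = list(chain)
    changed = True
    while changed:
        changed = False
        for x in elements:
            # x may join the chain if it is comparable to every member.
            if x not in chain and all(leq(x, c) or leq(c, x) for c in chain):
                chain.append(x)
                changed = True
    return chain

# Divisibility poset on {1, ..., 12}: a chain is a set of pairwise-dividing numbers.
divides = lambda a, b: b % a == 0
maximal = extend_to_maximal_chain(range(1, 13), divides, [2])
print(sorted(maximal))   # [1, 2, 4, 8] -- a maximal chain containing {2}
```

As the principle promises, the resulting chain is not unique: starting from [3] instead would yield a different maximal chain, such as {1, 3, 6, 12}.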
The proof that the Hausdorff maximal principle is equivalent to Zorn's lemma is very similar to this proof.
EXAMPLE 1. If "A" is any collection of sets, the relation "is a proper subset of" is a strict partial order on "A". Suppose that "A" is the collection of all circular regions (interiors of circles) in the plane. One maximal totally ordered sub-collection of "A" consists of all circular regions with centers at the origin. Another maximal totally ordered sub-collection consists of all circular regions bounded by circles tangent from the right to the y-axis at the origin.
EXAMPLE 2. If (x0, y0) and (x1, y1) are two points of the plane ℝ², define (x0, y0) < (x1, y1) if y0 = y1 and x0 < x1. This is a partial ordering of ℝ² under which two points are comparable only if they lie on the same horizontal line. The maximal totally ordered sets are horizontal lines in ℝ².
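Example 2's order can be written out directly; the helper names `less` and `comparable` are chosen here for illustration. Two points are comparable exactly when they lie on the same horizontal line, so any chain is confined to one such line:

```python
def less(p, q):
    """Example 2's partial order on the plane: (x0, y0) < (x1, y1)
    iff the points share a y-coordinate and x0 < x1."""
    (x0, y0), (x1, y1) = p, q
    return y0 == y1 and x0 < x1

def comparable(p, q):
    return p == q or less(p, q) or less(q, p)

print(comparable((1, 5), (3, 5)))   # True: same horizontal line
print(comparable((1, 5), (1, 7)))   # False: different horizontal lines
```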
Hel (being)
Hel is a legendary being in Norse mythology who is said to preside over a realm of the same name, where she receives a portion of the dead. Hel is attested in the "Poetic Edda", compiled in the 13th century from earlier traditional sources, and the "Prose Edda", written in the 13th century by Snorri Sturluson. In addition, she is mentioned in poems recorded in "Heimskringla" and "Egils saga" that date from the 9th and 10th centuries, respectively. An episode in the Latin work "Gesta Danorum", written in the 12th century by Saxo Grammaticus, is generally considered to refer to Hel, and Hel may appear on various Migration Period bracteates.
In the "Poetic Edda", "Prose Edda", and "Heimskringla", Hel is referred to as a daughter of Loki. In the "Prose Edda" book "Gylfaginning", Hel is described as having been appointed by the god Odin as ruler of a realm of the same name, located in Niflheim. In the same source, her appearance is described as half blue and half flesh-coloured and further as having a gloomy, downcast appearance. The "Prose Edda" details that Hel rules over vast mansions with many servants in her underworld realm and plays a key role in the attempted resurrection of the god Baldr.
Scholarly theories have been proposed about Hel's potential connections to figures appearing in the 11th-century "Old English Gospel of Nicodemus" and Old Norse "Bartholomeus saga postola", that she may have been considered a goddess with potential Indo-European parallels in Bhavani, Kali, and Mahakali or that Hel may have become a being only as a late personification of the location of the same name.
The Old Norse divine name "Hel" is identical to the name of the location over which she rules. It stems from the Proto-Germanic feminine noun "*haljō-" ('concealed place, the underworld'; compare with Gothic "halja", Old English "hel", Old Frisian "helle", Old Saxon "hellia", Old High German "hella"), itself a derivative of "*helan-" ('to cover > conceal, hide'; compare with OE "helan", OF "hela", OS "helan", OHG "helan"). It derives, ultimately, from the Proto-Indo-European verbal root "*ḱel-" ('to conceal, cover, protect'; compare with Latin "cēlō", Old Irish "ceilid", Greek "kalúptō"). The Old Irish masculine noun "cel" ('dissolution, extinction, death') is also related.
Other related early Germanic terms and concepts include the compounds "*halja-rūnō(n)" and "*halja-wītjan". The feminine noun "*halja-rūnō(n)" is formed with "*haljō-" ('hell') attached to "*rūno" ('mystery, secret' > runes). It has descendant cognates in the Old English "helle-rúne" ('possessed woman, sorceress, diviner'), the Old High German "helli-rūna" ('magic'), and perhaps in the Latinized Gothic form "haliurunnae", although its second element may derive instead from "rinnan" ('to run, go'), leading to Gothic "*haljurunna" as the 'one who travels to the netherworld'. The neuter noun "*halja-wītjan" is composed of the same root "*haljō-" attached to "*wītjan" (compare with Gothic "un-witi" 'foolishness, understanding', OE "witt" 'right mind, wits', OHG "wizzi" 'understanding'), with descendant cognates in Old Norse "hel-víti" ('hell'), Old English "helle-wíte" ('hell-torment, hell'), Old Saxon "helli-wīti" ('hell'), or Middle High German "helle-wīzi" ('hell').
"Hel" is also etymologically related, although more distantly, to the Old Norse word "Valhöll" (Valhalla, 'hall of the slain') and to the English word "hall", both likewise deriving from Proto-Indo-European "*ḱel-" via the Proto-Germanic root *"hallō-" ('covered place, hall').
The "Poetic Edda", compiled in the 13th century from earlier traditional sources, features various poems that mention Hel. In the "Poetic Edda" poem "Völuspá", Hel's realm is referred to as the "Halls of Hel." In stanza 31 of "Grímnismál", Hel is listed as living beneath one of three roots growing from the world tree Yggdrasil. In "Fáfnismál", the hero Sigurd stands before the mortally wounded body of the dragon Fáfnir, and states that Fáfnir lies in pieces, where "Hel can take" him. In "Atlamál", the phrases "Hel has half of us" and "sent off to Hel" are used in reference to death, though it could be a reference to the location and not the being, if not both. In stanza 4 of "Baldrs draumar", Odin rides towards the "high hall of Hel."
Hel may also be alluded to in "Hamðismál". Death is periphrased as "joy of the troll-woman" (or "ogress") and ostensibly it is Hel being referred to as the troll-woman or the ogre ("flagð"), although it may otherwise be some unspecified "dís".
Hel is referred to in the "Prose Edda", written in the 13th century by Snorri Sturluson. In chapter 34 of the book "Gylfaginning", Hel is listed by High as one of the three children of Loki and Angrboða: the wolf Fenrir, the serpent Jörmungandr, and Hel. High continues that, once the gods found that these three children were being brought up in the land of Jötunheimr, and had "traced prophecies that from these siblings great mischief and disaster would arise for them", they expected great trouble from the three children, partly due to the nature of the children's mother, yet worse so due to the nature of their father.
High says that Odin sent the gods to gather the children and bring them to him. Upon their arrival, Odin threw Jörmungandr into "that deep sea that lies round all lands," Odin threw Hel into Niflheim, and bestowed upon her authority over nine worlds, in that she must "administer board and lodging to those sent to her, and that is those who die of sickness or old age." High details that in this realm Hel has "great mansions" with extremely high walls and immense gates, a hall called Éljúðnir, a dish called "Hunger," a knife called "Famine," the servant Ganglati (Old Norse "lazy walker"), the serving-maid Ganglöt (also "lazy walker"), the entrance threshold "Stumbling-block," the bed "Sick-bed," and the curtains "Gleaming-bale." High describes Hel as "half black and half flesh-coloured," adding that this makes her easily recognizable, and furthermore that Hel is "rather downcast and fierce-looking."
In chapter 49, High describes the events surrounding the death of the god Baldr. The goddess Frigg asks who among the Æsir will earn "all her love and favour" by riding to Hel, the location, to try to find Baldr, and offer Hel herself a ransom. The god Hermóðr volunteers and sets off upon the eight-legged horse Sleipnir to Hel. Hermóðr arrives in Hel's hall, finds his brother Baldr there, and stays the night. The next morning, Hermóðr begs Hel to allow Baldr to ride home with him, and tells her about the great weeping the Æsir have done upon Baldr's death. Hel says the love people have for Baldr that Hermóðr has claimed must be tested, stating:
If all things in the world, alive or dead, weep for him, then he will be allowed to return to the Æsir. If anyone speaks against him or refuses to cry, then he will remain with Hel.
Later in the chapter, after the female jötunn Þökk refuses to weep for the dead Baldr, she responds in verse, ending with "let Hel hold what she has." In chapter 51, High describes the events of Ragnarök, and details that when Loki arrives at the field Vígríðr "all of Hel's people" will arrive with him.
In chapter 5 of the "Prose Edda" book "Skáldskaparmál", Hel is mentioned in a kenning for Baldr ("Hel's companion"). In chapter 16, "Hel's [...] relative or father" is given as a kenning for Loki. In chapter 50, Hel is referenced ("to join the company of the quite monstrous wolf's sister") in the skaldic poem "Ragnarsdrápa".
In the "Heimskringla" book "Ynglinga saga", written in the 13th century by Snorri Sturluson, Hel is referred to, though never by name. In chapter 17, the king Dyggvi dies of sickness. A poem from the 9th-century "Ynglingatal" that forms the basis of "Ynglinga saga" is then quoted that describes Hel's taking of Dyggvi:
In chapter 45, a section from "Ynglingatal" is given which refers to Hel as "howes'-warder" (meaning "guardian of the graves") and as taking King Halfdan Hvitbeinn from life. In chapter 46, King Eystein Halfdansson dies by being knocked overboard by a sail yard. A section from "Ynglingatal" follows, describing that Eystein "fared to" Hel (referred to as "Býleistr's-brother's-daughter"). In chapter 47, the deceased Eystein's son King Halfdan dies of an illness, and the excerpt provided in the chapter describes his fate thereafter, a portion of which references Hel:
In a stanza from "Ynglingatal" recorded in chapter 72 of the "Heimskringla" book "Saga of Harald Sigurdsson", "given to Hel" is again used as a phrase referring to death.
The Icelanders' saga "Egils saga" contains the poem "Sonatorrek". The saga attributes the poem to the 10th-century skald Egill Skallagrímsson, and writes that it was composed by Egill after the death of his son Gunnar. The final stanza of the poem contains a mention of Hel, though not by name:
In the account of Baldr's death in Saxo Grammaticus' early 13th century work "Gesta Danorum", the dying Baldr has a dream visitation from Proserpina (here translated as "the goddess of death"):
The following night the goddess of death appeared to him in a dream standing at his side, and declared that in three days time she would clasp him in her arms. It was no idle vision, for after three days the acute pain of his injury brought his end.
Scholars have assumed that Saxo used Proserpina as a goddess equivalent to the Norse Hel.
It has been suggested that several imitation medallions and bracteates of the Migration Period (ca. 4th–6th centuries AD) feature depictions of Hel. In particular, the bracteates IK 14 and IK 124 depict a rider traveling down a slope and coming upon a female being holding a scepter or a staff. The downward slope may indicate that the rider is traveling towards the realm of the dead, and the woman with the scepter may be a female ruler of that realm, corresponding to Hel.
Some B-class bracteates showing three godly figures have been interpreted as depicting Baldr's death; the best known of these is the Fakse bracteate. Two of the figures are understood to be Baldr and Odin, while both Loki and Hel have been proposed as candidates for the third figure. If it is Hel, she is presumably greeting the dying Baldr as he comes to her realm.
The "Old English Gospel of Nicodemus", preserved in two manuscripts from the 11th century, contains a female figure referred to as "Seo hell" who engages in flyting with Satan and tells him to leave her dwelling (Old English "ut of mynre onwununge"). Regarding Seo Hell in the "Old English Gospel of Nicodemus", Michael Bell states that "her vivid personification in a dramatically excellent scene suggests that her gender is more than grammatical, and invites comparison with the Old Norse underworld goddess Hel and the Frau Holle of German folklore, to say nothing of underworld goddesses in other cultures" yet adds that "the possibility that these genders "are" merely grammatical is strengthened by the fact that an Old Norse version of Nicodemus, possibly translated under English influence, personifies Hell in the neutral (Old Norse "þat helvíti")."
The Old Norse "Bartholomeus saga postola", an account of the life of Saint Bartholomew dating from the 13th century, mentions a "Queen Hel." In the story, a devil is hiding within a pagan idol, and, bound by Bartholomew's spiritual powers to acknowledge himself and confess, the devil refers to Jesus as the one who "made war on Hel our queen" (Old Norse "heriaði a Hel drottning vara"). "Queen Hel" is not mentioned elsewhere in the saga.
Michael Bell says that while Hel "might at first appear to be identical with the well-known pagan goddess of the Norse underworld" as described in chapter 34 of "Gylfaginning", "in the combined light of the Old English and Old Norse versions of "Nicodemus" she casts quite a different shadow," and that in "Bartholomeus saga postola" "she is clearly the queen of the Christian, not pagan, underworld."
Jacob Grimm theorized that Hel (whom he refers to here as "Halja", the theorized Proto-Germanic form of the term) is essentially an "image of a greedy, unrestoring, female deity" and that "the higher we are allowed to penetrate into our antiquities, the less hellish and more godlike may "Halja" appear. Of this we have a particularly strong guarantee in her affinity to the Indian Bhavani, who travels about and bathes like Nerthus and Holda, but is likewise called "Kali" or "Mahakali", the great "black" goddess. In the underworld she is supposed to sit in judgment on souls. This office, the similar name and the black hue [...] make her exceedingly like "Halja". And "Halja" is one of the oldest and commonest conceptions of our heathenism."
Grimm theorizes that the Helhest, a three-legged horse that roams the countryside "as a harbinger of plague and pestilence" in Danish folklore, was originally the steed of the goddess Hel, and that on this steed Hel roamed the land "picking up the dead that were her due." In addition, Grimm says that a wagon was once ascribed to Hel, with which Hel made journeys. Grimm says that Hel is an example of a "half-goddess": "one who cannot be shown to be either wife or daughter of a god, and who stands in a dependent relation to higher divinities," and that "half-goddesses" stand higher than "half-gods" in Germanic mythology.
Hilda Ellis Davidson (1948) states that Hel "as a goddess" in surviving sources seems to belong to a genre of literary personification, that the word "hel" is generally "used simply to signify death or the grave," and that the word often appears as the equivalent to the English 'death,' which Davidson states "naturally lends itself to personification by poets." Davidson explains that "whether this personification has originally been based on a belief in a goddess of death called Hel is another question," but that she does not believe that the surviving sources give any reason to believe so. Davidson adds that, on the other hand, various other examples of "certain supernatural women" connected with death are to be found in sources for Norse mythology, that they "seem to have been closely connected with the world of death, and were pictured as welcoming dead warriors," and that the depiction of Hel "as a goddess" in "Gylfaginning" "might well owe something to these."
In a later work (1998), Davidson states that the description of Hel found in chapter 33 of "Gylfaginning" "hardly suggests a goddess." Davidson adds that "yet this is not the impression given in the account of Hermod's ride to Hel later in "Gylfaginning" (49)" and points out that here Hel "[speaks] with authority as ruler of the underworld" and that from her realm "gifts are sent back to Frigg and Fulla by Balder's wife Nanna as from a friendly kingdom." Davidson posits that Snorri may have "earlier turned the goddess of death into an allegorical figure, just as he made Hel, the underworld of shades, a place 'where wicked men go,' like the Christian Hell ("Gylfaginning" 3)." Davidson continues that:
On the other hand, a goddess of death who represents the horrors of slaughter and decay is something well known elsewhere; the figure of Kali in India is an outstanding example. Like Snorri's Hel, she is terrifying in appearance, black or dark in colour, usually naked, adorned with severed heads or arms or the corpses of children, her lips smeared with blood. She haunts the battlefield or cremation ground and squats on corpses. Yet for all this she is "the recipient of ardent devotion from countless devotees who approach her as their mother" [...].
Davidson further compares Hel to early attestations of the Irish goddesses Badb (Davidson points to the description of Badb from "The Destruction of Da Choca's Hostel", where Badb is wearing a dusky mantle, has a large mouth, is dark in color, and has gray hair falling over her shoulders, or, alternatively, appears "as a red figure on the edge of the ford, washing the chariot of a king doomed to die") and the Morrígan. Davidson concludes that, in these examples, "here we have the fierce destructive side of death, with a strong emphasis on its physical horrors, so perhaps we should not assume that the gruesome figure of Hel is wholly Snorri's literary invention."
John Lindow states that most details about Hel, as a figure, are not found outside of Snorri's writing in "Gylfaginning", and says that when older skaldic poetry "says that people are 'in' rather than 'with' Hel, we are clearly dealing with a place rather than a person, and this is assumed to be the older conception," that the noun and place "Hel" likely originally simply meant "grave," and that "the personification came later." He also draws a parallel between the personified Hel's banishment to the underworld and the binding of Fenrir as part of a recurring theme of the bound monster, where an enemy of the gods is bound but destined to break free at Ragnarok. Rudolf Simek theorizes that the figure of Hel is "probably a very late personification of the underworld Hel," and says that "the first scriptures using the goddess Hel are found at the end of the 10th and in the 11th centuries." Simek states that the allegorical description of Hel's house in "Gylfaginning" "clearly stands in the Christian tradition," and that "on the whole nothing speaks in favour of there being a belief in Hel in pre-Christian times." However, Simek also cites Hel as possibly appearing as one of three figures appearing together on Migration Period B-bracteates.
In January 2017, the Icelandic Naming Committee ruled that parents could not name their child "Hel" "on the grounds that the name would cause the child significant distress and trouble as it grows up".
Hans-Dietrich Genscher
Hans-Dietrich Genscher (21 March 1927 – 31 March 2016) was a German statesman and a member of the liberal Free Democratic Party (FDP), who served as the Federal Minister of the Interior from 1969 to 1974, and as the Federal Minister of Foreign Affairs and Vice Chancellor of Germany from 1974 to 1992 (except for a two-week break in 1982, after the FDP had left the Third Schmidt cabinet), making him the longest-serving occupant of either post and the only person to have held one of these posts under two different Chancellors of the Federal Republic of Germany. In 1991 he was chairman of the Organization for Security and Co-operation in Europe (OSCE).
A proponent of Realpolitik, Genscher has been called "a master of diplomacy." He is widely regarded as having been a principal "architect of German reunification." In 1991, he played a pivotal role in international diplomacy surrounding the breakup of Yugoslavia by successfully pushing for international recognition of Croatia, Slovenia and other republics declaring independence, in an effort to halt "a trend towards a Greater Serbia." After leaving office, he worked as a lawyer and international consultant. He was President of the German Council on Foreign Relations and was involved with several international organisations, and with former Czech President Václav Havel, he called for a Cold War museum to be built in Berlin.
Genscher was born on 21 March 1927 in Reideburg (Province of Saxony), now a part of Halle, in what later became East Germany. He was the son of Hilda Kreime and Kurt Genscher. His father, a lawyer, died when Genscher was nine years old. In 1943, he was drafted to serve as a member of the Air Force Support Personnel ("Luftwaffenhelfer") at the age of 16. At age 17, close to the end of the war, he and his fellow soldiers became members of the Nazi Party due to a collective application ("Sammelantrag") by his Wehrmacht unit. He later said he was unaware of it at the time.
Late in the war, Genscher was deployed as a soldier in General Walther Wenck's 12th Army, which ostensibly was directed to relieve the siege of Berlin. After the German surrender he was an American and British prisoner of war, but was released after two months. Following World War II, he studied law and economics at the universities of Halle and Leipzig (1946–1949) and joined the East German Liberal Democratic Party (LDPD) in 1946.
In 1952, Genscher fled to West Germany, where he joined the Free Democratic Party (FDP). He passed his second state examination in law in Hamburg in 1954 and became a solicitor in Bremen. During these early years after the war, Genscher continuously struggled with illness. From 1956 to 1959 he was a research assistant of the FDP parliamentary group in Bonn. From 1959 to 1965 he was the FDP group managing director, while from 1962 to 1964 he was National Secretary of the FDP.
In 1965 Genscher was elected on the North Rhine-Westphalian FDP list to the West German parliament and remained a member of parliament until his retirement in 1998. He was elected deputy national chairman in 1968. From 1969 he served as minister of the interior in the SPD-FDP coalition government led by Chancellor Willy Brandt.
In 1974 he became foreign minister and vice chancellor, both posts he would hold for 18 years. From 1 October 1974 to 23 February 1985 he was Chairman of the FDP. It was during his tenure as party chairman that the FDP switched from being the junior member of social-liberal coalition to being the junior member of the 1982 coalition with the CDU/CSU. In 1985 he gave up the post of national chairman. After his resignation as Foreign Minister, Genscher was appointed honorary chairman of the FDP in 1992.
After the federal election of 1969 Genscher was instrumental in the formation of the social-liberal coalition of chancellor Willy Brandt and was on 22 October 1969 appointed as federal minister of the interior.
In 1972, while minister for the interior, Genscher rejected Israel's offer to send an Israeli special forces unit to Germany to deal with the Munich Olympics hostage crisis. A flawed rescue attempt by German police forces at Fürstenfeldbruck air base resulted in a bloody shootout, which left all eleven hostages, five terrorists, and one German policeman dead. Genscher's popularity with Israel declined further when he endorsed the release of the three captured attackers following the hijacking of a Lufthansa aircraft on 29 October 1972.
In the SPD–FDP coalition, Genscher helped shape Brandt's policy of deescalation with the communist East, commonly known as "Ostpolitik", which was continued under chancellor Helmut Schmidt after Brandt's resignation in 1974. He would later be a driving factor in continuing this policy in the new conservative-liberal coalition under Helmut Kohl.
In the negotiations on a coalition government of SPD and FDP following the 1976 elections, it took Genscher 73 days to reach agreement with Chancellor Helmut Schmidt.
As Foreign Minister, Genscher stood for a policy of compromise between East and West, and developed strategies for an active policy of détente and the continuation of the East-West dialogue with the USSR. He was widely regarded a strong advocate of negotiated settlements to international problems. As a popular story on Genscher's preferred method of shuttle diplomacy has it, "two Lufthansa jets crossed over the Atlantic, and Genscher was on both."
Genscher was a major player in the negotiations on the text of the Helsinki Accords. In December 1976, the General Assembly of the United Nations in New York City accepted Genscher's proposal of an anti-terrorism convention, which committed states, among other things, not to give in to demands from hostage-takers under any circumstances.
Genscher was one of the FDP's driving forces when, in 1982, the party switched sides from its coalition with the SPD to support the CDU/CSU in their Constructive vote of no confidence to have incumbent Helmut Schmidt replaced with opposition leader Helmut Kohl as Chancellor. The reason for this was the increase in the differences between the coalition partners, particularly in economic and social policy. The switch was controversial, not least in his own party.
At several points in his tenure, he irritated the governments of the United States and other allies of Germany by appearing not to support Western initiatives fully. "During the Cold War, his penchant to seek the middle ground at times exasperated United States policy-makers who wanted a more decisive, less equivocal Germany," according to Tyler Marshall. Genscher's perceived quasi-neutralism was dubbed "Genscherism". "Fundamental to "Genscherism" was said to be the belief that Germany could play a role as a bridge between East and West without losing its status as a reliable NATO ally." In the 1980s, Genscher opposed the deployment of new short-range NATO missiles in Germany. At the time, the Reagan Administration questioned whether Germany was straying from the Western alliance and following a program of its own.
In 1984, Genscher became the first Western foreign minister to visit Tehran since the Iranian Revolution of 1979. In 1988, he appointed Jürgen Hellner as West Germany's new ambassador to Libya, a post that had been vacant since the 1986 Berlin discotheque bombing, a tragedy which U.S. officials blamed on the government of Muammar Gaddafi.
Genscher's proposals frequently set the tone and direction of foreign affairs among Western Europe's democracies. He was also an active participant in the further development of the European Union, taking an active part in the Single European Act Treaty negotiations in the mid-1980s, as well as the joint publication of the Genscher-Colombo plan with Italian Minister of Foreign Affairs Emilio Colombo which advocated further integration and deepening of relations in the European Union towards a more federal Europe. He later was among the politicians who pushed hard for monetary union alongside Edouard Balladur, France's finance minister, and Giuliano Amato, circulating a memorandum to that effect.
Genscher retained his posts as foreign minister and vice chancellor through German reunification and until 1992 when he stepped down for health reasons.
Genscher is most respected for his efforts that helped spell the end of the Cold War, in the late 1980s when Communist eastern European governments toppled, and which led to German reunification. During his time in office, he focused on maintaining stability and balance between the West and the Soviet bloc. From the beginning, he argued that the West should seek cooperation with Communist governments rather than treat them as implacably hostile; this policy was embraced by many Germans and other Europeans.
Genscher had great interest in European integration and the success of German reunification. He soon pushed for effective support of political reform processes in Poland and Hungary. For this purpose, he visited Poland to meet the chairman of Solidarity Lech Wałęsa as early as January 1980. Especially from 1987 he campaigned for an "active relaxation" policy response by the West to the Soviet efforts. In the years before German reunification, he made a point of maintaining strong ties with his birthplace Halle, which was regarded as significant by admirers and critics alike.
When thousands of East Germans sought refuge in West German embassies in Czechoslovakia and Poland, Genscher held discussions on the refugee crisis at the United Nations in New York with the foreign ministers of Czechoslovakia, Poland, East Germany and the Soviet Union in September 1989. Genscher's 30 September 1989 speech from the balcony of the German embassy in Prague was an important milestone on the road to the end of the GDR. In the embassy courtyard thousands of East German citizens had assembled. They were trying to travel to West Germany, but were being denied permission to travel by the Czechoslovak government at the request of East Germany. He announced that he had reached an agreement with the Communist Czechoslovak government that the refugees could leave: "We have come to you to tell you that today, your departure ..." (German: "Wir sind zu Ihnen gekommen, um Ihnen mitzuteilen, dass heute Ihre Ausreise ..."). After these words, the speech was drowned in cheers.
With his fellow foreign ministers James Baker of the United States and Eduard Shevardnadze of the Soviet Union, Genscher is widely credited with securing Germany's subsequent peaceful unification and the withdrawal of Soviet forces. He negotiated the German reunification in 1990 with his counterpart from the GDR, Markus Meckel. On 12 September 1990 he signed the Treaty on the Final Settlement with Respect to Germany on behalf of West Germany. In November 1990, Genscher and his Polish counterpart Krzysztof Skubiszewski signed the German-Polish Border Treaty on the establishment of the Oder–Neisse line as Poland's western border. Meanwhile, he strongly endorsed the plans of the Bush Administration to assure continued U.S. influence in a post-Cold War Europe.
In 1991, Genscher successfully pushed for Germany's recognition of the Republic of Croatia in the Croatian War of Independence shortly after the JNA entered Vukovar. After Croatia and Slovenia had declared independence, Genscher concluded that Yugoslavia could not be held together, and that republics that wanted to break from the Serbian-dominated federation deserved quick diplomatic recognition. He hoped that such recognition would stop the fighting. The rest of the European Union was subsequently pressured to follow suit soon afterward. The UN Secretary-General Javier Pérez de Cuéllar had warned the German government that recognition of Slovenia and Croatia would lead to an increase in aggression in the former Yugoslavia.
At a meeting of the European Community's foreign ministers in 1991, Genscher proposed to press for a war crimes trial for President Saddam Hussein of Iraq, accusing him of aggression against Kuwait, using chemical weapons against civilians and condoning genocide against the Kurds.
During the Gulf War, Genscher sought to deal with Iraq after other Western leaders had decided to go to war to force it out of Kuwait. Germany made a substantial financial contribution to the allied cause but, citing constitutional restrictions on the use of its armed forces, provided almost no military assistance. In January 1991, Germany sent Genscher on a state visit to Israel and followed up with an agreement to provide the Jewish state with $670 million in military aid, including financing for two submarines long coveted by Israel, a battery of Patriot missiles to defend against Iraqi missiles, 58 armored vehicles specially fitted to detect chemical and biological attacks, and a shipment of gas masks. When, in the aftermath of the war, a far-reaching political debate broke out over how Germany should fulfill its global responsibilities, Genscher responded that if foreign powers expect Germany to assume greater responsibility in the world, they should give it a chance to express its views "more strongly" in the United Nations Security Council. He also famously held that "whatever floats is fine, whatever rolls is not" to sum up Germany's military export policy for restless countries – based on a navy's unsuitability for use against a country's own people.
In 1992, Genscher, together with his Danish colleague Uffe Ellemann-Jensen, took the initiative to create the Council of the Baltic Sea States (CBSS) and the EuroFaculty.
More than half a century after Nazi leaders assembled their infamous exhibition "Degenerate Art," a sweeping condemnation of the work of the avant-garde, Genscher opened a re-creation of the show at the Altes Museum in March 1992, describing Nazi attempts to restrict artistic expression as "a step toward the catastrophe that produced the mass murder of European Jews and the war of extermination against Germany's neighbors." "The paintings in this exhibition have survived oppression and censorship," he asserted in his opening remarks. "They are not only a monument but also a sign of hope. They stand for the triumph of creative freedom over barbarism."
On 18 May 1992 Genscher retired at his own request from the federal government, of which he had been a member for a total of 23 years. At the time, he was the world's longest-serving foreign minister and Germany's most popular politician. He had announced his decision three weeks earlier, on 27 April 1992. Genscher did not specify his reasons for quitting; however, he had suffered two heart attacks by that time. His resignation took effect in May, but he remained a member of parliament and continued to be influential in the Free Democratic Party.
Following Genscher's resignation, Chancellor Helmut Kohl and FDP chairman Otto Graf Lambsdorff named Irmgard Schwaetzer, a former aide to Genscher, to be the new Foreign Minister. In a surprise decision, however, a majority of the FDP parliamentary group rejected her nomination and voted instead to name Justice Minister Klaus Kinkel to head the Foreign Ministry.
Ahead of the German presidential election in 1994, Genscher proclaimed his lack of interest in the position, but was nonetheless widely considered a leading contender. After a poll taken for "Stern" magazine showed him to be the favored candidate of 48 percent of German voters, he reiterated in 1993 that he would "in no case" accept the presidency.
Having finished his political career, Genscher remained active as a lawyer and in international organizations. In late 1992, Genscher was appointed chairman of a newly established donors' board of the Berlin State Opera. Between 1997 and 2010, Genscher was affiliated with the law firm Büsing, Müffelmann & Theye. He founded his own consulting firm, Hans-Dietrich Genscher Consult GmbH, in 2000. Between 2001 and 2003, he served as president of the German Council on Foreign Relations. In 2001, Genscher headed an arbitration that ended a monthlong battle between German airline Lufthansa and its pilots' union and resulted in an agreement on increasing wages by more than 15 percent by the end of the following year.
In 2008, Genscher joined former Czech President Václav Havel, former United States Ambassador to Germany John Kornblum and several other well-known political figures in calling for a Cold War museum to be built at Checkpoint Charlie in Berlin. In 2009 Genscher expressed public concern at Pope Benedict XVI's lifting of the excommunication of the bishops of the Society of Saint Pius X. Genscher wrote in the "Mitteldeutsche Zeitung": "Poles can be proud of Pope John Paul II. At the last papal election, we said 'We are the pope!' But please—not like this." He argued that Pope Benedict XVI was making a habit of offending non-Catholics. "This is a deep moral and political question. It is about respect for the victims of crimes against humanity", Genscher said.
On 20 December 2013, it was revealed that Genscher played a key role in coordinating the release and flight to Germany of Mikhail Khodorkovsky, the former head of Yukos. Genscher had first met Khodorkovsky in 2002 and had chaired a conference at which Khodorkovsky blasted Russian President Vladimir Putin's pursuit of his oil company. Khodorkovsky asked his lawyers during a 2011 prison visit to let Genscher help mediate early release. Once Putin was re-elected in 2012, German Chancellor Angela Merkel instructed her officials to lobby for the president to meet Genscher. The subsequent negotiations involved two meetings between Genscher and Putin — one at Berlin Tegel Airport at the end of Putin's first visit to Germany after he was re-elected in 2012, the other in Moscow. While keeping the chancellor informed, Khodorkovsky's attorneys and Genscher spent the ensuing months developing a variety of legal avenues that could allow Putin to release his former rival early, ranging from amendments to existing laws to clemency. When Khodorkovsky's mother was in a Berlin hospital with cancer in November 2013, Genscher passed a message to Khodorkovsky suggesting the prisoner should write a pardon letter to Putin emphasizing his mother's ill health. Following Putin's pardoning of Khodorkovsky "for humanitarian reasons" in December 2013, a private plane provided by Genscher brought Khodorkovsky to Berlin for a family reunion at the Hotel Adlon.
Genscher signed on in 2014 to be a member of the Southern Corridor Advisory Panel, a BP-led consortium which includes former British Prime Minister Tony Blair and Peter Sutherland, chairman of Goldman Sachs International. The panel's purpose is to facilitate the expansion of a vast natural-gas field in the Caspian Sea and the building of two pipelines across Europe. The $45 billion enterprise, championed by the Azerbaijani president, Ilham Aliyev, has been called by critics "the Blair Rich Project."
Genscher died of heart failure at his home in Wachtberg, outside Bonn, on 31 March 2016, ten days after his 89th birthday.
Genscher was awarded honorary citizenship by his birthplace, Halle (Saale), in 1991 and by the city of Berlin in 1993. | https://en.wikipedia.org/wiki?curid=13669 |
Hindus
Hindus are persons who regard themselves as culturally, ethnically, or religiously adhering to aspects of Hinduism. Historically, the term has also been used as a geographical, cultural, and later religious identifier for people living in the Indian subcontinent.
The historical meaning of the term "Hindu" has evolved with time. Starting with the Persian and Greek references to the land of the Indus in the 1st millennium BCE through the texts of the medieval era, the term Hindu implied a geographic, ethnic or cultural identifier for people living in the Indian subcontinent around or beyond the Sindhu (Indus) river. By the 16th century, the term began to refer to residents of the subcontinent who were not Turkic or Muslims. Hindoo is an archaic spelling variant, whose use today may be considered derogatory.
The historical development of Hindu self-identity within the local South Asian population, in a religious or cultural sense, is unclear. Competing theories state that Hindu identity developed in the British colonial era, or that it may have developed after the 8th century CE, following the Islamic invasion and the medieval Hindu-Muslim wars. A sense of Hindu identity and the term "Hindu" appear in some texts dated between the 13th and 18th century in Sanskrit and Bengali. Indian poets of the 14th to 18th centuries, such as Vidyapati, Kabir and Eknath, used the phrase "Hindu dharma" (Hinduism) and contrasted it with "Turaka dharma" (Islam). The Christian friar Sebastiao Manrique used the term 'Hindu' in a religious context in 1649. In the 18th century, European merchants and colonists began to refer to the followers of Indian religions collectively as "Hindus", in contrast to "Mohamedans" for Mughals and Arabs following Islam. By the mid-19th century, colonial orientalist texts further distinguished Hindus from Buddhists, Sikhs and Jains, but the colonial laws continued to consider all of them to be within the scope of the term "Hindu" until about the mid-20th century. Scholars state that the custom of distinguishing between Hindus, Buddhists, Jains and Sikhs is a modern phenomenon.
At more than 1.2 billion, Hindus are the world's third-largest religious group, after Christians and Muslims. The vast majority of Hindus, approximately 966 million, live in India, according to India's 2011 census. After India, the nine countries with the largest Hindu populations are, in decreasing order: Nepal, Bangladesh, Indonesia, Pakistan, Sri Lanka, the United States, Malaysia, the United Arab Emirates and the United Kingdom. Together these accounted for 99% of the world's Hindu population, and the remaining nations of the world together had about 6 million Hindus in 2010.
The word "Hindu" is an exonym. It is derived from the Indo-Aryan and Sanskrit word "Sindhu", which means "a large body of water", covering "river, ocean". It was used as the name of the Indus River and also referred to its tributaries. The term "hindu" first occurs, states Gavin Flood, as "a Persian geographical term for the people who lived beyond the river Indus (Sanskrit: "Sindhu")", more specifically in the 6th-century BCE inscription of Darius I. The Punjab region, called Sapta Sindhu in the Vedas, is called "Hapta Hindu" in the Zend Avesta. The 6th-century BCE inscription of Darius I mentions the province of "Hi[n]dush", referring to northwestern India. The people of India were referred to as "Hinduvān" (Hindus), and "hindavī" was used as the adjective for Indian in the 8th-century text "Chachnama". The term 'Hindu' in these ancient records is an ethno-geographical term and did not refer to a religion. The Arabic equivalent "Al-Hind" likewise referred to the country of India.
Among the earliest known records of 'Hindu' with connotations of religion may be the 7th-century CE Chinese text "Record of the Western Regions" by the Buddhist scholar Xuanzang. Xuanzang uses the transliterated term "In-tu", whose "connotation overflows in the religious", according to Arvind Sharma. While Xuanzang suggested that the term refers to the country named after the moon, another Buddhist scholar, I-tsing, contradicted that conclusion, saying that "In-tu" was not a common name for the country.
Al-Biruni's 11th-century text "Tarikh Al-Hind" and the texts of the Delhi Sultanate period use the term 'Hindu' to include all non-Islamic people, such as Buddhists, retaining the ambiguity of being "a region or a religion". The 'Hindu' community occurs as the amorphous 'Other' of the Muslim community in the court chronicles, according to Romila Thapar. Wilfred Cantwell Smith notes that 'Hindu' initially retained its geographical reference: 'Indian', 'indigenous, local', virtually 'native'. Slowly, the Indian groups themselves started using the term, differentiating themselves and their "traditional ways" from those of the invaders.
The text "Prithviraj Raso", by Chanda Baradai, about the 1192 CE defeat of Prithviraj Chauhan at the hands of Muhammad Ghori, is full of references to "Hindus" and "Turks", and at one stage says, "both the religions have drawn their curved swords"; however, the date of this text is unclear, and most scholars consider it to be more recent. In Islamic literature, 'Abd al-Malik Isami's Persian work "Futuhu's-salatin", composed in the Deccan in 1350, uses the word "hindi" to mean Indian in the ethno-geographical sense and the word "hindu" to mean 'Hindu' in the sense of a follower of the Hindu religion. The poet Vidyapati's poem "Kirtilata" contrasts the cultures of Hindus and Turks (Muslims) in a city and concludes: "The Hindus and the Turks live close together; each makes fun of the other's religion ("dhamme")." One of the earliest uses of the word 'Hindu' in a religious context in a European language (Spanish) was in a publication of 1649 by Sebastiao Manrique.
Other prominent mentions of 'Hindu' include the epigraphical inscriptions from the Andhra Pradesh kingdoms that battled the military expansion of Muslim dynasties in the 14th century, where the word 'Hindu' partly implies a religious identity in contrast to the 'Turks' or Islamic religious identity. The term "Hindu" was later used occasionally in some Sanskrit texts, such as the later Rajataranginis of Kashmir (Hinduka, c. 1450) and some 16th- to 18th-century Bengali Gaudiya Vaishnava texts, including the "Chaitanya Charitamrita" and the "Chaitanya Bhagavata". These texts used it to contrast Hindus with Muslims, who are called Yavanas (foreigners) or Mlecchas (barbarians), with the 16th-century "Chaitanya Charitamrita" text and the 17th-century "Bhakta Mala" text using the phrase "Hindu dharma".
One of the earliest but ambiguous uses of the word Hindu is, states Arvind Sharma, in the 'Brahmanabad settlement' which Muhammad ibn Qasim made with non-Muslims after the Arab invasion of the northwestern Sindh region of India in 712 CE. The term 'Hindu' meant people who were non-Muslims, and it included the Buddhists of the region. In his 11th-century text, Al Biruni refers to Hindus as "religious antagonists" to Islam who believe in rebirth, presents them as holding a diversity of beliefs, and seems to oscillate between portraying them as holding centralist and pluralist religious views. In the texts of the Delhi Sultanate era, states Sharma, the term Hindu remains ambiguous as to whether it means the people of a region or a religion, as illustrated by Ibn Battuta's explanation of the name "Hindu Kush" for a mountain range in Afghanistan. It was so called, wrote Ibn Battuta, because many Indian slaves died there of the snow and cold as they were marched across the range. The term "Hindu" there is ambivalent and could mean either the geographical region or the religion.
The term Hindu appears in texts from the Mughal Empire era. It broadly refers to non-Muslims. Pashaura Singh states, "in Persian writings, Sikhs were regarded as Hindu in the sense of non-Muslim Indians". Jahangir, for example, called the Sikh Guru Arjan a Hindu.
During the colonial era, the term Hindu had connotations of native religions of India, that is, religions other than Christianity and Islam. In the early colonial-era Anglo-Hindu laws and the British India court system, the term Hindu referred to people of all Indian religions as well as two non-Indian religions: Judaism and Zoroastrianism. In the 20th century, personal laws were formulated for Hindus, and the term 'Hindu' in these colonial 'Hindu laws' applied to Buddhists, Jains and Sikhs in addition to denominational Hindus.
Beyond the stipulations of British law, colonial orientalists, and particularly the influential Asiatick Researches founded in the 18th century, later called The Asiatic Society, initially identified just two religions in India – Islam and Hinduism. These orientalists included all Indian religions, such as Buddhism, as a subgroup of Hinduism in the 18th century. These texts referred to followers of Islam as "Mohamedans" and all others as "Hindus". By the early 19th century, the texts began dividing Hindus into separate groups for chronology studies of the various beliefs. Among the earliest terms to emerge were "Seeks and their College" (later spelled Sikhs by Charles Wilkins) and "Boudhism" (later spelled Buddhism), and in the 9th volume of the Asiatick Researches report on religions in India, the term "Jainism" received notice.
According to Pennington, the terms Hindu and Hinduism were thus constructed for colonial studies of India. The various sub-divisions and separations into subgroup terms were assumed to be the result of "communal conflict", and Hindu was constructed by these orientalists to imply people who adhered to an "ancient default oppressive religious substratum of India", states Pennington. Followers of other Indian religions so identified were later referred to as Buddhists, Sikhs or Jains and distinguished from Hindus in an antagonistic two-dimensional manner, with Hindus and Hinduism stereotyped as irrational and traditional, and the others as rational reform religions. However, these mid-19th-century reports offered no indication of doctrinal or ritual differences between Hindus and Buddhists, or other newly constructed religious identities. These colonial studies, states Pennington, "puzzled endlessly about the Hindus and intensely scrutinized them, but did not interrogate and avoided reporting the practices and religion of Mughal and Arabs in South Asia", and often relied on Muslim scholars to characterise Hindus.
In the contemporary era, the term Hindu refers to individuals who identify with one or more aspects of Hinduism, whether practising, non-practising, or "laissez-faire". The term does not include those who identify with other Indian religions such as Buddhism, Jainism, Sikhism or various animist tribal religions found in India such as "Sarnaism". The term Hindu, in contemporary parlance, includes people who accept themselves as culturally or ethnically Hindu rather than as adherents of a fixed set of religious beliefs within Hinduism. One need not be religious in the minimal sense, states Julius Lipner, to be accepted as a Hindu by Hindus, or to describe oneself as a Hindu.
Hindus subscribe to a diversity of ideas on spirituality and traditions, but have no ecclesiastical order, no unquestionable religious authorities, no governing body, nor a single founding prophet; Hindus can choose to be polytheistic, pantheistic, monotheistic, monistic, agnostic, atheistic or humanist. Because of the wide range of traditions and ideas covered by the term Hinduism, arriving at a comprehensive definition is difficult. The religion "defies our desire to define and categorize it". A Hindu may, by his or her choice, draw upon ideas of other Indian or non-Indian religious thought as a resource, follow or evolve his or her personal beliefs, and still identify as a Hindu.
In 1995, Chief Justice P. B. Gajendragadkar was quoted in an Indian Supreme Court ruling.
Although Hinduism contains a broad range of philosophies, Hindus share philosophical concepts such as, but not limited to, dharma, karma, kama, artha, moksha and samsara, even if each subscribes to a diversity of views. Hindus also have shared texts, such as the Vedas with the embedded Upanishads, and a common ritual grammar (sanskara, rites of passage) such as the rituals during a wedding, at the birth of a baby, or at cremation. Some Hindus go on pilgrimage to shared sites they consider spiritually significant, practise one or more forms of bhakti or puja, celebrate mythology and epics and major festivals, share love and respect for guru and family, and maintain other cultural traditions. A Hindu could:
In the Constitution of India, the word "Hindu" has been used in some places to denote persons professing any of these religions: Hinduism, Jainism, Buddhism or Sikhism. This, however, has been challenged by the Sikhs and by neo-Buddhists who were formerly Hindus. According to Sheen and Boyle, Jains have not objected to being covered by personal laws termed under 'Hindu', but Indian courts have acknowledged that Jainism is a distinct religion.
The Republic of India is in the peculiar situation that the Supreme Court of India has repeatedly been called upon to define "Hinduism" because the Constitution of India, while it prohibits "discrimination of any citizen" on grounds of religion in article 15, article 30 foresees special rights for "All minorities, whether based on religion or language". As a consequence, religious groups have an interest in being recognised as distinct from the Hindu majority in order to qualify as a "religious minority". Thus, the Supreme Court was forced to consider the question whether Jainism is part of Hinduism in 2005 and 2006.
Starting after the 10th century, and particularly after the 12th-century Islamic invasion, states Sheldon Pollock, the political response fused with the Indic religious culture and doctrines. Temples dedicated to the deity Rama were built from north to south India, and textual records as well as hagiographic inscriptions began comparing the Hindu epic of Ramayana to regional kings and their response to Islamic attacks. The Yadava king of Devagiri named Ramacandra, for example, states Pollock, is described in a 13th-century record as, "How is this Rama to be described ... who freed Varanasi from the "mleccha" (barbarian, Turk Muslim) horde, and built there a golden temple of Sarngadhara". Pollock notes that the Yadava king Ramacandra is described as a devotee of the deity Shiva (Shaivism), yet his political achievements and temple construction sponsorship in Varanasi, far from his kingdom's location in the Deccan region, are described in the historical records in Vaishnava terms of Rama, an avatar of the deity Vishnu. Pollock presents many such examples and suggests an emerging Hindu political identity that was grounded in the Hindu religious text of the Ramayana, one that has continued into modern times, and suggests that this historic process began with the arrival of Islam in India.
Brajadulal Chattopadhyaya has questioned the Pollock theory and presented textual and inscriptional evidence. According to Chattopadhyaya, the Hindu identity and religious response to Islamic invasion and wars developed in different kingdoms, such as the wars between the Islamic Sultanates and the Vijayanagara kingdom (Karnataka), and the Islamic raids on the kingdoms in Tamil Nadu. These wars were described not just using the mythical story of Rama from the Ramayana, states Chattopadhyaya; the medieval records used a wide range of religious symbolism and myths that are now considered part of Hindu literature. This merging of religious with political terminology began with the first Muslim invasion of Sindh in the 8th century CE and intensified from the 13th century onwards. The 14th-century Sanskrit text "Madhuravijayam", a memoir written by Gangadevi, the wife of a Vijayanagara prince, for example, describes the consequences of war in religious terms.
The historiographic writings in the Telugu language from the 13th- and 14th-century Kakatiya dynasty period present a similar contrast between the "alien other (Turk)" and the "self-identity (Hindu)". Chattopadhyaya and other scholars state that the military and political campaigns during the medieval-era wars in the Deccan peninsula of India, and in north India, were no longer merely a quest for sovereignty; they embodied a political and religious animosity against the "otherness of Islam", and this began the historical process of Hindu identity formation.
Andrew Nicholson, in his review of scholarship on the history of Hindu identity, states that the vernacular literature of Bhakti-movement sants from the 15th to the 17th century, such as Kabir, Anantadas, Eknath and Vidyapati, suggests that distinct religious identities between Hindus and Turks (Muslims) had formed during these centuries. The poetry of this period contrasts Hindu and Islamic identities, states Nicholson, and the literature vilifies Muslims, coupled with a "distinct sense of a Hindu religious identity".
Scholars state that Hindu, Buddhist and Jain identities are retrospectively introduced modern constructions. Inscriptional evidence from the 8th century onwards, in regions such as South India, suggests that medieval-era India, at the level of both elite and folk religious practice, likely had a "shared religious culture", and that collective identities were "multiple, layered and fuzzy". Even among Hinduism's denominations such as Shaivism and Vaishnavism, the Hindu identities, states Leslie Orr, lacked "firm definitions and clear boundaries".
Overlaps in Jain-Hindu identities have included Jains worshipping Hindu deities, intermarriages between Jains and Hindus, and medieval era Jain temples featuring Hindu religious icons and sculpture. Beyond India, on Java island of Indonesia, historical records attest to marriages between Hindus and Buddhists, medieval era temple architecture and sculptures that simultaneously incorporate Hindu and Buddhist themes, where Hinduism and Buddhism merged and functioned as "two separate paths within one overall system", according to Ann Kenney and other scholars. Similarly, there is an organic relation of Sikhs to Hindus, states Zaehner, both in religious thought and their communities, and virtually all Sikhs' ancestors were Hindus. Marriages between Sikhs and Hindus, particularly among "Khatris", were frequent. Some Hindu families brought up a son as a Sikh, and some Hindus view Sikhism as a tradition within Hinduism, even though the Sikh faith is a distinct religion.
Julius Lipner states that the custom of distinguishing between Hindus, Buddhists, Jains, and Sikhs is a modern phenomenon, but one that is a convenient abstraction. Distinguishing Indian traditions is a fairly recent practice, states Lipner, and is the result of "not only Western preconceptions about the nature of religion in general and of religion in India in particular, but also with the political awareness that has arisen in India" in its people and a result of Western influence during its colonial history.
Scholars such as Fleming and Eck state that the post-Epic era literature from the 1st millennium CE amply demonstrates that there was a historic concept of the Indian subcontinent as a sacred geography, where the sacredness was a shared set of religious ideas. For example, the twelve "Jyotirlingas" of Shaivism and fifty-one "Shaktipithas" of Shaktism are described in the early medieval era Puranas as pilgrimage sites around a theme. This sacred geography, and Shaiva temples with the same iconography, shared themes, motifs and embedded legends, are found across India, from the Himalayas to the hills of South India, and from the Ellora Caves to Varanasi, by about the middle of the 1st millennium. Shakti temples, dated to a few centuries later, are verifiable across the subcontinent. Varanasi as a sacred pilgrimage site is documented in the "Varanasimahatmya" text embedded inside the "Skanda Purana", and the oldest versions of this text are dated to the 6th to 8th century CE.
The idea of twelve sacred sites in the Shiva Hindu tradition spread across the Indian subcontinent appears not only in the medieval era temples but also in copper plate inscriptions and temple seals discovered in different sites. According to Bhardwaj, non-Hindu texts such as the memoirs of Chinese Buddhist and Persian Muslim travellers attest to the existence and significance of the pilgrimage to sacred geography among Hindus by the late 1st millennium CE.
According to Fleming, those who question whether the terms Hindu and Hinduism are a modern construction in a religious context present their arguments based on some texts that have survived into the modern era, either of Islamic courts or of literature published by Western missionaries or colonial-era Indologists aiming for a reasonable construction of history. However, the existence of non-textual evidence, such as cave temples separated by thousands of kilometres, as well as lists of medieval era pilgrimage sites, is evidence of a shared sacred geography and the existence of a community that was self-aware of shared religious premises and landscape. Further, it is a norm in evolving cultures that there is a gap between the "lived and historical realities" of a religious tradition and the emergence of related "textual authorities". The tradition and temples likely existed well before the medieval era Hindu manuscripts appeared that describe them and the sacred geography. This, states Fleming, is apparent given the sophistication of the architecture and the sacred sites, along with the variance in the versions of the Puranic literature. According to Diana L. Eck and other Indologists such as André Wink, Muslim invaders were aware of Hindu sacred geography such as Mathura, Ujjain, and Varanasi by the 11th century. These sites became a target of their serial attacks in the centuries that followed.
Hindus have been persecuted during the medieval and modern eras. The medieval persecution included waves of plunder, killing, destruction of temples and enslavement by Turk-Mongol Muslim armies from Central Asia. This is documented in Islamic literature, such as that relating to the 8th-century Muhammad bin Qasim, the 11th-century Mahmud of Ghazni, the Persian traveller Al Biruni, the 14th-century Islamic army invasion led by Timur, and various Sunni Islamic rulers of the Delhi Sultanate and the Mughal Empire. There were occasional exceptions, such as Akbar, who stopped the persecution of Hindus, and periods of severe persecution, such as under Aurangzeb, who destroyed temples, forcibly converted non-Muslims to Islam and banned the celebration of Hindu festivals such as Holi and Diwali.
Other recorded persecution of Hindus includes that under the reign of the 18th-century Tipu Sultan in south India, and during the colonial era. In the modern era, religious persecution of Hindus has been reported outside India, in Pakistan and Bangladesh.
Christophe Jaffrelot states that modern Hindu nationalism was born in Maharashtra in the 1920s, as a reaction to the Islamic Khilafat Movement, in which Indian Muslims championed the cause of the Turkish Ottoman sultan as the Caliph of all Muslims at the end of World War I. Hindus viewed this development as evidence of the divided loyalties of the Indian Muslim population and of pan-Islamic hegemony, and questioned whether Indian Muslims were part of an inclusive anti-colonial Indian nationalism. The Hindu nationalism ideology that emerged, states Jaffrelot, was codified by Savarkar while he was a political prisoner of the British colonial empire.
Chris Bayly traces the roots of Hindu nationalism to the Hindu identity and political independence achieved by the Maratha confederacy, which overthrew the Islamic Mughal empire in large parts of India, allowing Hindus the freedom to pursue any of their diverse religious beliefs and restoring Hindu holy places such as Varanasi. A few scholars view Hindu mobilisation and the consequent nationalism as having emerged in the 19th century as a response to British colonialism, led by Indian nationalists and neo-Hinduism gurus. Jaffrelot states that the efforts of Christian missionaries and Islamic proselytisers during the British colonial era, each of whom tried to gain new converts to their own religion by stereotyping and stigmatising Hindus as inferior and superstitious, contributed to Hindus re-asserting their spiritual heritage and critically examining Islam and Christianity in turn, forming organisations such as the "Hindu Sabhas" (Hindu associations), and ultimately a Hindu-identity-driven nationalism in the 1920s.
The colonial-era Hindu revivalism and mobilisation, along with Hindu nationalism, states Peter van der Veer, was primarily a reaction to and competition with Muslim separatism and Muslim nationalism. The successes of each side fed the fears of the other, leading to the growth of Hindu nationalism and Muslim nationalism in the Indian subcontinent. In the 20th century, the sense of religious nationalism grew in India, states van der Veer, but only Muslim nationalism succeeded, with the formation of West and East Pakistan (later split into Pakistan and Bangladesh) as "an Islamic state" upon independence. Religious riots and social trauma followed as millions of Hindus, Jains, Buddhists and Sikhs moved out of the newly created Islamic states and resettled into the Hindu-majority post-British India. After the separation of India and Pakistan in 1947, the Hindu nationalism movement developed the concept of Hindutva in the second half of the 20th century.
The Hindu nationalism movement has sought to reform Indian laws in ways that critics say attempt to impose Hindu values on India's Islamic minority. Gerald Larson states, for example, that Hindu nationalists have sought a uniform civil code, under which all citizens are subject to the same laws, everyone has equal civil rights, and individual rights do not depend on the individual's religion. In contrast, opponents of the Hindu nationalists remark that eliminating religious law from India poses a threat to the cultural identity and religious rights of Muslims, and that people of Islamic faith have a constitutional right to Islamic shariah-based personal laws. One specific law contentious between Hindu nationalists and their opponents in India relates to the legal age of marriage for girls. Hindu nationalists seek a legal marriage age of eighteen applied universally to all girls regardless of their religion, with marriages registered with the local government to verify the age of marriage. Muslim clerics consider this proposal unacceptable because, under the shariah-derived personal law, a Muslim girl can be married at any age after she reaches puberty.
Hindu nationalism in India, states Katharine Adeney, is a controversial political subject, with no consensus about what it means or implies in terms of the form of government and religious rights of the minorities.
According to Pew Research, there are over 1 billion Hindus worldwide (15% of the world's population). Along with Christians (31.5%), Muslims (23.2%) and Buddhists (7.1%), Hindus are one of the four major religious groups of the world.
Most Hindus are found in Asian countries. The countries with the most Hindu residents and citizens are, in decreasing order: India, Nepal, Bangladesh, Indonesia, Pakistan, Sri Lanka, the United States, Malaysia, the United Kingdom, Myanmar, Canada, Mauritius, Guyana, South Africa, Trinidad and Tobago, Fiji and Suriname.
The fertility rate for Hindus, that is, children per woman, is 2.4, which is less than the world average of 2.5. Pew Research projects that there will be 1.161 billion Hindus by 2020.
In more ancient times, Hindu kingdoms arose and spread the religion and traditions across Southeast Asia, particularly Thailand, Nepal, Burma, Malaysia, Indonesia, Cambodia, Laos, Philippines, and what is now central Vietnam.
Over 3 million Hindus are found in Bali, Indonesia, a culture whose origins trace back to ideas brought by Tamil Hindu traders to the Indonesian islands in the 1st millennium CE. Their sacred texts are also the Vedas and the Upanishads. The Puranas and the Itihasa (mainly the "Ramayana" and the "Mahabharata") are enduring traditions among Indonesian Hindus, expressed in community dances and shadow puppet ("wayang") performances. As in India, Indonesian Hindus recognise four paths of spirituality, calling them "Catur Marga". Similarly, like Hindus in India, Balinese Hindus believe that there are four proper goals of human life, calling them "Catur Purusartha" – dharma (pursuit of moral and ethical living), artha (pursuit of wealth and creative activity), kama (pursuit of joy and love) and moksha (pursuit of self-knowledge and liberation). | https://en.wikipedia.org/wiki?curid=13677 |
Hakka cuisine
Hakka cuisine is the cooking style of the Hakka people, who are found in parts of mainland China and Taiwan and in countries with significant overseas Hakka communities. There are numerous restaurants in Taiwan, Hong Kong, Indonesia, Malaysia, Singapore and Thailand serving Hakka cuisine. Hakka cuisine was listed in 2014 on the first Hong Kong Inventory of Intangible Cultural Heritage.
The Hakka people have a marked cuisine and style of Chinese cooking which is little known outside the Hakka home. It concentrates on the texture of food – the hallmark of Hakka cuisine. Whereas preserved meats feature among Hakka delicacies, stewed, braised and roasted meats – 'texturised' contributions to the Hakka palate – have a central place in the repertoire. Preserved vegetables are commonly used in steamed and braised dishes, such as steamed minced pork with preserved vegetables and braised pork with salted vegetables. In fact, the raw materials for Hakka food are no different from those of any other regional Chinese cuisine: what is cooked depends on what is available in the market. Hakka cuisine may be described as outwardly simple but tasty. The skill in Hakka cuisine lies in the ability to cook meat thoroughly without hardening it, and to naturally bring out the proteinous (umami) flavour of the meat.
The Hakka who settled in the harbour and port areas of Hong Kong placed great emphasis on seafood cuisine. Hakka cuisine in Hong Kong is less dominated by expensive meats; instead, emphasis is placed on an abundance of vegetables. Pragmatic and simple, Hakka dishes are garnished lightly, with sparse or little flavouring. Modern Hakka cooking in Hong Kong favours offal, an example being deep-fried intestines. Others include preserved tofu, along with the signature dish, salt baked chicken. Another specialty is poon choi. While it may be difficult to prove these were the actual diets of the old Hakka community, this is at present a commonly accepted view. The above dishes and their variations are in fact found and consumed throughout China, including Guangdong Province, and are not particularly unique or confined to the Hakka population.
Besides meat as a source of protein, there is a unique vegan dish called lei cha, which comprises combinations of vegetables and beans. Although not unique to all Hakka people, it is particularly famous among Hakka-Hopo families. This vegetable-based rice tea dish is gaining popularity in some multicultural countries such as Malaysia. Preparing the dish requires help from other family members to complete all eight combinations, which in turn helps foster relationships within the family.
The steamed bun () is a popular Hakka snack, made mainly from glutinous rice and available in sweet or savoury versions. The sweet version is filled with sweetened black-eyed pea paste or peanuts; the savoury version contains preserved radish.
Hakka food also includes other traditional Taiwanese dishes, just as the cuisines of other Taiwanese ethnic groups do. Some of the more notable dishes in Hakka cuisine are listed as follows:
In India, Pakistan and other regions with significant South Asian populations, the locally known "Hakka cuisine" is actually a Desi adaptation of original Hakka dishes. This variation is, in reality, mostly Indian Chinese and Pakistani Chinese cuisine. It is called "Hakka cuisine" because, in India and parts of Pakistan, many owners of the restaurants that serve it are of Hakka origin. Typical dishes include 'chilli chicken' and 'Dongbei (northeastern) chow mein/hakka noodles' (an Indian version of Northeastern Chinese cuisine), and these restaurants also serve traditional South Asian dishes such as pakora. Being very popular in these areas, this style of cuisine is often mistakenly credited as representative of Hakka cuisine in general, whereas the authentic style is rarely known in these regions.
Outside of South Asia, the premier place to enjoy Indo-Pak-Chinese cuisine is Toronto, Canada, owing to the large number of Chinese from South Asia who have emigrated to the region and opened restaurants, much of whose food is halal.
In Thailand, Bangkok's Chinatown is Yaowarat, which includes neighbouring areas such as Sampheng, Charoen Chai, Charoen Krung, Suan Mali, Phlapphla Chai and Wong Wian Yi Sip Song Karakadakhom (July 22 Circle). In the past, many Hakka restaurants were located in Suan Mali near Bangkok Metropolitan Administration General Hospital, but many have since moved to other areas, such as Talad Phlu, which is also one of the city's Chinatowns. | https://en.wikipedia.org/wiki?curid=13679
Hunan cuisine
Hunan cuisine, also known as Xiang cuisine, consists of the cuisines of the Xiang River region, Dongting Lake and western Hunan Province in China. It is one of the Eight Great Traditions of Chinese cuisine and is well known for its hot and spicy flavours, fresh aroma and deep colours. Common cooking techniques include stewing, frying, pot-roasting, braising and smoking. Due to the high agricultural output of the region, ingredients for Hunan dishes are many and varied.
The history of the cooking skills employed in Hunan cuisine dates back to the 17th century. The first mention of chili peppers in local gazettes in the province dates to 1684, the 21st year of the Kangxi Emperor's reign. During the course of its history, Hunan cuisine assimilated a variety of local forms, eventually evolving into its own style. Some well-known dishes include fried chicken with Sichuan spicy sauce () and smoked pork with dried long green beans ().
Hunan cuisine consists of three primary styles:
With its liberal use of chili peppers, shallots and garlic, Hunan cuisine is known for being "gan la" () or purely hot, as opposed to Sichuan cuisine, to which it is often compared. Sichuan cuisine uses its distinctive "ma la" () seasoning and other complex flavour combinations, frequently employing Sichuan pepper along with chilies, which are often dried; it also utilises more dried or preserved ingredients and condiments. Hunan cuisine, on the other hand, is often spicier by pure chili content and contains a larger variety of fresh ingredients. Both Hunan and Sichuan cuisine are perhaps significantly oilier than other Chinese cuisines, though Sichuan dishes are generally oilier than Hunan dishes. Another characteristic distinguishing Hunan cuisine from Sichuan cuisine is that Hunan cuisine uses smoked and cured goods much more frequently.
Hunan cuisine's menu changes with the seasons. In a hot and humid summer, a meal will usually start with cold dishes or a platter holding a selection of cold meats with chilies for opening the pores and keeping cool in the summer. In winter, a popular choice is the hot pot, thought to heat the blood in the cold months. A special hot pot called "yuanyang huoguo" () is notable for splitting the pot into two sides – a spicy one and a mild one. One of the classic dishes in Hunan cuisine served in restaurants and at home is farmer pepper fried pork. It is made with several common ingredients: pork belly, green pepper, fermented black beans and other spices.
A discussion of Hunan cuisine overall may list a number of piquant dishes, usually but not always very hot and spicy. | https://en.wikipedia.org/wiki?curid=13680
Hyperinflation
In economics, hyperinflation is very high and typically accelerating inflation. It quickly erodes the real value of the local currency, as the prices of all goods increase. This causes people to minimize their holdings in that currency as they usually switch to more stable foreign currencies, in recent history often the US dollar. Prices typically remain stable in terms of other relatively stable currencies.
Unlike low inflation, where the process of rising prices is protracted and not generally noticeable except by studying past market prices, hyperinflation sees a rapid and continuing increase in nominal prices, the nominal cost of goods, and in the supply of currency. Typically, however, the general price level rises even more rapidly than the money supply as people try to rid themselves of the devaluing currency as quickly as possible. As this happens, the real stock of money (i.e., the amount of circulating money divided by the price level) decreases considerably.
Hyperinflation is often associated with some stress to the government budget, such as wars or their aftermath, sociopolitical upheavals, a collapse in aggregate supply or one in export prices, or other crises that make it difficult for the government to collect tax revenue. A sharp decrease in real tax revenue coupled with a strong need to maintain government spending, together with an inability or unwillingness to borrow, can lead a country into hyperinflation.
In 1956, Phillip Cagan wrote "The Monetary Dynamics of Hyperinflation", the book often regarded as the first serious study of hyperinflation and its effects (though "The Economics of Inflation" by C. Bresciani-Turroni on the German hyperinflation was published in Italian in 1931). In his book, Cagan defined a hyperinflationary episode as starting in the month that the monthly inflation rate exceeds 50%, and as ending when the monthly inflation rate drops below 50% and stays that way for at least a year. Economists usually follow Cagan’s description that hyperinflation occurs when the monthly inflation rate exceeds 50% (this is equivalent to a yearly rate of 12,874.63%).
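Cagan's 50%-per-month threshold compounds to the annual figure quoted above. A quick arithmetic check, sketched here in Python (not part of any source):

```python
# Compounding Cagan's 50%-per-month threshold over 12 months.
monthly_rate = 0.50
annual_factor = (1 + monthly_rate) ** 12        # prices multiply ~129.75x
annual_rate_percent = (annual_factor - 1) * 100
print(round(annual_rate_percent, 2))            # 12874.63
```

The 12,874.63% figure is therefore simply (1.5)¹² − 1 expressed as a percentage.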
The International Accounting Standards Board has issued guidance on accounting rules in a hyperinflationary environment. It does not establish an absolute rule on when hyperinflation arises. Instead, it lists factors that indicate the existence of hyperinflation:
While there can be a number of causes of high inflation, most hyperinflations have been caused by government budget deficits financed by currency creation. Peter Bernholz analysed 29 hyperinflations (following Cagan's definition) and concludes that at least 25 of them have been caused in this way. A necessary condition for hyperinflation is the use of paper money, instead of gold or silver coins. Most hyperinflations in history, with some exceptions, such as the French hyperinflation of 1789–1796, occurred after the use of fiat currency became widespread in the late 19th century. The French hyperinflation took place after the introduction of a non-convertible paper currency, the assignats.
Hyperinflation occurs when there is a continuing (and often accelerating) rapid increase in the amount of money that is not supported by a corresponding growth in the output of goods and services.
The increases in price that result from rapid money creation create a vicious circle, requiring ever-growing amounts of new money creation to fund government deficits. Hence both monetary inflation and price inflation proceed at a rapid pace. Such rapidly increasing prices cause widespread unwillingness of the local population to hold the local currency as it rapidly loses its buying power. Instead, they quickly spend any money they receive, which increases the velocity of money; this in turn causes further acceleration in prices. This means that the increase in the price level is greater than that of the money supply. The real stock of money, M/P, decreases, where M refers to the money stock and P to the price level.
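The fall in real balances M/P can be illustrated with assumed numbers (chosen purely for illustration, not taken from any historical episode): even if the nominal money stock quadruples, a tenfold rise in the price level shrinks the real stock of money.

```python
# Illustrative (assumed) figures: nominal money grows 4x,
# but the price level rises 10x, so real balances M/P fall.
M0, P0 = 100.0, 1.0    # initial money stock and price level
M1, P1 = 400.0, 10.0   # after rapid money creation and faster price rises
print(M0 / P0)         # 100.0 -- real balances before
print(M1 / P1)         # 40.0  -- real balances after
```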
This results in an imbalance between the supply and demand for the money (including currency and bank deposits), causing rapid inflation. Very high inflation rates can result in a loss of confidence in the currency, similar to a bank run. Usually, the excessive money supply growth results from the government being either unable or unwilling to fully finance the government budget through taxation or borrowing, and instead it finances the government budget deficit through the printing of money.
Governments have sometimes resorted to excessively loose monetary policy, as it allows a government to devalue its debts and reduce (or avoid) a tax increase. Monetary inflation is effectively a flat tax on creditors that also redistributes proportionally to private debtors. Distributional effects of monetary inflation are complex and vary based on the situation, with some models finding regressive effects but other empirical studies finding progressive effects. As a form of tax, it is less overt than levied taxes and is therefore harder for ordinary citizens to understand. Inflation can obscure quantitative assessments of the true cost of living, as published price indices only look at data in retrospect, so may increase only months later. Monetary inflation can become hyperinflation if monetary authorities fail to fund increasing government expenses from taxes, government debt, cost cutting, or by other means, because either
Theories of hyperinflation generally look for a relationship between seigniorage and the inflation tax. In both Cagan's model and the neo-classical models, a tipping point occurs when the increase in money supply or the drop in the monetary base makes it impossible for a government to improve its financial position. Thus when fiat money is printed, government obligations that are not denominated in money increase in cost by more than the value of the money created.
From this, it might be wondered why any rational government would engage in actions that cause or continue hyperinflation. One reason for such actions is that often the alternative to hyperinflation is either depression or military defeat. The root cause is a matter of more dispute. In both classical economics and monetarism, it is always the result of the monetary authority irresponsibly borrowing money to pay all its expenses. These models focus on the unrestrained seigniorage of the monetary authority, and the gains from the inflation tax.
In neo-classical economic theory, hyperinflation is rooted in a deterioration of the monetary base, that is the confidence that there is a store of value that the currency will be able to command later. In this model, the perceived risk of holding currency rises dramatically, and sellers demand increasingly high premiums to accept the currency. This in turn leads to a greater fear that the currency will collapse, causing even higher premiums. One example of this is during periods of warfare, civil war, or intense internal conflict of other kinds: governments need to do whatever is necessary to continue fighting, since the alternative is defeat. Expenses cannot be cut significantly since the main outlay is armaments. Further, a civil war may make it difficult to raise taxes or to collect existing taxes. While in peacetime the deficit is financed by selling bonds, during a war it is typically difficult and expensive to borrow, especially if the war is going poorly for the government in question. The banking authorities, whether central or not, "monetize" the deficit, printing money to pay for the government's efforts to survive. The hyperinflation under the Chinese Nationalists from 1939 to 1945 is a classic example of a government printing money to pay civil war costs. By the end, currency was flown in over the Himalayas, and then old currency was flown out to be destroyed.
Hyperinflation is a complex phenomenon and one explanation may not be applicable to all cases. In both of these models, however, whether loss of confidence comes first, or central bank seigniorage, the other phase is ignited. In the case of rapid expansion of the money supply, prices rise rapidly in response to the increased supply of money relative to the supply of goods and services, and in the case of loss of confidence, the monetary authority responds to the risk premiums it has to pay by "running the printing presses."
Nevertheless, the immense acceleration process that occurs during hyperinflation (such as during the German hyperinflation of 1922–23) remains unclear and unpredictable. The transformation of an inflationary development into hyperinflation has to be treated as a very complex phenomenon, which could be a further research avenue for complexity economics, in conjunction with research areas such as mass hysteria, the bandwagon effect, the social brain, and mirror neurons.
A number of hyperinflations were caused by some sort of extreme negative supply shock, often but not always associated with wars, the breakdown of the communist system or natural disasters.
Since hyperinflation is visible as a monetary effect, models of hyperinflation center on the demand for money. Economists see both a rapid increase in the money supply and an increase in the velocity of money if the (monetary) inflating is not stopped. Either one of these, or both together, can be the root cause of inflation and hyperinflation. A dramatic increase in the velocity of money as the cause of hyperinflation is central to the "crisis of confidence" model of hyperinflation, where the risk premium that sellers demand for the paper currency over the nominal value grows rapidly. The second theory is that there is first a radical increase in the amount of circulating medium, which can be called the "monetary model" of hyperinflation. In either model, the second effect then follows from the first—either too little confidence forcing an increase in the money supply, or too much money destroying confidence.
In the "confidence model", some event, or series of events, such as defeats in battle, or a run on stocks of the specie that back a currency, removes the belief that the authority issuing the money will remain solvent—whether a bank or a government. Because people do not want to hold notes that may become valueless, they want to spend them. Sellers, realizing that there is a higher risk for the currency, demand a greater and greater premium over the original value. Under this model, the method of ending hyperinflation is to change the backing of the currency, often by issuing a completely new one. War is one commonly cited cause of crisis of confidence, particularly losing in a war, as occurred during Napoleonic Vienna, and capital flight, sometimes because of "contagion" is another. In this view, the increase in the circulating medium is the result of the government attempting to buy time without coming to terms with the root cause of the lack of confidence itself.
In the "monetary model", hyperinflation is a positive feedback cycle of rapid monetary expansion. It has the same cause as all other inflation: money-issuing bodies, central or otherwise, produce currency to pay spiraling costs, often from lax fiscal policy, or the mounting costs of warfare. When business people perceive that the issuer is committed to a policy of rapid currency expansion, they mark up prices to cover the expected decay in the currency's value. The issuer must then accelerate its expansion to cover these prices, which pushes the currency value down even faster than before. According to this model the issuer cannot "win" and the only solution is to abruptly stop expanding the currency. Unfortunately, the end of expansion can cause a severe financial shock to those using the currency as expectations are suddenly adjusted. This policy, combined with reductions of pensions, wages, and government outlays, formed part of the Washington consensus of the 1990s.
Whatever the cause, hyperinflation involves both the supply and velocity of money. Which comes first is a matter of debate, and there may be no universal story that applies to all cases. But once the hyperinflation is established, the pattern of increasing the money stock, by whichever agencies are allowed to do so, is universal. Because this practice increases the supply of currency without any matching increase in demand for it, the price of the currency, that is the exchange rate, naturally falls relative to other currencies. Inflation becomes hyperinflation when the increase in money supply turns specific areas of pricing power into a general frenzy of spending quickly before money becomes worthless. The purchasing power of the currency drops so rapidly that holding cash for even a day is an unacceptable loss of purchasing power. As a result, no one holds currency, which increases the velocity of money, and worsens the crisis.
Because rapidly rising prices undermine the role of money as a store of value, people try to spend it on real goods or services as quickly as possible. Thus, the monetary model predicts that the velocity of money will increase as a result of an excessive increase in the money supply. At the point when money velocity and prices rapidly accelerate in a vicious circle, hyperinflation is out of control, because ordinary policy mechanisms, such as increasing reserve requirements, raising interest rates, or cutting government spending will be ineffective and be responded to by shifting away from the rapidly devalued money and towards other means of exchange.
During a period of hyperinflation, bank runs, loans for 24-hour periods, switching to alternate currencies, the return to use of gold or silver or even barter become common. Many of the people who hoard gold today expect hyperinflation, and are hedging against it by holding specie. There may also be extensive capital flight or flight to a "hard" currency such as the US dollar. This is sometimes met with capital controls, an idea that has swung from standard, to anathema, and back into semi-respectability. All of this constitutes an economy that is operating in an "abnormal" way, which may lead to decreases in real production. If so, that intensifies the hyperinflation, since it means that the amount of goods in "too much money chasing too few goods" formulation is also reduced. This is also part of the vicious circle of hyperinflation.
Once the vicious circle of hyperinflation has been ignited, dramatic policy means are almost always required. Simply raising interest rates is insufficient. Bolivia, for example, underwent a period of hyperinflation in 1985, where prices increased 12,000% in the space of less than a year. The government raised the price of gasoline, which it had been selling at a huge loss to quiet popular discontent, and the hyperinflation came to a halt almost immediately, since it was able to bring in hard currency by selling its oil abroad. The crisis of confidence ended, and people returned deposits to banks. The German hyperinflation (1919–November 1923) was ended by producing a currency based on assets loaned against by banks, called the Rentenmark. Hyperinflation often ends when a civil conflict ends with one side winning.
Although wage and price controls are sometimes used to control or prevent inflation, no episode of hyperinflation has been ended by the use of price controls alone, because price controls that force merchants to sell at prices far below their restocking costs result in shortages that cause prices to rise still further.
Nobel prize winner Milton Friedman said "We economists don't know much, but we do know how to create a shortage. If you want to create a shortage of tomatoes, for example, just pass a law that retailers can't sell tomatoes for more than two cents per pound. Instantly you'll have a tomato shortage. It's the same with oil or gas."
Hyperinflation increases stock market prices; effectively wipes out the purchasing power of private and public savings; distorts the economy in favor of the hoarding of real assets; causes the monetary base, whether specie or hard currency, to flee the country; and makes the afflicted area anathema to investment.
One of the most important characteristics of hyperinflation is the accelerating substitution of the inflating money by stable money—gold and silver in former times, then relatively stable foreign currencies after the breakdown of the gold or silver standards (Thiers' Law). If inflation is high enough, government regulations like heavy penalties and fines, often combined with exchange controls, cannot prevent this currency substitution. As a consequence, the inflating currency is usually heavily undervalued compared to stable foreign money in terms of purchasing power parity. So foreigners can live cheaply and buy at low prices in the countries hit by high inflation. It follows that governments that do not succeed in engineering a successful currency reform in time must finally legalize the stable foreign currencies (or, formerly, gold and silver) that threaten to fully substitute the inflating money. Otherwise, their tax revenues, including the inflation tax, will approach zero. The last episode of hyperinflation in which this process could be observed was in Zimbabwe in the first decade of the 21st century. In this case, the local money was mainly driven out by the US dollar and the South African rand.
Enactment of price controls to prevent discounting the value of paper money relative to gold, silver, hard currency, or other commodities fail to force acceptance of a paper money that lacks intrinsic value. If the entity responsible for printing a currency promotes excessive money printing, with other factors contributing a reinforcing effect, hyperinflation usually continues. Hyperinflation is generally associated with paper money, which can easily be used to increase the money supply: add more zeros to the plates and print, or even stamp old notes with new numbers. Historically, there have been numerous episodes of hyperinflation in various countries followed by a return to "hard money". Older economies would revert to hard currency and barter when the circulating medium became excessively devalued, generally following a "run" on the store of value.
Much attention on hyperinflation centers on the effect on savers whose investments become worthless. Interest rate changes often cannot keep up with hyperinflation or even high inflation, certainly with contractually fixed interest rates. For example, in the 1970s in the United Kingdom inflation reached 25% per annum, yet interest rates did not rise above 15%—and then only briefly—and many fixed interest rate loans existed. Contractually, there is often no bar to a debtor clearing his long-term debt with "hyperinflated cash", nor can a lender simply suspend the loan. Contractual "early redemption penalties" were (and still are) often based on a penalty of "n" months of interest/payment, again no real bar to paying off what had been a large loan. In interwar Germany, for example, much private and corporate debt was effectively wiped out, certainly for those holding fixed interest rate loans.
Ludwig von Mises used the term "crack-up boom" (German: Katastrophenhausse) to describe the economic consequences of an unmitigated increase in the base-money supply. As more and more money is provided, interest rates decline towards zero. Realizing that fiat money is losing value, investors try to place money in assets such as real estate, stocks, even art, as these appear to represent "real" value. Asset prices thus become inflated. This potentially spiraling process ultimately leads to the collapse of the monetary system. The Cantillon effect says that those institutions that receive the new money first are the beneficiaries of the policy.
Hyperinflation is ended by drastic remedies, such as imposing the shock therapy of slashing government expenditures or altering the currency basis. One form this may take is dollarization, the use of a foreign currency (not necessarily the U.S. dollar) as a national unit of currency. An example was dollarization in Ecuador, initiated in September 2000 in response to a 75% loss of value of the Ecuadorian sucre in early 2000. Usually, however, "dollarization" takes place in spite of the government's efforts to prevent it by exchange controls and heavy fines and penalties. The government thus has to try to engineer a successful currency reform stabilizing the value of the money. If this reform does not succeed, the substitution of the inflating currency by stable money continues. It is thus not surprising that there have been at least seven historical cases in which the good (foreign) money fully drove out the use of the inflating currency. In the end the government had to legalize the former, for otherwise its revenues would have fallen to zero.
Hyperinflation has always been a traumatic experience for the people who suffer it, and the next political regime almost always enacts policies to try to prevent its recurrence. Often this means making the central bank very aggressive about maintaining price stability, as was the case with the German Bundesbank, or moving to some hard basis of currency, such as a currency board. Many governments have enacted extremely stiff wage and price controls in the wake of hyperinflation, but this does not prevent further inflation of the money supply by the central bank, and always leads to widespread shortages of consumer goods if the controls are rigidly enforced.
In countries experiencing hyperinflation, the central bank often prints money in larger and larger denominations as the smaller denomination notes become worthless. This can result in the production of unusually large denominations of banknotes, including those denominated in amounts of 1,000,000,000 or more.
One way to avoid the use of large numbers is by declaring a new unit of currency. (As an example, instead of 10,000,000,000 dollars, a central bank might set 1 new dollar = 1,000,000,000 old dollars, so the new note would read "10 new dollars".) A recent example of this is Turkey's revaluation of the Lira on 1 January 2005, when the old Turkish lira (TRL) was converted to the New Turkish lira (TRY) at a rate of 1,000,000 old to 1 new Turkish Lira. While this does not lessen the actual value of a currency, it is called redenomination or revaluation and also occasionally happens in countries with lower inflation rates. During hyperinflation, currency inflation happens so quickly that bills reach large numbers before revaluation.
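The arithmetic of such a redenomination is a straightforward division by the conversion rate. A minimal sketch, using the 1-to-1,000,000,000 example rate from the text:

```python
# Redenomination: 1 new unit = 1,000,000,000 old units,
# matching the example conversion in the text.
OLD_PER_NEW = 1_000_000_000

def to_new_units(old_amount):
    """Convert an old-currency amount into the new unit."""
    return old_amount / OLD_PER_NEW

print(to_new_units(10_000_000_000))  # 10.0 -- the "10 new dollars" note
```

The same division applies to Turkey's 2005 revaluation, with a rate of 1,000,000 old lira per new lira.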
Some banknotes were stamped to indicate changes of denomination, as it would have taken too long to print new notes. By the time new notes were printed, they would be obsolete (that is, they would be of too low a denomination to be useful).
Metallic coins were rapid casualties of hyperinflation, as the scrap value of metal enormously exceeded its face value. Massive amounts of coinage were melted down, usually illicitly, and exported for hard currency.
Governments will often try to disguise the true rate of inflation through a variety of techniques. None of these actions addresses the root causes of inflation; and if discovered, they tend to further undermine trust in the currency, causing further increases in inflation. Price controls will generally result in shortages and hoarding and extremely high demand for the controlled goods, causing disruptions of supply chains. Products available to consumers may diminish or disappear as businesses no longer find it economic to continue producing and/or distributing such goods at the legal prices, further exacerbating the shortages.
There are also issues with computerized money-handling systems. In Zimbabwe, during the hyperinflation of the Zimbabwe dollar, many automated teller machines and payment card machines struggled with arithmetic overflow errors as customers required many billions and trillions of dollars at one time.
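The overflow problem is easy to see: a signed 32-bit integer, a common word size in payment systems (an assumption here, not stated in the source), cannot hold even one trillion.

```python
# A signed 32-bit integer tops out at 2,147,483,647 --
# far below a trillion-dollar withdrawal amount.
INT32_MAX = 2**31 - 1
withdrawal = 1_000_000_000_000   # Z$1 trillion (illustrative figure)
print(withdrawal > INT32_MAX)    # True: such a value overflows int32
```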
During the Crisis of the Third Century, Rome underwent hyperinflation caused by years of coinage devaluation.
In 1922, inflation in Austria reached 1,426%, and from 1914 to January 1923, the consumer price index rose by a factor of 11,836, with the highest banknote in a denomination of 500,000 Austrian krones. After World War I, essentially all state enterprises ran at a loss, and the number of state employees in the capital, Vienna, was greater than under the earlier monarchy, even though the new republic was only about one-eighth the size.
Observing the Austrian response to developing hyperinflation, which included the hoarding of food and the speculation in foreign currencies, Owen S. Phillpotts, the Commercial Secretary at the British Legation in Vienna wrote: "The Austrians are like men on a ship who cannot manage it, and are continually signalling for help. While waiting, however, most of them begin to cut rafts, each for himself, out of the sides and decks. The ship has not yet sunk despite the leaks so caused, and those who have acquired stores of wood in this way may use them to cook their food, while the more seamanlike look on cold and hungry. The population lack courage and energy as well as patriotism."
Increasing hyperinflation in Bolivia has plagued, and at times crippled, its economy and currency since the 1970s. At one time in 1985, the country experienced an annual inflation rate of more than 20,000%. Fiscal and monetary reform reduced the inflation rate to single digits by the 1990s, and in 2004 Bolivia experienced a manageable 4.9% rate of inflation.
In 1987, the Bolivian peso was replaced by a new boliviano at a rate of one million to one (when 1 US dollar was worth 1.8–1.9 million pesos). At that time, 1 new boliviano was roughly equivalent to 1 U.S. dollar.
Brazilian hyperinflation lasted from 1985 (the year when the military dictatorship ended) to 1994, with prices rising by 184,901,570,954.39% in that time due to the uncontrolled printing of money. There were many economic plans that tried to contain hyperinflation, including cutting zeros from the currency, price freezes and even the confiscation of bank accounts.
The monthly peak came in March 1990, when the official inflation index reached 82.39%. Hyperinflation ended in July 1994 with the Real Plan during the government of Itamar Franco. During the period of inflation, Brazil adopted a total of six different currencies, as the currency was constantly replaced due to rapid devaluation and the growing number of zeros.
As the first user of fiat currency, China was also the first country to experience hyperinflation. Paper currency was introduced during the Tang Dynasty, and was generally welcomed. It maintained its value, as successive Chinese governments put in place strict controls on issuance. The convenience of paper currency for trade purposes led to strong demand for paper currency. It was only when discipline on quantity supplied broke down that hyperinflation emerged. The Yuan Dynasty (1271–1368) was the first to print large amounts of fiat paper money to fund its wars, resulting in hyperinflation. Much later, the Republic of China went through hyperinflation from 1948 to 1949. In 1947, the highest denomination bill was 50,000 yuan. By mid-1948, the highest denomination was 180,000,000 yuan. The 1948 currency reform replaced the yuan by the gold yuan at an exchange rate of 1 gold yuan = 3,000,000 yuan. In less than a year, the highest denomination was 10,000,000 gold yuan. In the final days of the civil war, the silver yuan was briefly introduced at the rate of 500,000,000 gold yuan. Meanwhile, the highest denomination issued by a regional bank was 6,000,000,000 yuan (issued by Xinjiang Provincial Bank in 1949). After renminbi was instituted by the new communist government, hyperinflation ceased, with a revaluation of 1:10,000 old yuan in 1955.
During the French Revolution and first Republic, the National Assembly issued bonds, some backed by seized church property, called assignats. Napoleon replaced them with the franc in 1803, at which time the assignats were basically worthless. Stephen D. Dillaye pointed out that one of the reasons for the failure was massive counterfeiting of the paper currency, largely through London. According to Dillaye: "Seventeen manufacturing establishments were in full operation in London, with a force of four hundred men devoted to the production of false and forged Assignats."
By November 1922, the value in gold of money in circulation had fallen from £300 million before World War I to £20 million. The Reichsbank responded by the unlimited printing of notes, thereby accelerating the devaluation of the mark. In his report to London, Lord D'Abernon wrote: "In the whole course of history, no dog has ever run after its own tail with the speed of the Reichsbank." Germany went through its worst inflation in 1923. In 1922, the highest denomination was 50,000 marks. By 1923, the highest denomination was 100,000,000,000,000 (10^14) marks. In December 1923 the exchange rate was 4,200,000,000,000 (4.2 × 10^12) marks to 1 US dollar. In 1923, the rate of inflation hit 3.25 × 10^6 percent per month (prices doubling every two days). Beginning on 20 November 1923, 1,000,000,000,000 old marks were exchanged for 1 Rentenmark, so that 4.2 Rentenmarks were worth 1 US dollar, exactly the same rate the mark had in 1914.
With the German invasion in April 1941, there was an abrupt increase in prices. This was due to psychological factors related to the fear of shortages and to the hoarding of goods. During the German and Italian Axis occupation of Greece (1941–1944), the agricultural, mineral, industrial and other production of Greece was used to sustain the occupation forces, but also to secure provisions for the Afrika Korps. One part of these "sales" of provisions was settled with bilateral clearing through the German DEGRIGES and the Italian Sagic companies at very low prices. As the value of Greek exports in drachmas fell, the demand for drachmas followed suit and so did its forex rate. As shortages developed due to naval blockades and hoarding, the prices of commodities soared. The other part of the "purchases" was settled with drachmas secured from the Bank of Greece and printed for this purpose by private printing presses. As prices soared, the Germans and Italians started requesting more and more drachmas from the Bank of Greece to offset price increases; each time prices increased, the note circulation followed suit soon afterwards. For the year starting November 1943, the inflation rate was 2.5 × 10^10%, the circulation was 6.28 × 10^18 drachmas and one gold sovereign cost 43,167 billion drachmas. The hyperinflation started subsiding immediately after the departure of the German occupation forces, but inflation rates took several years to fall below 50%.
The Treaty of Trianon and political instability between 1919 and 1924 led to a major inflation of Hungary's currency. In 1921, in an attempt to stop this inflation, the national assembly of Hungary passed the Hegedűs reforms, including a 20% levy on bank deposits, but this precipitated a mistrust of banks by the public, especially the peasants, and resulted in a reduction in savings, and thus an increase in the amount of currency in circulation. Due to the reduced tax base, the government resorted to printing money, and in 1923 inflation in Hungary reached 98% per month.
Between the end of 1945 and July 1946, Hungary went through the highest inflation ever recorded. In 1944, the highest banknote value was 1,000 pengő. By the end of 1945, it was 10,000,000 pengő, and the highest value in mid-1946 was 100,000,000,000,000,000,000 (10^20) pengő. A special currency, the adópengő (or tax pengő), was created for tax and postal payments. The inflation was such that the value of the adópengő was adjusted each day by radio announcement. On 1 January 1946, one adópengő equaled one pengő, but by late July, one adópengő equaled 2,000,000,000,000,000,000,000 (2 × 10^21, or 2 sextillion) pengő.
When the pengő was replaced in August 1946 by the forint, the total value of all Hungarian banknotes in circulation amounted to a small fraction of one US cent. Inflation had peaked at 1.3 × 10^16% per month (i.e. prices doubled every 15.6 hours). On 18 August 1946, 400,000,000,000,000,000,000,000,000,000 (4 × 10^29) pengő (four hundred quadrilliard on the long scale used in Hungary, or four hundred octillion on the short scale) became 1 forint.
North Korea has most likely experienced hyperinflation from December 2009 to mid-January 2011. Based on the price of rice, North Korea's hyperinflation peaked in mid-January 2010, but according to black market exchange-rate data, and calculations based on purchasing power parity, North Korea experienced its peak month of inflation in early March 2010. These data points are unofficial, however, and therefore must be treated with a degree of caution.
In modern history, Peru underwent a period of hyperinflation from the 1980s to the early 1990s, starting in President Fernando Belaúnde's second administration, heightening during Alan García's first administration, and lasting until the beginning of Alberto Fujimori's term. At its worst, over 3,210,000,000 old soles were worth one US dollar. García's term introduced the inti, which worsened inflation into hyperinflation. Peru's currency and economy were stabilised under Fujimori's Nuevo Sol program; the nuevo sol has remained Peru's currency since 1991.
Poland has gone through two episodes of hyperinflation since the country regained independence following the end of World War I, the first in 1923, the second in 1989–1990. Both events resulted in the introduction of new currencies. In 1924, the "złoty" replaced the original currency of post-war Poland, the mark. This currency was subsequently replaced by another of the same name in 1950, which was assigned the ISO code of PLZ. As a result of the second hyperinflation crisis, the current "new złoty" was introduced in 1990 (ISO code: PLN). See the article on Polish złoty for more information about the currency's history.
The newly independent Poland had been struggling with a large budget deficit since its inception in 1918 but it was in 1923 when inflation reached its peak. The exchange rate to the American dollar went from 9 Polish marks per dollar in 1918 to 6,375,000 marks per dollar at the end of 1923. A new personal 'inflation tax' was introduced. The resolution of the crisis is attributed to Władysław Grabski, who became prime minister of Poland in December 1923. Having nominated an all-new government and being granted extraordinary lawmaking powers by the Sejm for a period of six months, he introduced a new currency, established a new national bank and scrapped the inflation tax, which took place throughout 1924.
The economic crisis in Poland in the 1980s was accompanied by rising inflation when new money was printed to cover a budget deficit. Although inflation was not as acute as in the 1920s, its annual rate is estimated to have reached around 600% over a period of more than a year spanning parts of 1989 and 1990. The economy was stabilised by the adoption of the Balcerowicz Plan in 1989, named after the main author of the reforms, minister of finance Leszek Balcerowicz. The plan was largely inspired by Grabski's earlier reforms.
The Japanese government occupying the Philippines during World War II issued fiat currencies for general circulation. The Japanese-sponsored Second Philippine Republic government led by Jose P. Laurel at the same time outlawed possession of other currencies, most especially "guerrilla money". The fiat money's lack of value earned it the derisive nickname "Mickey Mouse money". Survivors of the war often tell tales of bringing suitcases or "bayong" (native bags made of woven coconut or buri leaf strips) overflowing with Japanese-issued bills. Early on, 75 Mickey Mouse pesos could buy one duck egg. In 1944, a box of matches cost more than 100 Mickey Mouse pesos.
In 1942, the highest denomination available was 10 pesos. Before the end of the war, because of inflation, the Japanese government was forced to issue 100-, 500-, and 1000-peso notes.
Malaya and Singapore were under Japanese occupation from 1942 until 1945. The Japanese issued banana money as the official currency to replace the Straits currency issued by the British. During that time, the cost of basic necessities increased drastically. As the occupation proceeded, the Japanese authorities printed more money to fund their wartime activities, which resulted in hyperinflation and a severe depreciation in value of the banana note.
From February to December 1942, $100 of Straits currency was worth $100 in Japanese scrip, after which the value of Japanese scrip began to erode, reaching $385 in December 1943 and $1,850 one year later. By 1 August 1945, this had inflated to $10,500, and 11 days later it had reached $95,000. After 13 August 1945, Japanese scrip had become valueless.
A seven-year period of uncontrollable spiralling inflation occurred in the early Soviet Union, running from the earliest days of the Bolshevik Revolution in November 1917 to the reestablishment of the gold standard with the introduction of the chervonets as part of the New Economic Policy. The inflationary crisis effectively ended in March 1924 with the introduction of the so-called "gold ruble" as the country's standard currency.
The early Soviet hyperinflationary period was marked by three successive redenominations of its currency, in which "new rubles" replaced old at the rates of 10,000:1 (1 January 1922), 100:1 (1 January 1923), and 50,000:1 (7 March 1924), respectively.
Between 1921 and 1922, inflation in the Soviet Union reached 213%.
Venezuela's hyperinflation began in November 2016. Inflation of Venezuela's bolivar fuerte (VEF) in 2014 reached 69% and was the highest in the world. In 2015, inflation was 181%, the highest in the world and the highest in the country's history at that time, 800% in 2016, over 4,000% in 2017, and 1,698,488% in 2018, with Venezuela spiraling into hyperinflation. While the Venezuelan government "has essentially stopped" producing official inflation estimates as of early 2018, one estimate of the rate at that time was 5,220%, according to inflation economist Steve Hanke of Johns Hopkins University.
Inflation has affected Venezuelans so much that in 2017, some people became video game gold farmers and could be seen playing games such as "RuneScape" to sell in-game currency or characters for real currency. In many cases, these gamers made more money than salaried workers in Venezuela, even though they were earning just a few dollars per day. During the Christmas season of 2017, some shops stopped using price tags because prices rose so quickly; customers instead had to ask staff how much each item cost.
The International Monetary Fund estimated in 2018 that Venezuela's inflation rate would reach 1,000,000% by the end of the year. This forecast was criticized by Steve H. Hanke, professor of applied economics at The Johns Hopkins University and senior fellow at the Cato Institute. According to Hanke, the IMF had released a "bogus forecast" because "no one has ever been able to accurately forecast the course or the duration of an episode of hyperinflation. But that has not stopped the IMF from offering inflation forecasts for Venezuela that have proven to be wildly inaccurate".
In July 2018, hyperinflation in Venezuela stood at 33,151%, "the 23rd most severe episode of hyperinflation in history".
In April 2019, the International Monetary Fund estimated that inflation would reach 10,000,000% by the end of 2019.
In May 2019, the Central Bank of Venezuela released economic data for the first time since 2015. According to this release, inflation in Venezuela was 274% in 2016, 863% in 2017 and 130,060% in 2018. The annualised inflation rate as of April 2019 was estimated at 282,972.8%, and cumulative inflation from 2016 to April 2019 was estimated at 53,798,500%.
The new reports imply a contraction of more than half of the economy in five years, according to the "Financial Times" "one of the biggest contractions in Latin American history". According to two undisclosed Reuters sources, the release of these numbers was due to pressure from China, a Maduro ally. One of these sources claims that the disclosure of economic data may bring Venezuela into compliance with the IMF, making it harder to support Juan Guaidó during the presidential crisis. At the time, the IMF was not able to confirm the validity of the data, as it had not been able to contact the authorities.
Yugoslavia went through a period of hyperinflation and subsequent currency reforms from 1989 to 1994. One of several regional conflicts accompanying the dissolution of Yugoslavia was the Bosnian War (1992–1995). The Belgrade government of Slobodan Milošević backed ethnic Serbian forces in the conflict, resulting in a United Nations boycott of Yugoslavia. The UN boycott collapsed an economy already weakened by regional war, with the projected monthly inflation rate accelerating to one million percent by December 1993 (prices double every 2.3 days).
The highest denomination in 1988 was 50,000 dinars. By 1989 it was 2,000,000 dinars. In the 1990 currency reform, 1 new dinar was exchanged for 10,000 old dinars. In the 1992 currency reform, 1 new dinar was exchanged for 10 old dinars. The highest denomination in 1992 was 50,000 dinars. By 1993, it was 10,000,000,000 dinars. In the 1993 currency reform, 1 new dinar was exchanged for 1,000,000 old dinars. Before the year was over, however, the highest denomination was 500,000,000,000 dinars. In the 1994 currency reform, 1 new dinar was exchanged for 1,000,000,000 old dinars. In another currency reform a month later, 1 novi dinar was exchanged for 13 million dinars (1 novi dinar = 1 German mark at the time of exchange). The overall impact of hyperinflation was that 1 novi dinar was equal to 1 × 10^27 – 1.3 × 10^27 pre-1990 dinars. Yugoslavia's rate of inflation hit 5 × 10^15% cumulative inflation over the period between 1 October 1993 and 24 January 1994.
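The cumulative effect of these successive redenominations can be checked by multiplying the conversion factors given above. A short illustrative sketch (the factors are taken from the text; the script itself is ours):

```python
# Conversion factors of the successive dinar redenominations described above:
# 10,000:1 (1990), 10:1 (1992), 1,000,000:1 (1993),
# 1,000,000,000:1 (1994), and 13,000,000:1 for the novi dinar a month later.
factors = [10_000, 10, 1_000_000, 1_000_000_000, 13_000_000]

cumulative = 1
for f in factors:
    cumulative *= f

# 1.3 x 10^27 pre-1990 dinars per novi dinar, the upper end of the quoted range.
print(f"1 novi dinar = {cumulative:.1e} pre-1990 dinars")
```

The product matches the 1.3 × 10^27 upper bound quoted in the text.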
Hyperinflation in Zimbabwe was one of the few instances that resulted in the abandonment of the local currency. At independence in 1980, the Zimbabwe dollar (ZWD) was worth about US$1.25. Afterwards, however, rampant inflation and the collapse of the economy severely devalued the currency. Inflation was steady until British Prime Minister Tony Blair reneged on land reform agreements arrived at between Margaret Thatcher and Robert Mugabe that continued land redistribution from the white farming community in 1998, resulting in reductions in food production and the decline of foreign investment. Several multinational companies began hoarding retail goods in warehouses in Zimbabwe and just south of the border, preventing commodities from becoming available on the market. The result was that to pay its expenditures Mugabe's government and Gideon Gono's Reserve Bank printed more and more notes with higher face values.
Hyperinflation began early in the 21st century, reaching 624% in 2004. It fell back to low triple digits before surging to a new high of 1,730% in 2006. The Reserve Bank of Zimbabwe revalued on 1 August 2006 at a ratio of 1,000 ZWD to each second dollar (ZWN), but year-to-year inflation rose by June 2007 to 11,000% (versus an earlier estimate of 9,000%). Larger denominations were progressively issued in 2008.
Inflation by 16 July officially surged to 2,200,000% with some analysts estimating figures surpassing 9,000,000%. As of 22 July 2008 the value of the ZWN fell to approximately 688 billion per US$1, or 688 trillion pre-August 2006 Zimbabwean dollars.
On 1 August 2008, the Zimbabwe dollar was redenominated at the ratio of ZWN to each third dollar (ZWR). On 19 August 2008, official figures estimated inflation for June at over 11,250,000%. Zimbabwe's annual inflation was 231,000,000% in July (prices doubling every 17.3 days). By October 2008 Zimbabwe was mired in hyperinflation, with wages falling far behind prices. In this dysfunctional economy, hospitals and schools had chronic staffing problems because many nurses and teachers could not afford bus fare to work. Most of the capital, Harare, was without water because the authorities had stopped paying the bills to buy and transport the treatment chemicals. Desperate for foreign currency to keep the government functioning, Zimbabwe's central bank governor, Gideon Gono, sent runners into the streets with suitcases of Zimbabwean dollars to buy up American dollars and South African rand.
For periods after July 2008, no official inflation statistics were released. Prof. Steve H. Hanke overcame the problem by estimating inflation rates after July 2008 and publishing the Hanke Hyperinflation Index for Zimbabwe. Prof. Hanke's HHIZ measure indicated that inflation peaked at an annual rate of 89.7 sextillion percent (89,700,000,000,000,000,000,000%) in mid-November 2008. The peak monthly rate was 79.6 billion percent, equivalent to a 98% daily rate. At that rate, prices were doubling every 24.7 hours. Note that many of these figures should be considered mostly theoretical, since hyperinflation did not proceed at this rate over a whole year.
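The relationship between the quoted monthly rate, daily rate and doubling time follows from compound-growth arithmetic. The sketch below (an illustration, not part of Hanke's published methodology) assumes a 30-day month, which reproduces the quoted 98% daily rate; the 24.7-hour doubling time corresponds to an average month of about 30.4 days.

```python
import math

monthly_pct = 79.6e9                     # peak monthly rate: 79.6 billion percent
monthly_factor = 1 + monthly_pct / 100   # prices multiply by this each month

# Equivalent daily rate, assuming a 30-day month
daily_factor = monthly_factor ** (1 / 30)
daily_pct = (daily_factor - 1) * 100     # ~98%

# Hours for prices to double at that daily rate
doubling_hours = 24 * math.log(2) / math.log(daily_factor)   # ~24.4 hours

print(round(daily_pct, 1), round(doubling_hours, 1))
```

With a 30.44-day average month the same computation gives a doubling time of about 24.7 hours, matching the figure in the text.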
At its November 2008 peak, Zimbabwe's rate of inflation approached, but failed to surpass, Hungary's July 1946 world record. On 2 February 2009, the dollar was redenominated for the third time at the ratio of ZWR to 1 ZWL, only three weeks after the $100 trillion banknote was issued on 16 January, but hyperinflation waned by then as official inflation rates in USD were announced and foreign transactions were legalised, and on 12 April the Zimbabwe dollar was abandoned in favour of using only foreign currencies. The overall impact of hyperinflation was US$1 = ZWD.
Some countries experienced very high inflation, but did not reach hyperinflation, as defined as a "monthly" inflation rate of 50%.
During the Thirty Years' War, the Kreuzer fell sharply in value between 1620 and 1622: the Reichsthaler went from 124 Kreuzer at the end of 1619 to over 600 (regionally over 1,000) Kreuzer at the end of 1622. This corresponds to a monthly inflation rate of over 20.6% (regionally over 34.4%).
Between 1987 and 1995 the Iraqi dinar went from an official value of 0.306 dinars/USD (or US$3.26 per dinar, though the black market rate is thought to have been substantially lower) to 3,000 dinars/USD, due to the government printing tens of trillions of dinars starting from a base of only tens of billions. That equates to approximately 315% inflation per year, averaged over that eight-year period.
In spite of increased oil prices in the late 1970s (Mexico is a producer and exporter), Mexico defaulted on its external debt in 1982. As a result, the country suffered a severe case of capital flight and several years of acute inflation and peso devaluation, leading to an accumulated inflation rate of almost 27,000% between December 1975 and late 1988. On 1 January 1993, Mexico created a new currency, the "nuevo peso" ("new peso", or MXN), which chopped three zeros off the old peso (One new peso was equal to 1,000 old MXP pesos).
Between 1998 and 1999, Ecuador faced a period of economic instability resulting from a combined banking crisis, currency crisis, and sovereign debt crisis. Severe inflation and devaluation of the Ecuadorean sucre led President Jamil Mahuad to announce on January 9, 2000 that the US dollar would be adopted as the national currency.
Despite the government's efforts to curb inflation, the Sucre depreciated rapidly at the end of 1999, resulting in widespread informal use of U.S. dollars in the financial system. As a last resort to prevent hyperinflation, the government formally adopted the U.S. dollar in January 2000. The stability of the new currency was a necessary first step towards economic recovery, but the exchange rate was fixed at 25,000:1, which resulted in great losses of wealth.
In Roman Egypt, where the best documentation on pricing has survived, the price of a measure of wheat was 200 drachmae in 276 AD, and increased to more than 2,000,000 drachmae in 334 AD, roughly 1,000,000% inflation in a span of 58 years.
Although the price increased by a factor of 10,000 over 58 years, the annual rate of inflation was only 17.2% (1.4% monthly) compounded.
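The compounding arithmetic behind these figures can be reproduced directly. An illustrative sketch (the 10,000-fold increase and 58-year span are taken from the text; note that the 1.4% monthly figure quoted above corresponds to simply dividing the annual rate by 12, while true monthly compounding gives about 1.3%):

```python
# Prices rose 10,000-fold over 58 years (200 to more than 2,000,000 drachmae).
factor, years = 10_000, 58

# Compound annual rate: the yearly multiplier that yields the factor over 58 years
annual_pct = (factor ** (1 / years) - 1) * 100            # ~17.2% per year

# Equivalent compounded monthly rate
monthly_pct = ((1 + annual_pct / 100) ** (1 / 12) - 1) * 100  # ~1.3% per month

print(round(annual_pct, 1), round(monthly_pct, 2))
```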
Romania experienced high inflation in the 1990s. The highest denomination in 1990 was 100 lei and in 1998 was 100,000 lei. By 2000 it was 500,000 lei. In early 2005 it was 1,000,000 lei. In July 2005 the lei was replaced by the new leu at 10,000 old lei = 1 new leu. Inflation in 2005 was 9%. In July 2005 the highest denomination became 500 lei (= 5,000,000 old lei).
The Second Transnistrian ruble consisted solely of banknotes and suffered from high inflation, necessitating the issue of notes overstamped with higher denominations: 1-ruble (and sometimes 10-ruble) notes became 10,000 rubles, 5-ruble notes became 50,000, and 10-ruble notes became 100,000 rubles. In 2000, a new ruble was introduced at a rate of 1 new ruble = 1,000,000 old rubles.
Turkey has experienced high inflation rates since the end of 2017. It has been speculated that early elections were called in order to forestall the impending crisis. In October 2017, inflation was 11.9%, the highest rate since July 2008. The Turkish lira fell from 1.503 TRY per US dollar in 2010 to 5.5695 TRY per US dollar in August 2018.
During the Revolutionary War, when the Continental Congress authorized the printing of paper called continental currency, the monthly inflation rate reached a peak of 47% in November 1779 (Bernholz 2003: 48). These notes depreciated rapidly, giving rise to the expression "not worth a continental". One cause of the inflation was counterfeiting by the British, who ran a press on HMS "Phoenix", moored in New York Harbor. The counterfeits were advertised and sold almost for the price of the paper they were printed on.
During the U.S. Civil War between January 1861 and April 1865, the Confederate States decided to finance the war by printing money. The Lerner Commodity Price Index of leading cities in the eastern Confederacy states subsequently increased from 100 to 9,200 in that time. In the final months of the Civil War, the Confederate dollar was almost worthless. Similarly, the Union government inflated its greenbacks, with the monthly rate peaking at 40% in March 1864 (Bernholz 2003: 107).
Vietnam went through a period of chaos and high inflation in the late 1980s, with inflation peaking at 774% in 1988 after the country's "price-wage-currency" reform package, led by the then-Deputy Prime Minister, had failed. High inflation also occurred in the early stages of the socialist-oriented market economic reforms commonly referred to as the Đổi Mới.
Inflation rate is usually measured in percent per year. It can also be measured in percent per month or in price doubling time.
Given an annual inflation rate r_year (in percent), the equivalent monthly rate is
    r_month = ((1 + r_year/100)^(1/12) - 1) × 100
and, conversely, the annual rate implied by a monthly rate is
    r_year = ((1 + r_month/100)^12 - 1) × 100
The price doubling time, in years, is
    t_double = ln 2 / ln(1 + r_year/100)
and the time for prices to increase tenfold (gaining one more zero on the price tags) is
    t_tenfold = ln 10 / ln(1 + r_year/100)
Often, at redenominations, three zeroes are cut from the bills. If the annual inflation is, for example, 100%, it takes 3.32 years to produce one more zero on the price tags, or 3 × 3.32 = 9.96 years to produce three zeroes. One can thus expect a redenomination to take place about 9.96 years after the currency was introduced.
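The conversions described in this section can be collected into a few helper functions. A minimal illustrative sketch (the function names are ours):

```python
import math

def monthly_rate(annual_pct):
    """Equivalent compounded monthly inflation rate (percent) for an annual rate."""
    return ((1 + annual_pct / 100) ** (1 / 12) - 1) * 100

def doubling_time_years(annual_pct):
    """Years for prices to double at a given annual inflation rate."""
    return math.log(2) / math.log(1 + annual_pct / 100)

def years_per_zero(annual_pct):
    """Years for prices to gain one more zero, i.e. increase tenfold."""
    return math.log(10) / math.log(1 + annual_pct / 100)

# At 100% annual inflation: prices double every year, gain a zero every
# ~3.32 years, and the equivalent monthly rate is ~5.95%.
print(round(doubling_time_years(100), 2),
      round(years_per_zero(100), 2),
      round(monthly_rate(100), 2))
```

The 3.32-year figure used in the redenomination example above drops straight out of `years_per_zero(100)`.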
Herbert Hoover
Herbert Clark Hoover (August 10, 1874 – October 20, 1964) was an American engineer, businessman, and politician who served as the 31st president of the United States from 1929 to 1933. A member of the Republican Party, he held office during the onset of the Great Depression. Before serving as president, Hoover led the Commission for Relief in Belgium, served as the director of the U.S. Food Administration, and served as the 3rd U.S. Secretary of Commerce.
Hoover was born to a Quaker family in West Branch, Iowa. He took a position with a London-based mining company after graduating from Stanford University in 1895. After the outbreak of World War I, he became the head of the Commission for Relief in Belgium, an international relief organization that provided food to occupied Belgium. When the U.S. entered the war, President Woodrow Wilson appointed Hoover to lead the Food Administration, and Hoover became known as the country's "food czar". After the war, Hoover led the American Relief Administration, which provided food to the inhabitants of Central Europe and Eastern Europe. Hoover's war-time service made him a favorite of many progressives, and he unsuccessfully sought the Republican nomination in the 1920 presidential election.
After the 1920 election, newly elected Republican President Warren G. Harding appointed Hoover as Secretary of Commerce; Hoover continued to serve under President Calvin Coolidge after Harding died in 1923. Hoover was an unusually active and visible cabinet member, becoming known as "Secretary of Commerce and Under-Secretary of all other departments". He was influential in the development of radio and air travel and led the federal response to the Great Mississippi Flood of 1927. Hoover won the Republican nomination in the 1928 presidential election, and decisively defeated the Democratic candidate, Al Smith. The stock market crashed shortly after Hoover took office, and the Great Depression became the central issue of his presidency. Hoover pursued a variety of policies in an attempt to lift the economy, but opposed directly involving the federal government in relief efforts.
In the midst of the economic crisis, Hoover was decisively defeated by Democratic nominee Franklin D. Roosevelt in the 1932 presidential election. After leaving office, Hoover enjoyed one of the longest retirements of any former president, authoring numerous works in subsequent decades. Hoover became increasingly conservative during this time, and he strongly criticized Roosevelt's foreign policy and New Deal domestic agenda. In the 1940s and 1950s, Hoover's public reputation was partially rehabilitated as he served in various assignments for Presidents Harry S. Truman and Dwight D. Eisenhower, including as chairman of the Hoover Commission. Even so, Hoover is still widely regarded as an inadequate U.S. president, and most polls of historians and political scientists rank him in the bottom third overall.
Herbert Hoover was born on August 10, 1874 in West Branch, Iowa. His father, Jesse Hoover, was a blacksmith and farm implement store owner of German, Swiss, and English ancestry. Hoover's mother, Hulda Randall Minthorn, was raised in Norwich, Ontario, Canada, before moving to Iowa in 1859. Like most other citizens of West Branch, Jesse and Hulda were Quakers. Around age two, "Bertie", as he was called at the time, contracted a serious bout of croup and was momentarily thought to have died until resuscitated by his uncle, John Minthorn. As a young child he was often referred to by his father as "my little stick in the mud" because he repeatedly got trapped in the mud crossing the unpaved street. Herbert's family figured prominently in the town's public prayer life, due almost entirely to his mother Hulda's role in the church. As a child, Hoover consistently attended school, but he did little reading on his own aside from the Bible. Hoover's father, noted by the local paper for his "pleasant, sunshiny disposition", died in 1880 at the age of 34. Hoover's mother died in 1884, leaving Hoover, his older brother, Theodore, and his younger sister, May, as orphans.
After a brief stay with one of his grandmothers in Kingsley, Iowa, Hoover lived the next 18 months with his uncle Allen Hoover in West Branch at a nearby farm. In November 1885, Hoover was sent to Newberg, Oregon to live with his uncle John Minthorn, a Quaker physician and businessman whose own son had died the year before. The Minthorn household was considered cultured and educational, and imparted a strong work ethic. Much like West Branch, Newberg was a frontier town settled largely by Midwestern Quakers. Minthorn ensured that Hoover received an education, but Hoover disliked the many chores assigned to him and often resented Minthorn. One observer described Hoover as "an orphan [who] seemed to be neglected in many ways." Hoover attended Friends Pacific Academy (now George Fox University), but dropped out at the age of thirteen to become an office assistant for his uncle's real estate office (Oregon Land Company) in Salem, Oregon. Though he did not attend high school, Hoover learned bookkeeping, typing, and mathematics at a night school.
Hoover entered Stanford University in 1891, its inaugural year, despite failing all the entrance exams except mathematics. During his freshman year, he switched his major from mechanical engineering to geology after working for John Casper Branner, the chair of Stanford's geology department. Hoover was a mediocre student, and he spent much of his time working in various part-time jobs or participating in campus activities. Though he was initially shy among fellow students, Hoover won election as student treasurer and became known for his distaste for fraternities and sororities. He served as student manager of both the baseball and football teams, and helped organize the inaugural Big Game versus the University of California. During the summers before and after his senior year, Hoover interned under economic geologist Waldemar Lindgren of the United States Geological Survey; these experiences convinced Hoover to pursue a career as a mining geologist.
When Hoover graduated from Stanford in 1895, the country was in the midst of the Panic of 1893, and he initially struggled to find a job. He worked in various low-level mining jobs in the Sierra Nevada mountain range until he convinced prominent mining engineer Louis Janin to hire him. After working as a mine scout for a year, Hoover was hired by Bewick, Moreing & Co., a London-based company that operated gold mines in Western Australia. Hoover first went to Coolgardie, then the center of the Eastern Goldfields. Though Hoover received a $5,000 salary, conditions were harsh in the goldfields. Hoover described the Coolgardie and Murchison rangelands on the edge of the Great Victoria Desert as a land of "black flies, red dust and white heat."
Hoover traveled constantly across the Outback to evaluate and manage the company's mines. He convinced Bewick, Moreing to purchase the Sons of Gwalia gold mine, which proved to be one of the most successful mines in the region. Partly due to Hoover's efforts, the company eventually controlled approximately 50 percent of gold production in Western Australia. Hoover brought in many Italian immigrants to cut costs and counter the labour movement of the Australian miners. During his time with the mining company, Hoover became opposed to measures such as a minimum wage and workers' compensation, feeling that they were unfair to owners. Hoover's work impressed his employers, and in 1898 he was promoted to junior partner. An open feud developed between Hoover and his boss, Ernest Williams, but company leaders defused the situation by offering Hoover a compelling position in China.
Upon arriving in China, Hoover developed gold mines near Tianjin on behalf of Bewick, Moreing and the Chinese-owned Chinese Engineering and Mining Company. He became deeply interested in Chinese history, but quickly gave up on learning the language. He publicly warned that Chinese workers were inefficient and racially inferior. He made recommendations to improve the lot of the Chinese worker, seeking to end the practice of imposing long-term servitude contracts and to institute reforms for workers based on merit. The Boxer Rebellion broke out shortly after Hoover arrived in China, trapping the Hoovers and numerous other foreign nationals until a multi-national military force defeated Boxer forces in the Battle of Tientsin. Fearing the imminent collapse of the Chinese government, the director of the Chinese Engineering and Mining Company agreed to establish a new Sino-British venture with Bewick, Moreing. After Hoover and Bewick, Moreing established effective control over the new Chinese mining company, Hoover became the operating partner of Bewick, Moreing in late 1901.
As operating partner, Hoover continually traveled the world on behalf of Bewick, Moreing, visiting mines operated by the company on different continents. Beginning in December 1902, the company faced mounting legal and financial issues after one of the partners admitted to having fraudulently sold stock in a mine. More issues arose in 1904, after the British government formed two separate royal commissions to investigate Bewick, Moreing's labor practices and financial dealings in Western Australia. After the company lost a suit, Hoover began looking for a way to get out of the partnership, and he sold his shares in mid-1908.
After leaving Bewick, Moreing, Hoover worked as a London-based independent mining consultant and financier. Though he had risen to prominence as a geologist and mine operator, Hoover focused much of his attention on raising money, restructuring corporate organizations, and financing new ventures. He specialized in rejuvenating troubled mining operations, taking a share of the profits in exchange for his technical and financial expertise. Hoover thought of himself and his associates as "engineering doctors to sick concerns", and he earned a reputation as a "doctor of sick mines". He made investments on every continent and had offices in San Francisco; London; New York City; Paris; Petrograd; and Mandalay, British Burma. By 1914, Hoover was a very wealthy man, with an estimated personal fortune of $4 million.
He co-founded the Zinc Corporation to extract zinc near the Australian city of Broken Hill. The Zinc Corporation developed the froth flotation process to extract zinc from lead-silver ore and operated the world's first selective or differential flotation plant. Hoover worked with the Burma Corporation, a British firm that produced silver, lead, and zinc in large quantities at the Namtu Bawdwin Mine. He also helped increase copper production in Kyshtym, Russia, through the use of pyritic smelting. He also agreed to manage a separate mine in the Altai Mountains that, according to Hoover, "developed probably the greatest and richest single body of ore known in the world."
In his spare time, Hoover wrote. His lectures at Columbia and Stanford universities were published in 1909 as "Principles of Mining", which became a standard textbook. The book reflects his move towards progressive ideals, as Hoover came to endorse eight-hour workdays and organized labor. Hoover became deeply interested in the history of science, and he was especially drawn to the "De re metallica", an influential 16th-century work on mining and metallurgy. In 1912, Hoover and his wife published the first English translation of "De re metallica". Hoover also joined the board of trustees at Stanford, and led a successful campaign to appoint John Branner as the university's president.
During his senior year at Stanford, Hoover became smitten with a classmate named Lou Henry, though his financial situation precluded marriage at that time. The daughter of a banker from Monterey, California, Lou Henry decided to study geology at Stanford after attending a lecture delivered by John Branner. Immediately after earning a promotion in 1898, Hoover cabled Lou Henry, asking her to marry him. After she cabled back her acceptance of the proposal, Hoover briefly returned to the United States for their wedding. They would remain married until Lou Henry's death in 1944. Though his Quaker upbringing strongly influenced his career, Hoover rarely attended Quaker meetings during his adult life. Hoover and his wife had two children: Herbert Hoover Jr. (born in 1903) and Allan Henry Hoover (born in 1907). The Hoover family began living in London in 1902, though they frequently traveled as part of Hoover's career. After 1916, the Hoovers began living in the United States, maintaining homes in Palo Alto, California and Washington, D.C.
World War I broke out in July 1914, pitting the Allied Powers (France, Russia, Britain, and other countries) against the Central Powers (Germany, Austria-Hungary, and other countries). Hoover and other London-based American businessmen established a committee to organize the return of the roughly 100,000 Americans stranded in Europe. Hoover was appointed as the committee's chair and, with the assent of Congress and the executive branch, took charge of the distribution of relief to Americans in Europe. Hoover later stated, "I did not realize it at the moment, but on August 3, 1914, my career was over forever. I was on the slippery road of public life." By early October 1914, Hoover's organization had distributed relief to at least 40,000 Americans.
The German invasion of Belgium in August 1914 set off a food crisis in Belgium, which relied heavily on food imports. The Germans refused to take responsibility for feeding Belgian citizens in captured territory, and the British refused to lift their blockade of German-occupied Belgium unless the U.S. government supervised Belgian food imports as a neutral party in the war. With the cooperation of the Wilson administration and the CNSA, a Belgian relief organization, Hoover established the Commission for Relief in Belgium (CRB). The CRB obtained and imported millions of tons of foodstuffs for the CNSA to distribute, and helped ensure that the German army did not appropriate the food. Private donations and government grants supplied the majority of its $11-million-a-month budget, and the CRB became a veritable independent republic of relief, with its own flag, navy, factories, mills, and railroads. A British official described the CRB as a "piratical state organized for benevolence."
Hoover worked 14-hour days from London, administering the distribution of over two million tons of food to nine million war victims. In an early form of shuttle diplomacy, he crossed the North Sea forty times to meet with German authorities and persuade them to allow food shipments. He also convinced British Chancellor of the Exchequer David Lloyd George to allow individuals to send money to the people of Belgium, thereby lessening the workload of the CRB. At the request of the French government, the CRB began delivering supplies to the people of Northern France in 1915. American diplomat Walter Page described Hoover as "probably the only man living who has privately (i.e., without holding office) negotiated understandings with the British, French, German, Dutch, and Belgian governments."
The United States declared war upon Germany in April 1917 after Germany engaged in unrestricted submarine warfare against American vessels in British waters. With the U.S. mobilizing for war, President Woodrow Wilson appointed Hoover to head the U.S. Food Administration, which was charged with ensuring the nation's food needs during the war. Hoover had hoped to join the administration in some capacity since at least 1916, and he obtained the position after lobbying several members of Congress and Wilson's confidant, Edward M. House. Earning the appellation of "food czar", Hoover recruited a volunteer force of hundreds of thousands of women and deployed propaganda in movie theaters, schools, and churches. He carefully selected men to assist in the agency leadership—Alonzo Taylor (technical abilities), Robert Taft (political associations), Gifford Pinchot (agricultural influence), and Julius Barnes (business acumen).
World War I had created a global food crisis that dramatically increased food prices and caused food riots and starvation in the countries at war. Hoover's chief goal as food czar was to provide supplies to the Allied Powers, but he also sought to stabilize domestic prices and to prevent domestic shortages. Under the broad powers granted by the Food and Fuel Control Act, the Food Administration supervised food production throughout the United States, and the administration made use of its authority to buy, import, store, and sell food. Determined to avoid rationing, Hoover established set days for people to avoid eating specified foods and save them for soldiers' rations: meatless Mondays, wheatless Wednesdays, and "when in doubt, eat potatoes". These policies were dubbed "Hooverizing" by government publicists, in spite of Hoover's continual orders that publicity should not mention him by name. The Food Administration shipped 23 million metric tons of food to the Allied Powers, preventing their collapse and earning Hoover great acclaim. As head of the Food Administration, Hoover gained a following in the United States, especially among progressives who saw in Hoover an expert administrator and symbol of efficiency.
World War I came to an end in November 1918, but Europe continued to face a critical food situation; Hoover estimated that as many as 400 million people faced the possibility of starvation. The United States Food Administration became the American Relief Administration (ARA), and Hoover was charged with providing food to Central and Eastern Europe. In addition to providing relief, the ARA rebuilt infrastructure in an effort to rejuvenate the economy of Europe. Throughout the Paris Peace Conference, Hoover served as a close adviser to President Wilson, and he largely shared Wilson's goals of establishing the League of Nations, settling borders on the basis of self-determination, and refraining from inflicting a harsh punishment on the defeated Central Powers. The following year, famed British economist John Maynard Keynes wrote in The Economic Consequences of the Peace that if Hoover's realism, "knowledge, magnanimity and disinterestedness" had found wider play in the councils of Paris, the world would have had "the Good Peace." After U.S. government funding for the ARA expired in mid-1919, Hoover transformed the ARA into a private organization, raising millions of dollars from private donors. He also established the European Children's Fund, which provided relief to fifteen million children across fourteen countries.
Despite the opposition of Senator Henry Cabot Lodge and other Republicans, Hoover provided aid to the defeated German nation after the war, as well as relief to famine-stricken Bolshevik-controlled areas of Russia. Hoover condemned Bolshevism, but warned President Wilson against an intervention in Russia, as he viewed the White Russian forces as little better than the Bolsheviks and feared the possibility of a protracted U.S. involvement. The Russian famine of 1921–22 claimed six million people, but the intervention of the ARA likely saved millions of lives. When asked if he was not helping Bolshevism by providing relief, Hoover stated, "twenty million people are starving. Whatever their politics, they shall be fed!" Reflecting the gratitude of many Europeans, in July 1922, Soviet author Maxim Gorky told Hoover that "your help will enter history as a unique, gigantic achievement, worthy of the greatest glory, which will long remain in the memory of millions of Russians whom you have saved from death."
In 1919, Hoover established the Hoover War Collection at Stanford University. He donated all the files of the Commission for Relief in Belgium, the U.S. Food Administration, and the American Relief Administration, and pledged $50,000 as an endowment. Scholars were sent to Europe to collect pamphlets, society publications, government documents, newspapers, posters, proclamations, and other ephemeral materials related to the war and the revolutions that followed it. The collection was renamed the Hoover War Library in 1922 and is now known as the Hoover Institution. During the post-war period, Hoover also served as the president of the Federated American Engineering Societies.
Hoover had been little known among the American public before 1914, but his service in the Wilson administration established him as a contender in the 1920 presidential election. Hoover's wartime push for higher taxes, criticism of Attorney General A. Mitchell Palmer's actions during the First Red Scare, and his advocacy for measures such as the minimum wage, forty-eight-hour workweek, and elimination of child labor made him appealing to progressives of both parties. Despite his service in the Democratic administration of Woodrow Wilson, Hoover had never been closely affiliated with either the Democrats or the Republicans. He initially sought to avoid committing to any party in the 1920 election, hoping that either of the two major parties would draft him for president at their respective national convention. In March 1920, he changed his strategy and declared himself to be a Republican; he was motivated in large part by the belief that the Democratic candidate would have little chance of winning the 1920 presidential election. Despite his national renown, Hoover's service in the Wilson administration had alienated farmers and the conservative Old Guard of the GOP, and his presidential candidacy fizzled out after his defeat in the California primary by favorite son Hiram Johnson. At the 1920 Republican National Convention, Warren G. Harding emerged as a compromise candidate after the convention became deadlocked between supporters of Johnson, Leonard Wood, and Frank Orren Lowden. Hoover backed Harding's successful campaign in the general election, and he began laying the groundwork for a future presidential run by building up a base of strong supporters in the Republican Party.
After his election as president in 1920, Harding rewarded Hoover for his support, offering to appoint him as either Secretary of the Interior or Secretary of Commerce. Secretary of Commerce was considered a minor Cabinet post, with limited and vaguely defined responsibilities, but Hoover decided to accept the position. Hoover's progressive stances, continuing support for the League of Nations, and recent conversion to the Republican Party aroused opposition to his appointment from many Senate Republicans. To overcome this opposition, Harding paired Hoover's nomination with that of conservative favorite Andrew Mellon as Secretary of the Treasury, and the nominations of both Hoover and Mellon were confirmed by the Senate. Hoover would serve as Secretary of Commerce from 1921 to 1929, serving under Harding and, after Harding's death in 1923, President Calvin Coolidge. While some of the most prominent members of the Harding administration, including Attorney General Harry M. Daugherty and Secretary of the Interior Albert B. Fall, were implicated in major scandals, Hoover emerged largely unscathed from investigations into the Harding administration.
Hoover envisioned the Commerce Department as the hub of the nation's growth and stability. His experience mobilizing the war-time economy convinced him that the federal government could promote efficiency by eliminating waste, increasing production, encouraging the adoption of data-based practices, investing in infrastructure, and conserving natural resources. Contemporaries described Hoover's approach as a "third alternative" between "unrestrained capitalism" and socialism, which was becoming increasingly popular in Europe. Hoover sought to foster a balance among labor, capital, and the government, and for this he has been variously labeled a corporatist or an associationalist.
Hoover demanded, and received, authority to coordinate economic affairs throughout the government. He created many sub-departments and committees, overseeing and regulating everything from manufacturing statistics to air travel. In some instances he "seized" control of responsibilities from other Cabinet departments when he deemed that they were not carrying out their responsibilities well; some began referring to him as the "Secretary of Commerce and Under-Secretary of all other departments." In response to the Depression of 1920–21, he convinced Harding to assemble a presidential commission on unemployment, which encouraged local governments to engage in countercyclical infrastructure spending. He endorsed much of Mellon's tax reduction program, but favored a more progressive tax system and opposed the treasury secretary's efforts to eliminate the estate tax.
Between 1923 and 1929, the number of families with radios grew from 300,000 to 10 million, and Hoover's tenure as Secretary of Commerce heavily influenced radio use in the United States. In the early and mid-1920s, Hoover's radio conferences played a key role in the organization, development, and regulation of radio broadcasting. Hoover also helped pass the Radio Act of 1927, which allowed the government to intervene and abolish radio stations that were deemed "non-useful" to the public. Hoover's attempts at regulating radio were not supported by all congressmen, and he received much opposition from the Senate and from radio station owners.
Hoover was also influential in the early development of air travel, and he sought to create a thriving private industry boosted by indirect government subsidies. He encouraged the development of emergency landing fields, required all runways to be equipped with lights and radio beams, and encouraged farmers to make use of planes for crop dusting. He also established the federal government's power to inspect planes and license pilots, setting a precedent for the later Federal Aviation Administration.
As Commerce Secretary, Hoover hosted national conferences on street traffic collectively known as the National Conference on Street and Highway Safety. Hoover's chief objective was to address the growing casualty toll of traffic accidents, but the scope of the conferences grew and soon embraced motor vehicle standards, rules of the road, and urban traffic control. He left the invited interest groups to negotiate agreements among themselves, which were then presented for adoption by states and localities. Because automotive trade associations were the best organized, many of the positions taken by the conferences reflected their interests. The conferences issued a model Uniform Vehicle Code for adoption by the states, and a Model Municipal Traffic Ordinance for adoption by cities. Both were widely influential, promoting greater uniformity between jurisdictions and tending to promote the automobile's priority in city streets.
With the goal of encouraging wise business investments, Hoover made the Commerce Department a clearinghouse of information. He recruited numerous academics from various fields and tasked them with publishing reports on different aspects of the economy, including steel production and films. To eliminate waste, he encouraged standardization of products like automobile tires and baby bottle nipples. Other efforts at eliminating waste included reducing labor losses from trade disputes and seasonal fluctuations, reducing industrial losses from accident and injury, and reducing the amount of crude oil spilled during extraction and shipping. He promoted international trade by opening overseas offices to advise businessmen. Hoover was especially eager to promote Hollywood films overseas. His "Own Your Own Home" campaign was a collaboration with groups such as the Better Houses in America movement, the Architects' Small House Service Bureau, and the Home Modernizing Bureau to promote ownership of single-family dwellings. He worked with bankers and the savings and loan industry to promote the new long-term home mortgage, which dramatically stimulated home construction. Other accomplishments included winning the agreement of U.S. Steel to adopt an eight-hour workday, and the fostering of the Colorado River Compact, a water rights compact among Southwestern states.
The Great Mississippi Flood of 1927 broke the banks and levees of the lower Mississippi River in early 1927, resulting in the flooding of millions of acres and leaving 1.5 million people displaced from their homes. Although disaster response did not fall under the duties of the Commerce Department, the governors of six states along the Mississippi River specifically asked President Coolidge to appoint Hoover to coordinate the response to the flood. Believing that disaster response was not the domain of the federal government, Coolidge initially refused to become involved, but he eventually acceded to political pressure and appointed Hoover to chair a special committee to help the region. Hoover established over one hundred tent cities and a fleet of more than six hundred vessels, and raised $17 million. In large part due to his leadership during the flood crisis, by 1928, Hoover had begun to overshadow President Coolidge himself. Though Hoover received wide acclaim for his role in the crisis, he ordered the suppression of reports of mistreatment of African Americans in refugee camps. He did so with the cooperation of African-American leader Robert Russa Moton, who was promised unprecedented influence once Hoover became president.
Hoover quietly built up support for a future presidential bid throughout the 1920s, but he carefully avoided alienating Coolidge, who was eligible to run for another term in the 1928 presidential election. Along with the rest of the nation, he was surprised when Coolidge announced in August 1927 that he would not seek another term. With the impending retirement of Coolidge, Hoover immediately emerged as the front-runner for the 1928 Republican nomination, and he quickly put together a strong campaign team led by Hubert Work, Will H. Hays, and Reed Smoot. Coolidge was unwilling to anoint Hoover as his successor; on one occasion he remarked that, "for six years that man has given me unsolicited advice—all of it bad." Despite his lukewarm feelings towards Hoover, Coolidge had no desire to split the party by publicly opposing the popular Commerce Secretary's candidacy.
Many wary Republican leaders cast about for an alternative candidate, such as Treasury Secretary Andrew Mellon or former Secretary of State Charles Evans Hughes. However, Hughes and Mellon declined to run, and other potential contenders like Frank Orren Lowden and Vice President Charles G. Dawes failed to garner widespread support. Hoover won the presidential nomination on the first ballot of the 1928 Republican National Convention. Convention delegates considered re-nominating Vice President Charles Dawes to be Hoover's running mate, but Coolidge, who hated Dawes, remarked that this would be "a personal affront" to him. The convention instead selected Senator Charles Curtis of Kansas. Hoover accepted the nomination at Stanford Stadium, telling a huge crowd that he would continue the policies of the Harding and Coolidge administrations. The Democrats nominated New York governor Al Smith, who became the first Catholic major party nominee for president.
Hoover centered his campaign around the Republican record of peace and prosperity, as well as his own reputation as a successful engineer and public official. Averse to giving political speeches, Hoover largely stayed out of the fray and left the campaigning to Curtis and other Republicans. Smith was more charismatic and gregarious than Hoover, but his campaign was damaged by anti-Catholicism and his overt opposition to Prohibition. Hoover had never been a strong proponent of Prohibition, but he accepted the Republican Party's plank in favor of it and issued an ambivalent statement calling Prohibition "a great social and economic experiment, noble in motive and far-reaching in purpose." In the South, Hoover and the national party pursued a "lily-white" strategy, removing black Republicans from leadership positions in an attempt to curry favor with white Southerners.
Hoover maintained polling leads throughout the 1928 campaign, and he decisively defeated Smith on election day, taking 58 percent of the popular vote and 444 of the 531 electoral votes. Historians agree that Hoover's national reputation and the booming economy, combined with deep splits in the Democratic Party over religion and Prohibition, guaranteed his landslide victory. Hoover's appeal to Southern white voters succeeded in cracking the "Solid South", and he won five Southern states. Hoover's victory was positively received by newspapers; one wrote that Hoover would "drive so forcefully at the tasks now before the nation that the end of his eight years as president will find us looking back on an era of prodigious achievement."
Hoover's detractors wondered why he did nothing to reapportion Congress after the 1920 United States Census, which recorded a marked increase in urban and immigrant populations. The 1920 census was the first and only decennial census whose results were not used to reapportion Congress, an omission that shaped the makeup of the Electoral College and thus affected the 1928 presidential election.
Hoover saw the presidency as a vehicle for improving the conditions of all Americans by encouraging public-private cooperation—what he termed "volunteerism". He tended to oppose governmental coercion or intervention, as he thought they infringed on American ideals of individualism and self-reliance. The first major bill that he signed, the Agricultural Marketing Act of 1929, established the Federal Farm Board in order to stabilize farm prices. Hoover made extensive use of commissions to study issues and propose solutions, and many of those commissions were sponsored by private donors rather than by the government. One of the commissions started by Hoover, the Research Committee on Social Trends, was tasked with surveying the entirety of American society. He appointed a Cabinet consisting largely of wealthy, business-oriented conservatives, including Secretary of the Treasury Andrew Mellon. Lou Henry Hoover was an activist First Lady. She typified the new woman of the post-World War I era: intelligent, robust, and aware of multiple female possibilities.
On taking office, Hoover said that "given the chance to go forward with the policies of the last eight years, we shall soon with the help of God, be in sight of the day when poverty will be banished from this nation." Having seen the fruits of prosperity brought by technological progress, many shared Hoover's optimism, and the already bullish stock market climbed even higher on Hoover's accession. This optimism concealed several threats to sustained U.S. economic growth, including a persistent farm crisis, a saturation of consumer goods like automobiles, and growing income inequality. Most dangerous of all to the economy was excessive speculation that had raised stock prices far beyond their value. Some regulators and bankers had warned Coolidge and Hoover that a failure to curb speculation would lead to "one of the greatest financial catastrophes that this country has ever seen," but both presidents were reluctant to become involved with the workings of the Federal Reserve System, which regulated banks.
In late October 1929, the Stock Market Crash of 1929 occurred, and the worldwide economy began to spiral downward into the Great Depression. The causes of the Great Depression remain a matter of debate, but Hoover viewed a lack of confidence in the financial system as the fundamental economic problem facing the nation. He sought to avoid direct federal intervention, believing that the best way to bolster the economy was through the strengthening of businesses such as banks and railroads. He also feared that allowing individuals on the "dole" would permanently weaken the country. Instead, Hoover strongly believed that local governments and private giving should address the needs of individuals.
Though he attempted to put a positive spin on Black Tuesday, Hoover moved quickly to address the stock market collapse. In the days following Black Tuesday, Hoover gathered business and labor leaders, asking them to avoid wage cuts and work stoppages while the country faced what he believed would be a short recession similar to the Depression of 1920–21. Hoover also convinced railroads and public utilities to increase spending on construction and maintenance, and the Federal Reserve announced that it would cut interest rates. In early 1930, Hoover acquired from Congress an additional $100 million to continue the Federal Farm Board lending and purchasing policies. These actions were collectively designed to prevent a cycle of deflation and provide a fiscal stimulus. At the same time, Hoover opposed congressional proposals to provide federal relief to the unemployed, as he believed that such programs were the responsibility of state and local governments and philanthropic organizations.
Hoover had taken office hoping to raise agricultural tariffs in order to help farmers reeling from the farm crisis of the 1920s, but his attempt to raise agricultural tariffs became connected with a bill that broadly raised tariffs. Hoover refused to become closely involved in the congressional debate over the tariff, and Congress produced a tariff bill that raised rates for many goods. Despite the widespread unpopularity of the bill, Hoover felt that he could not reject the main legislative accomplishment of the Republican-controlled 71st Congress. Over the objection of many economists, Hoover signed the Smoot–Hawley Tariff Act into law in June 1930. Canada, France, and other nations retaliated by raising tariffs, resulting in a contraction of international trade and a worsening of the economy. Progressive Republicans such as Senator William E. Borah of Idaho were outraged when Hoover signed the tariff act, and Hoover's relations with that wing of the party never recovered.
By the end of 1930, the national unemployment rate had reached 11.9 percent, but it was not yet clear to most Americans that the economic downturn would be worse than the Depression of 1920–21. A series of bank failures in late 1930 heralded a larger collapse of the economy in 1931. While other countries left the gold standard, Hoover refused to abandon it; he derided any other monetary system as "collectivism." Hoover viewed the weak European economy as a major cause of economic troubles in the United States. In response to the collapse of the German economy, Hoover marshaled congressional support behind a one-year moratorium on European war debts. The Hoover Moratorium was warmly received in Europe and the United States, but Germany remained on the brink of defaulting on its loans. As the worldwide economy worsened, democratic governments fell; in Germany, Nazi Party leader Adolf Hitler assumed power.
By mid-1931, the unemployment rate had reached 15 percent, giving rise to growing fears that the country was experiencing a depression far worse than recent economic downturns. A reserved man with a fear of public speaking, Hoover allowed his opponents in the Democratic Party to define him as cold, incompetent, reactionary, and out-of-touch. Hoover's opponents developed defamatory epithets to discredit him, such as "Hooverville" (the shanty towns and homeless encampments), "Hoover leather" (cardboard used to cover holes in the soles of shoes), and "Hoover blanket" (old newspaper used to cover oneself from the cold). While Hoover continued to resist direct federal relief efforts, Governor Franklin D. Roosevelt of New York launched the Temporary Emergency Relief Administration to provide aid to the unemployed. Democrats positioned the program as a kinder alternative to Hoover's alleged apathy towards the unemployed.
The economy continued to worsen, with unemployment rates nearing 23 percent in early 1932, and Hoover finally heeded calls for more direct federal intervention. In January 1932, he convinced Congress to authorize the establishment of the Reconstruction Finance Corporation (RFC), which would provide government-secured loans to financial institutions, railroads, and local governments. The RFC saved numerous businesses from failure, but it failed to stimulate commercial lending as much as Hoover had hoped, partly because it was run by conservative bankers unwilling to make riskier loans. The same month the RFC was established, Hoover signed the Federal Home Loan Bank Act, establishing 12 district banks overseen by a Federal Home Loan Bank Board in a manner similar to the Federal Reserve System. He also helped arrange passage of the Glass–Steagall Act of 1932, emergency banking legislation designed to expand banking credit by expanding the collateral on which Federal Reserve banks were authorized to lend. As these measures failed to stem the economic crisis, Hoover signed the Emergency Relief and Construction Act, a $2 billion public works bill, in July 1932.
After a decade of budget surpluses, the federal government experienced a budget deficit in 1931. Though some economists, like William Trufant Foster, favored deficit spending to address the Great Depression, most politicians and economists believed in the necessity of keeping a balanced budget. In late 1931, Hoover proposed a tax plan to increase tax revenue by 30 percent, resulting in the passage of the Revenue Act of 1932. The act increased taxes across the board, rolling back much of the tax cut reduction program Mellon had presided over during the 1920s. Top earners were taxed at 63 percent on their net income, the highest rate since the early 1920s. The act also doubled the top estate tax rate, cut personal income tax exemptions, eliminated the corporate income tax exemption, and raised corporate tax rates. Despite the passage of the Revenue Act, the federal government continued to run a budget deficit.
Hoover seldom mentioned civil rights while he was president. He believed that African Americans and other races could improve themselves with education and individual initiative. Hoover appointed more African Americans to federal positions than Harding and Coolidge had combined, but many African-American leaders condemned various aspects of the Hoover administration, including Hoover's unwillingness to push for a federal anti-lynching law. Hoover also continued to pursue the lily-white strategy, removing African Americans from positions of leadership in the Republican Party in an attempt to end the Democratic Party's dominance in the South. Though Robert Moton and some other black leaders accepted the lily-white strategy as a temporary measure, most African-American leaders were outraged. Hoover further alienated black leaders by nominating conservative Southern judge John J. Parker to the Supreme Court; Parker's nomination ultimately failed in the Senate due to opposition from the NAACP and organized labor. Many black voters switched to the Democratic Party in the 1932 election, and African Americans would later become an important part of Franklin Roosevelt's New Deal coalition.
As part of his efforts to limit unemployment, Hoover sought to cut immigration to the United States, and in 1930 he promulgated an executive order requiring individuals to have employment before migrating to the United States. With the goal of opening up more jobs for U.S. citizens, Secretary of Labor William N. Doak began a campaign to prosecute illegal immigrants in the United States. Though Doak did not seek to deport one specific group of immigrants, his campaign most strongly affected Mexican Americans, especially Mexican Americans living in Southern California. Many of the deportations were overseen by state and local authorities who acted on the encouragement of Doak and the Department of Labor. During the 1930s, approximately one million Mexican Americans were forcibly "repatriated" to Mexico; approximately sixty percent of those deported were birthright citizens. According to legal professor Kevin R. Johnson, the repatriation campaign meets the modern legal standards of ethnic cleansing, as it involved the forced removal of a racial minority by government actors.
Despite his public support for Prohibition, Hoover was never a teetotaler. While serving as Secretary of Commerce, he would often stop at the Belgian embassy for cocktails.
On taking office, Hoover urged Americans to obey the Eighteenth Amendment and the Volstead Act, which had established Prohibition across the United States. To make public policy recommendations regarding Prohibition, he created the Wickersham Commission. Hoover had hoped that the commission's public report would buttress his stance in favor of Prohibition, but the report criticized the enforcement of the Volstead Act and noted the growing public opposition to Prohibition. After the Wickersham Report was published in 1931, Hoover rejected the advice of some of his closest allies and refused to endorse any revision of the Volstead Act or the Eighteenth Amendment, as he feared doing so would undermine his support among Prohibition advocates. As public opinion increasingly turned against Prohibition, more and more people flouted the law, and a grassroots movement began working in earnest for Prohibition's repeal. In January 1933, a constitutional amendment repealing the Eighteenth Amendment was approved by Congress and submitted to the states for ratification. By December 1933, it had been ratified by the requisite number of states to become the Twenty-first Amendment.
According to Leuchtenburg, Hoover was "the last American president to take office with no conspicuous need to pay attention to the rest of the world". Nevertheless, during Hoover's term, the world order established in the immediate aftermath of World War I began to crumble. As president, Hoover largely made good on his pledge made prior to assuming office not to interfere in Latin America's internal affairs. In 1930, he released the Clark Memorandum, a rejection of the Roosevelt Corollary and a move towards non-interventionism in Latin America. Hoover did not completely refrain from the use of the military in Latin American affairs; he thrice threatened intervention in the Dominican Republic, and he sent warships to El Salvador to support the government against a left-wing revolution. Notwithstanding those actions, he wound down the Banana Wars, ending the occupation of Nicaragua and nearly bringing an end to the occupation of Haiti.
Hoover placed a priority on disarmament, which he hoped would allow the United States to shift money from the military to domestic needs. Hoover and Secretary of State Henry L. Stimson focused on extending the 1922 Washington Naval Treaty, which sought to prevent a naval arms race. As a result of Hoover's efforts, the United States and other major naval powers signed the 1930 London Naval Treaty. The treaty represented the first time that the naval powers had agreed to cap their tonnage of auxiliary vessels, as previous agreements had only affected capital ships.
At the 1932 World Disarmament Conference, Hoover urged further cutbacks in armaments and the outlawing of tanks and bombers, but his proposals were not adopted.
In 1931, Japan invaded Manchuria, defeating the Republic of China's military forces and establishing Manchukuo, a puppet state. The Hoover administration deplored the invasion, but also sought to avoid antagonizing the Japanese, fearing that taking too strong a stand would weaken the moderate forces in the Japanese government and alienate a potential ally against the Soviet Union, which he saw as a much greater threat. In response to the Japanese invasion, Hoover and Secretary of State Stimson outlined the Stimson Doctrine, which held that the United States would not recognize territories gained by force.
Thousands of World War I veterans and their families demonstrated and camped out in Washington, DC, during June 1932, calling for immediate payment of bonuses that had been promised by the World War Adjusted Compensation Act in 1924; the terms of the act called for payment of the bonuses in 1945. Although offered money by Congress to return home, some members of the "Bonus Army" remained. Washington police attempted to disperse the demonstrators, but they were outnumbered and unsuccessful. Police fired shots in a futile attempt to restore order; two protesters were killed and many officers were injured. Hoover sent U.S. Army forces led by General Douglas MacArthur to the protests. MacArthur, believing he was fighting a Communist revolution, chose to clear out the camp with military force. Though Hoover had not ordered MacArthur's clearing out of the protesters, he endorsed it after the fact. The incident proved embarrassing for the Hoover administration, and destroyed any remaining chance he had of winning re-election.
By mid-1931 few observers thought that Hoover had much hope of winning a second term in the midst of the ongoing economic crisis. Nonetheless, Hoover faced little opposition for re-nomination at the 1932 Republican National Convention, as Coolidge and other prominent Republicans all passed on the opportunity to challenge Hoover. Franklin D. Roosevelt won the presidential nomination on the fourth ballot of the 1932 Democratic National Convention, defeating the 1928 Democratic nominee, Al Smith. The Democrats attacked Hoover as the cause of the Great Depression, and for being indifferent to the suffering of millions. As Governor of New York, Roosevelt had called on the New York legislature to provide aid for the needy, establishing Roosevelt's reputation for being more favorable toward government interventionism during the economic crisis. The Democratic Party, including Al Smith and other national leaders, coalesced behind Roosevelt, while progressive Republicans like George Norris and Robert La Follette Jr. deserted Hoover.
Hoover's detractors wondered why he did nothing to push for congressional reapportionment after the 1920 United States Census, which recorded an increase in urban and immigrant populations. The 1920 Census was the first and only decennial census whose results were not used to reapportion Congress, a failure that ultimately shaped the 1928 Electoral College and influenced that year's presidential election.
Hoover originally planned to make only one or two major speeches, and to leave the rest of the campaigning to proxies, as sitting presidents had traditionally done. However, encouraged by Republican pleas and outraged by Democratic claims, Hoover entered the public fray. In his nine major radio addresses Hoover primarily defended his administration and his philosophy of government, urging voters to hold to the "foundations of experience" and reject the notion that government interventionism could save the country from the Depression. In his campaign trips around the country, Hoover was faced with perhaps the most hostile crowds ever seen by a sitting president. Besides having his train and motorcades pelted with eggs and rotten fruit, he was often heckled while speaking, and on several occasions, the Secret Service halted attempts to kill Hoover by disgruntled citizens, including capturing one man nearing Hoover carrying sticks of dynamite, and another already having removed several spikes from the rails in front of the president's train.
Hoover's attempts to vindicate his administration fell on deaf ears, as much of the public blamed his administration for the depression. In the electoral vote, Hoover lost 59–472, carrying six states. Hoover won just 39.7 percent of the popular vote, a reduction of 26 percentage points from his result in the 1928 election. Roosevelt's performance in the popular vote made him the first Democratic presidential nominee to win the presidency with a majority of the popular vote since Franklin Pierce in 1852.
Hoover departed from Washington in March 1933, bitter at his election loss and continuing unpopularity. As Coolidge, Harding, Wilson, and Taft had all died during the 1920s or early 1930s, Hoover was the sole living ex-president from 1933 to 1953. Hoover and his wife lived in Palo Alto until her death in 1944, at which point Hoover began to live permanently at the Waldorf Astoria hotel in New York City. During the 1930s, Hoover increasingly self-identified as a conservative. He closely followed national events after leaving public office, becoming a constant critic of Franklin Roosevelt. In response to continued attacks on his character and presidency, Hoover wrote more than two dozen books, including "The Challenge to Liberty" (1934), which harshly criticized Roosevelt's New Deal. Hoover described the New Deal's National Recovery Administration and Agricultural Adjustment Administration as "fascistic", and he called the 1933 Banking Act a "move to gigantic socialism."
Only 58 when he left office, Hoover held out hope for another term as president throughout the 1930s. At the 1936 Republican National Convention, Hoover's speech attacking the New Deal was well received, but the nomination went to Kansas Governor Alf Landon. In the general election, Hoover delivered numerous well-publicized speeches on behalf of Landon, but Landon was defeated by Roosevelt. Though Hoover was eager to oppose Roosevelt at every turn, Senator Arthur Vandenberg and other Republicans urged the still-unpopular Hoover to remain out of the fray during the debate over Roosevelt's proposed Judiciary Reorganization Bill of 1937. At the 1940 Republican National Convention, Hoover again hoped for the presidential nomination, but it went to the internationalist Wendell Willkie, who lost to Roosevelt in the general election.
During a 1938 trip to Europe, Hoover met with Adolf Hitler and stayed at Hermann Göring's hunting lodge. He expressed dismay at the persecution of Jews in Germany and believed that Hitler was mad, but that he did not present a threat to the U.S. Instead, Hoover believed that Roosevelt posed the biggest threat to peace, holding that Roosevelt's policies provoked Japan and discouraged France and the United Kingdom from reaching an "accommodation" with Germany. After the September 1939 invasion of Poland by Germany, Hoover opposed U.S. involvement in World War II, including the Lend-Lease policy. He rejected Roosevelt's offers to help coordinate relief in Europe, but, with the help of old friends from the CRB, helped establish the Commission for Polish Relief. After the beginning of the occupation of Belgium in 1940, Hoover provided aid for Belgian civilians, though this aid was described as unnecessary by German broadcasts.
During a radio broadcast on June 29, 1941, one week after the Nazi invasion of the Soviet Union, Hoover disparaged any "tacit alliance" between the U.S. and the USSR, stating, "if we join the war and Stalin wins, we have aided him to impose more communism on Europe and the world... War alongside Stalin to impose freedom is more than a travesty. It is a tragedy." Much to his own frustration, Hoover was not called upon to serve after the United States entered World War II due to his differences with Roosevelt and his continuing unpopularity. He did not pursue the presidential nomination at the 1944 Republican National Convention, and, at the request of Republican nominee Thomas E. Dewey, refrained from campaigning during the general election.
Following World War II, Hoover befriended President Harry S. Truman despite their ideological differences. Because of Hoover's experience with Germany at the end of World War I, in 1946 Truman selected the former president to tour Germany to ascertain the food needs of the occupied nation. After touring Germany, Hoover produced a number of reports critical of U.S. occupation policy. He stated in one report that "there is the illusion that the New Germany left after the annexations can be reduced to a 'pastoral state.' It cannot be done unless we exterminate or move 25,000,000 people out of it." On Hoover's initiative, a school meals program in the American and British occupation zones of Germany was begun on April 14, 1947; the program served 3,500,000 children.
In 1947, Truman appointed Hoover to a commission to reorganize the executive departments; the commission elected Hoover as chairman and became known as the Hoover Commission. The commission recommended changes designed to strengthen the president's ability to manage the federal government. Though Hoover had opposed Roosevelt's concentration of power in the 1930s, he believed that a stronger presidency was required with the advent of the Atomic Age. During the 1948 presidential election, Hoover supported Republican nominee Thomas Dewey's unsuccessful campaign against Truman, but he remained on good terms with Truman. Hoover favored the United Nations in principle, but he opposed granting membership to the Soviet Union and other Communist states. He considered the Soviet Union to be as morally repugnant as Nazi Germany and supported the efforts of Richard Nixon and others to expose Communists in the United States.
In 1949, New York Governor Thomas E. Dewey offered Hoover the Senate seat vacated by Robert F. Wagner. He declined.
Hoover backed conservative leader Robert A. Taft at the 1952 Republican National Convention, but the party's presidential nomination instead went to Dwight D. Eisenhower, who went on to win the 1952 election. Though Eisenhower appointed Hoover to another presidential commission, Hoover disliked Eisenhower, faulting the latter's failure to roll back the New Deal. Hoover's public work helped to rehabilitate his reputation, as did his use of self-deprecating humor; he occasionally remarked that "I am the only person of distinction who's ever had a depression named after him." In 1958, Congress passed the Former Presidents Act, offering a $25,000 yearly pension to each former president. Hoover took the pension even though he did not need the money, possibly to avoid embarrassing Truman, whose precarious financial status played a role in the law's enactment. In the early 1960s, President John F. Kennedy offered Hoover various positions; Hoover declined the offers but defended Kennedy after the Bay of Pigs invasion and was personally distraught by Kennedy's assassination in 1963.
Hoover wrote several books during his retirement, including "The Ordeal of Woodrow Wilson", in which he strongly defended Wilson's actions at the Paris Peace Conference. In 1944, he began working on "Freedom Betrayed", which he often referred to as his "magnum opus." In "Freedom Betrayed", Hoover strongly critiques Roosevelt's foreign policy, especially Roosevelt's decision to recognize the Soviet Union in order to provide aid to that country during World War II. The book was published in 2012 after being edited by historian George H. Nash.
Hoover faced three major illnesses during the last two years of his life, including an August 1962 operation in which a growth on his large intestine was removed. He died on October 20, 1964 in New York City following massive internal bleeding.
Though Hoover's last spoken words are unknown, his last known written words were a get well message to his friend Harry Truman, six days before his death, after he heard that Truman sustained injuries from slipping in a bathroom: "Bathtubs are a menace to ex-presidents for as you may recall a bathtub rose up and fractured my vertebrae when I was in Venezuela on your world famine mission in 1946. My warmest sympathy and best wishes for your recovery." Two months earlier, on August 10, Hoover reached the age of 90, only the second U.S. president (after John Adams) to do so. When asked how he felt on reaching the milestone, Hoover replied, "Too old." At the time of his death, Hoover had been out of office for over 31 years. This was the longest retirement in presidential history until Jimmy Carter broke that record in September 2012.
Hoover was honored with a state funeral in which he lay in state in the United States Capitol rotunda. Then, on October 25, he was buried in West Branch, Iowa, near his presidential library and birthplace on the grounds of the Herbert Hoover National Historic Site. Afterwards, Hoover's wife, Lou Henry, who had been buried in Palo Alto, California, following her death in 1944, was re-interred beside him.
Hoover was extremely unpopular when he left office after the 1932 election, and his historical reputation would not begin to recover until the 1970s. According to Professor David E. Hamilton, historians have credited Hoover for his genuine belief in voluntarism and cooperation, as well as the innovation of some of his programs. However, Hamilton also notes that Hoover was politically inept and failed to recognize the severity of the Great Depression. Nicholas Lemann writes that Hoover has been remembered "as the man who was too rigidly conservative to react adeptly to the Depression, as the hapless foil to the great Franklin Roosevelt, and as the politician who managed to turn a Republican country into a Democratic one." Polls of historians and political scientists have generally ranked Hoover in the bottom third of presidents. A 2018 poll of the American Political Science Association's Presidents and Executive Politics section ranked Hoover as the 36th best president. A 2017 C-Span poll of historians also ranked Hoover as the 36th best president.
Although Hoover is generally regarded as having had a failed presidency, he has also received praise for his actions as a humanitarian and public official. Biographer Glen Jeansonne writes that Hoover was "one of the most extraordinary Americans of modern times," adding that Hoover "led a life that was a prototypical Horatio Alger story, except that Horatio Alger stories stop at the pinnacle of success." Biographer Kenneth Whyte writes that, "the question of where Hoover belongs in the American political tradition remains a loaded one to this day. While he clearly played important roles in the development of both the progressive and conservative traditions, neither side will embrace him for fear of contamination with the other."
The Herbert Hoover Presidential Library and Museum is located in West Branch, Iowa next to the Herbert Hoover National Historic Site. The library is one of thirteen presidential libraries run by the National Archives and Records Administration. The Hoover–Minthorn House, where Hoover lived from 1885 to 1891, is located in Newberg, Oregon. His Rapidan fishing camp in Virginia, which he donated to the government in 1933, is now a National Historic Landmark within the Shenandoah National Park. The Lou Henry and Herbert Hoover House, built in 1919 in Stanford, California, is now the official residence of the president of Stanford University, and a National Historic Landmark. Also located at Stanford is the Hoover Institution, a think tank and research institution started by Hoover.
Hoover has been memorialized in the names of several things, including the Hoover Dam on the Colorado River and numerous elementary, middle, and high schools across the United States. Two minor planets, 932 Hooveria and 1363 Herberta, are named in his honor. The Polish capital of Warsaw has a square named after Hoover, and the historic townsite of Gwalia, Western Australia contains the Hoover House Bed and Breakfast, where Hoover resided while managing and visiting the mine during the first decade of the twentieth century. A medicine ball game known as Hooverball is named for Hoover; it was invented by White House physician Admiral Joel T. Boone to help Hoover keep fit while serving as president.
Hoover was inducted into the National Mining Hall of Fame in 1988 (inaugural class). His wife was inducted into the hall in 1990.
Hoover was inducted into the Australian Prospectors and Miners' Hall of Fame in the category Directors and Management.
Hildegard of Bingen
Hildegard of Bingen (1098 – 17 September 1179), also known as Saint Hildegard and the Sibyl of the Rhine, was a German Benedictine abbess, writer, composer, philosopher, Christian mystic, visionary, and polymath. She is one of the best-known composers of sacred monophony, as well as the most-recorded in modern history. She has been considered by many in Europe to be the founder of scientific natural history in Germany.
Hildegard's fellow nuns elected her as "magistra" in 1136; she founded the monasteries of Rupertsberg in 1150 and Eibingen in 1165. She wrote theological, botanical, and medicinal texts, as well as letters, poems, and liturgical songs for women's choirs to sing, while supervising miniature illuminations in the Rupertsberg manuscript of her first work, "Scivias". There are more surviving chants by Hildegard than by any other composer from the entire Middle Ages, and she is one of the few known composers to have written both the music and the words. One of her works, the "Ordo Virtutum", is an early example of liturgical drama and arguably the oldest surviving morality play. She is also noted for the invention of a constructed language known as "Lingua Ignota".
Although the history of her formal canonization is complicated, branches of the Roman Catholic Church have recognized her as a saint for centuries. On 10 May 2012, Pope Benedict XVI extended the liturgical cult of St. Hildegard to the entire Catholic Church in a process known as "equivalent canonization". On 7 October 2012, he named her a Doctor of the Church, in recognition of "her holiness of life and the originality of her teaching."
Hildegard was born around the year 1098, although the exact date is uncertain. Her parents were Mechtild of Merxheim-Nahet and Hildebert of Bermersheim, a family of the free lower nobility in the service of the Count Meginhard of Sponheim. Sickly from birth, Hildegard is traditionally considered their youngest and tenth child, although there are records of only seven older siblings. In her "Vita", Hildegard states that from a very young age she had experienced visions.
From early childhood, long before she undertook her public mission or even her monastic vows, Hildegard's spiritual awareness was grounded in what she called the "umbra viventis lucis", the reflection of the living Light. Her letter to Guibert of Gembloux, which she wrote at the age of seventy-seven, describes her experience of this light with admirable precision:
"From my early childhood, before my bones, nerves and veins were fully strengthened, I have always seen this vision in my soul, even to the present time when I am more than seventy years old. In this vision my soul, as God would have it, rises up high into the vault of heaven and into the changing sky and spreads itself out among different peoples, although they are far away from me in distant lands and places. And because I see them this way in my soul, I observe them in accord with the shifting of clouds and other created things. I do not hear them with my outward ears, nor do I perceive them by the thoughts of my own heart or by any combination of my five senses, but in my soul alone, while my outward eyes are open. So I have never fallen prey to ecstasy in the visions, but I see them wide awake, day and night. And I am constantly fettered by sickness, and often in the grip of pain so intense that it threatens to kill me, but God has sustained me until now. The light which I see thus is not spatial, but it is far, far brighter than a cloud which carries the sun. I can measure neither height, nor length, nor breadth in it; and I call it "the reflection of the living Light." And as the sun, the moon, and the stars appear in water, so writings, sermons, virtues, and certain human actions take form for me and gleam."
Perhaps because of Hildegard's visions, or as a method of political positioning (or both), Hildegard's parents offered her as an oblate to the Benedictine monastery at the Disibodenberg, which had been recently reformed in the Palatinate Forest. The date of Hildegard's enclosure at the monastery is the subject of debate. Her "Vita" says she was professed with an older woman, Jutta, the daughter of Count Stephan II of Sponheim, at the age of eight. However, Jutta's date of enclosure is known to have been in 1112, when Hildegard would have been fourteen. Their vows were received by Bishop Otto of Bamberg on All Saints' Day, 1112. Some scholars speculate that Hildegard was placed in the care of Jutta at the age of eight, and the two women were then enclosed together six years later.
In any case, Hildegard and Jutta were enclosed together at the Disibodenberg, and formed the core of a growing community of women attached to the male monastery. Jutta was also a visionary and thus attracted many followers who came to visit her at the cloister. Hildegard tells us that Jutta taught her to read and write, but that she was unlearned and therefore incapable of teaching Hildegard sound biblical interpretation. The written record of the "Life of Jutta" indicates that Hildegard probably assisted her in reciting the psalms, working in the garden and other handiwork, and tending to the sick. This might have been a time when Hildegard learned how to play the ten-stringed psaltery. Volmar, a frequent visitor, may have taught Hildegard simple psalm notation. Her study of music in these years may have been the beginning of the compositions she would later create.
Upon Jutta's death in 1136, Hildegard was unanimously elected as "magistra" of the community by her fellow nuns. Abbot Kuno of Disibodenberg asked Hildegard to be Prioress, which would be under his authority. Hildegard, however, wanted more independence for herself and her nuns, and asked Abbot Kuno to allow them to move to Rupertsberg. This was to be a move towards poverty, from a stone complex that was well established to a temporary dwelling place. When the abbot declined Hildegard's proposition, Hildegard went over his head and received the approval of Archbishop Henry I of Mainz. Abbot Kuno did not relent until Hildegard was stricken by an illness that kept her paralyzed and unable to move from her bed, an event that she attributed to God's unhappiness at her not following his orders to move her nuns to a new location in Rupertsberg. It was only when the Abbot himself could not move Hildegard that he decided to grant the nuns their own monastery. Hildegard and about twenty nuns thus moved to the St. Rupertsberg monastery in 1150, where Volmar served as provost, as well as Hildegard's confessor and scribe. In 1165 Hildegard founded a second monastery for her nuns at Eibingen.
Before Hildegard's death, a problem arose with the clergy of Mainz. A man buried in Rupertsberg had died after excommunication from the Church. Therefore, the clergy wanted to remove his body from the sacred ground. Hildegard did not accept this idea, replying that it was a sin and that the man had been reconciled to the church at the time of his death.
Hildegard said that she first saw "The Shade of the Living Light" at the age of three, and by the age of five she began to understand that she was experiencing visions. She used the term 'visio' (the Latin for "vision") to describe this feature of her experience and recognized that it was a gift that she could not explain to others. Hildegard explained that she saw all things in the light of God through the five senses: sight, hearing, taste, smell, and touch. Hildegard was hesitant to share her visions, confiding only to Jutta, who in turn told Volmar, Hildegard's tutor and, later, secretary. Throughout her life, she continued to have many visions, and in 1141, at the age of 42, Hildegard received a vision she believed to be an instruction from God, to "write down that which you see and hear." Still hesitant to record her visions, Hildegard became physically ill. The illustrations in the book of "Scivias" record visions that Hildegard experienced, which caused her great suffering and tribulation. In her first theological text, "Scivias" ("Know the Ways"), Hildegard describes her struggle within:
But I, though I saw and heard these things, refused to write for a long time through doubt and bad opinion and the diversity of human words, not with stubbornness but in the exercise of humility, until, laid low by the scourge of God, I fell upon a bed of sickness; then, compelled at last by many illnesses, and by the witness of a certain noble maiden of good conduct [the nun Richardis von Stade] and of that man whom I had secretly sought and found, as mentioned above, I set my hand to the writing. While I was doing it, I sensed, as I mentioned before, the deep profundity of scriptural exposition; and, raising myself from illness by the strength I received, I brought this work to a close – though just barely – in ten years. (...) And I spoke and wrote these things not by the invention of my heart or that of any other person, but as by the secret mysteries of God I heard and received them in the heavenly places. And again I heard a voice from Heaven saying to me, 'Cry out, therefore, and write thus!'
Pope Eugenius heard about Hildegard's writings at the synod in Trier, held between November 1147 and February 1148. From this she received papal approval to document her visions as revelations from the Holy Spirit, which lent her instant credibility.
On 17 September 1179, when Hildegard died, her sisters claimed they saw two streams of light appear in the skies and cross over the room where she was dying.
Hildegard's hagiography, "Vita Sanctae Hildegardis", was compiled by the monk Theoderic of Echternach after Hildegard's death. He included the hagiographical work "Libellus" or "Little Book" begun by Godfrey of Disibodenberg. Godfrey had died before he was able to complete his work. Guibert of Gembloux was invited to finish the work; however, he had to return to his monastery with the project unfinished. Theoderic utilized sources Guibert had left behind to complete the "Vita".
Hildegard's works include three great volumes of visionary theology; a variety of musical compositions for use in liturgy, as well as the musical morality play "Ordo Virtutum"; one of the largest bodies of letters (nearly 400) to survive from the Middle Ages, addressed to correspondents ranging from popes to emperors to abbots and abbesses, and including records of many of the sermons she preached in the 1160s and 1170s; two volumes of material on natural medicine and cures; an invented language called the "Lingua ignota" ("unknown language"); and various minor works, including a gospel commentary and two works of hagiography.
Several manuscripts of her works were produced during her lifetime, including the illustrated Rupertsberg manuscript of her first major work, "Scivias" (lost since 1945); the Dendermonde Codex, which contains one version of her musical works; and the Ghent manuscript, which was the first fair-copy made for editing of her final theological work, the "Liber Divinorum Operum". At the end of her life, and probably under her initial guidance, all of her works were edited and gathered into the single Riesenkodex manuscript.
Hildegard's most significant works were her three volumes of visionary theology: "Scivias" ("Know the Ways", composed 1142–1151), "Liber Vitae Meritorum" ("Book of Life's Merits" or "Book of the Rewards of Life", composed 1158–1163); and "Liber Divinorum Operum" ("Book of Divine Works", also known as "De operatione Dei", "On God's Activity", composed 1163/4–1172 or 1174). In these volumes, the last of which was completed when she was well into her seventies, Hildegard first describes each vision, whose details are often strange and enigmatic, and then interprets their theological contents in the words of the "voice of the Living Light."
With permission from Abbot Kuno of Disibodenberg, she began journaling visions she had (which is the basis for "Scivias"). "Scivias" is a contraction of "Sci vias Domini" ("Know the Ways of the Lord"), and it was Hildegard's first major visionary work, and one of the biggest milestones in her life. Perceiving a divine command to "write down what you see and hear," Hildegard began to record and interpret her visionary experiences. In total, 26 visionary experiences were captured in this compilation.
"Scivias" is structured into three parts of unequal length. The first part (six visions) chronicles the order of God's creation: the Creation and Fall of Adam and Eve, the structure of the universe (famously described as the shape of an "egg"), the relationship between body and soul, God's relationship to his people through the Synagogue, and the choirs of angels. The second part (seven visions) describes the order of redemption: the coming of Christ the Redeemer, the Trinity, the Church as the Bride of Christ and the Mother of the Faithful in baptism and confirmation, the orders of the Church, Christ's sacrifice on the Cross and the Eucharist, and the fight against the devil. Finally, the third part (thirteen visions) recapitulates the history of salvation told in the first two parts, symbolized as a building adorned with various allegorical figures and virtues. It concludes with the Symphony of Heaven, an early version of Hildegard's musical compositions.
In early 1148, a commission was sent by the Pope to Disibodenberg to find out more about Hildegard and her writings. The commission found that the visions were authentic and returned to the Pope, with a portion of the "Scivias". Portions of the uncompleted work were read aloud to Pope Eugenius III at the Synod of Trier in 1148, after which he sent Hildegard a letter with his blessing. This blessing was later construed as papal approval for all of Hildegard's wide-ranging theological activities. Towards the end of her life, Hildegard commissioned a richly decorated manuscript of "Scivias" (the Rupertsberg Codex); although the original has been lost since its evacuation to Dresden for safekeeping in 1945, its images are preserved in a hand-painted facsimile from the 1920s.
In her second volume of visionary theology, composed between 1158 and 1163, after she had moved her community of nuns into independence at the Rupertsberg in Bingen, Hildegard tackled the moral life in the form of dramatic confrontations between the virtues and the vices. She had already explored this area in her musical morality play, "Ordo Virtutum", and the "Book of the Rewards of Life" takes up that play's characteristic themes. Each vice, although ultimately depicted as ugly and grotesque, nevertheless offers alluring, seductive speeches that attempt to entice the unwary soul into their clutches. Standing in our defence, however, are the sober voices of the Virtues, powerfully confronting every vicious deception.
Amongst the work's innovations is one of the earliest descriptions of purgatory as the place where each soul would have to work off its debts after death before entering heaven. Hildegard's descriptions of the possible punishments there are often gruesome and grotesque, which emphasize the work's moral and pastoral purpose as a practical guide to the life of true penance and proper virtue.
Hildegard's last and grandest visionary work had its genesis in one of the few times she experienced something like an ecstatic loss of consciousness. As she described it in an autobiographical passage included in her Vita, sometime in about 1163, she received "an extraordinary mystical vision" in which was revealed the "sprinkling drops of sweet rain" that John the Evangelist experienced when he wrote, "In the beginning was the Word..." (John 1:1). Hildegard perceived that this Word was the key to the "Work of God", of which humankind is the pinnacle. The "Book of Divine Works", therefore, became in many ways an extended explication of the Prologue to John's Gospel.
The ten visions of this work's three parts are cosmic in scale, to illustrate various ways of understanding the relationship between God and his creation. Often, that relationship is established by grand allegorical female figures representing Divine Love ("Caritas") or Wisdom ("Sapientia"). The first vision opens the work with a salvo of poetic and visionary images, swirling about to characterize God's dynamic activity within the scope of his work within the history of salvation. The remaining three visions of the first part introduce the famous image of a human being standing astride the spheres that make up the universe and detail the intricate relationships between the human as microcosm and the universe as macrocosm. This culminates in the final chapter of Part One, Vision Four with Hildegard's commentary on the Prologue to John's Gospel (John 1:1–14), a direct rumination on the meaning of "In the beginning was the Word..." The single vision that constitutes the whole of Part Two stretches that rumination back to the opening of Genesis, and forms an extended commentary on the seven days of the creation of the world told in Genesis 1–2:3. This commentary interprets each day of creation in three ways: literal or cosmological; allegorical or ecclesiological (i.e. related to the Church's history); and moral or tropological (i.e. related to the soul's growth in virtue). Finally, the five visions of the third part take up again the building imagery of "Scivias" to describe the course of salvation history. The final vision (3.5) contains Hildegard's longest and most detailed prophetic program of the life of the Church from her own days of "womanish weakness" through to the coming and ultimate downfall of the Antichrist.
Attention in recent decades to women of the medieval Church has led to a great deal of popular interest in Hildegard's music. In addition to the "Ordo Virtutum," sixty-nine musical compositions, each with its own original poetic text, survive, and at least four other texts are known, though their musical notation has been lost. This is one of the largest repertoires among medieval composers.
One of her better-known works, "Ordo Virtutum" ("Play of the Virtues"), is a morality play. It is uncertain when some of Hildegard's compositions were composed, though the "Ordo Virtutum" is thought to have been composed as early as 1151. It is an independent Latin morality play with music (82 songs); it does not supplement or pay homage to the Mass or the Office of a certain feast. Most significantly, the "Ordo Virtutum" is the earliest known surviving musical drama that is not attached to a liturgy.
The play was performed by, and for, a select community of noblewomen and nuns. Strikingly, the devil has no music whatsoever in the play: he instead shouts and bellows all his lines, while every other character sings in monophonic plainchant. These characters include Patriarchs, Prophets, a Happy Soul, an Unhappy Soul, and a Penitent Soul, along with sixteen female Virtues (including Mercy, Innocence, Chastity, Obedience, Hope, and Faith).
The "Ordo Virtutum" was probably performed as a manifestation of the theology Hildegard delineated in the "Scivias". The play serves as an allegory of the Christian story of sin, confession, repentance, and forgiveness. Notably, it is the female Virtues who restore the fallen to the community of the faithful, not the male Patriarchs or Prophets. This would have been a significant message to the nuns in Hildegard's convent. Scholars assert that the role of the Devil would have been played by Volmar, while Hildegard's nuns would have played the parts of Anima (the human souls) and the Virtues.
In addition to the "Ordo Virtutum", Hildegard composed many liturgical songs that were collected into a cycle called the "Symphonia armoniae celestium revelationum". The songs from the Symphonia are set to Hildegard's own text and range from antiphons, hymns, and sequences, to responsories. Her music is described as monophonic, that is, consisting of exactly one melodic line. Its style is characterized by soaring melodies that can push the boundaries of the more staid ranges of traditional Gregorian chant. Though Hildegard's music is often thought to stand outside the normal practices of monophonic monastic chant, current researchers are also exploring ways in which it may be viewed in comparison with her contemporaries, such as Hermannus Contractus. Another feature of Hildegard's music that both reflects twelfth-century evolutions of chant and pushes those evolutions further is that it is highly melismatic, often with recurrent melodic units. Scholars such as Margot Fassler, Marianne Richert Pfau, and Beverly Lomer also note the intimate relationship between music and text in Hildegard's compositions, whose rhetorical features are often more distinct than is common in twelfth-century chant. As with all medieval chant notation, Hildegard's music lacks any indication of tempo or rhythm; the surviving manuscripts employ late German style notation, which uses very ornamental neumes. The reverence for the Virgin Mary reflected in music shows how deeply influenced and inspired Hildegard of Bingen and her community were by the Virgin Mary and the saints.
"Viriditas", or "greenness", is an earthly expression of the heavenly in an integrity that overcomes dualisms. This greenness, or power of life, appears frequently in Hildegard's works.
Despite Hildegard's self-professed view that her compositions have as their object the praise of God, one scholar has asserted that Hildegard made a close association between music and the female body in her musical compositions. According to him, the poetry and music of Hildegard's Symphonia are concerned with the anatomy of female desire, and can thus be described as Sapphonic, or pertaining to Sappho, connecting her to a history of female rhetoricians.
Hildegard's medicinal and scientific writings, though thematically complementary to her ideas about nature expressed in her visionary works, are different in focus and scope. Neither claims to be rooted in her visionary experience and its divine authority. Rather, they spring from her experience helping in and then leading the monastery's herbal garden and infirmary, as well as the theoretical information she likely gained through her wide-ranging reading in the monastery's library. As she gained practical skills in diagnosis, prognosis, and treatment, she combined physical treatment of physical diseases with holistic methods centered on "spiritual healing." She became well known for her healing powers involving the practical application of tinctures, herbs, and precious stones. She combined these elements with a theological notion ultimately derived from Genesis: all things put on earth are for the use of humans. In addition to her hands-on experience, she also gained medical knowledge, including elements of her humoral theory, from traditional Latin texts.
Hildegard catalogued both her theory and practice in two works. The first, "Physica," contains nine books that describe the scientific and medicinal properties of various plants, stones, fish, reptiles, and animals. This document is also thought to contain the first recorded reference to the use of hops in beer as a preservative. The second, "Causae et Curae", is an exploration of the human body, its connections to the rest of the natural world, and the causes and cures of various diseases. Hildegard documented various medical practices in these books, including the use of bleeding and home remedies for many common ailments. She also explains remedies for common agricultural injuries such as burns, fractures, dislocations, and cuts. Hildegard may have used the books to teach assistants at the monastery. These books are historically significant because they show areas of medieval medicine that were not well documented because their practitioners (mainly women) rarely wrote in Latin. Her writings were commented on by Mélanie Lipinska, a Polish scientist.
In addition to its wealth of practical evidence, "Causae et Curae" is also noteworthy for its organizational scheme. Its first part sets the work within the context of the creation of the cosmos and then humanity as its summit, and the constant interplay of the human person as microcosm both physically and spiritually with the macrocosm of the universe informs all of Hildegard's approach. Her hallmark is to emphasize the vital connection between the "green" health of the natural world and the holistic health of the human person. "Viriditas", or greening power, was thought to sustain human beings and could be manipulated by adjusting the balance of elements within a person. Thus, when she approached medicine as a type of gardening, it was not just as an analogy. Rather, Hildegard understood the plants and elements of the garden as direct counterparts to the humors and elements within the human body, whose imbalance led to illness and disease.
Thus, the nearly three hundred chapters of the second book of "Causae et Curae" "explore the etiology, or causes, of disease as well as human sexuality, psychology, and physiology." In this section, she gives specific instructions for bleeding based on various factors, including gender, the phase of the moon (bleeding is best done when the moon is waning), the place of disease (use veins near diseased organ or body part) or prevention (big veins in arms), and how much blood to take (described in imprecise measurements, like "the amount that a thirsty person can swallow in one gulp"). She even includes bleeding instructions for animals to keep them healthy. In the third and fourth sections, Hildegard describes treatments for malignant and minor problems and diseases according to the humoral theory, again including information on animal health. The fifth section is about diagnosis and prognosis, which includes instructions to check the patient's blood, pulse, urine and stool. Finally, the sixth section documents a lunar horoscope to provide an additional means of prognosis for both disease and other medical conditions, such as conception and the outcome of pregnancy. For example, she indicates that a waxing moon is good for human conception and is also good for sowing seeds for plants (sowing seeds is the plant equivalent of conception). Elsewhere, Hildegard is even said to have stressed the value of boiling drinking water in an attempt to prevent infection.
As Hildegard elaborates the medical and scientific relationship between the human microcosm and the macrocosm of the universe, she often focuses on interrelated patterns of four: "the four elements (fire, air, water, and earth), the four seasons, the four humors, the four zones of the earth, and the four major winds." Although she inherited the basic framework of humoral theory from ancient medicine, Hildegard's conception of the hierarchical inter-balance of the four humors (blood, phlegm, black bile, and yellow bile) was unique, based on their correspondence to "superior" and "inferior" elements—blood and phlegm corresponding to the "celestial" elements of fire and air, and the two biles corresponding to the "terrestrial" elements of water and earth. Hildegard understood the disease-causing imbalance of these humors to result from the improper dominance of the subordinate humors. This disharmony reflects that introduced by Adam and Eve in the Fall, which for Hildegard marked the indelible entrance of disease and humoral imbalance into humankind. As she writes in "Causae et Curae" c. 42:
It happens that certain men suffer diverse illnesses. This comes from the phlegm which is superabundant within them. For if man had remained in paradise, he would not have had the "flegmata" within his body, from which many evils proceed, but his flesh would have been whole and without dark humor ["livor"]. However, because he consented to evil and relinquished good, he was made into a likeness of the earth, which produces good and useful herbs, as well as bad and useless ones, and which has in itself both good and evil moistures. From tasting evil, the blood of the sons of Adam was turned into the poison of semen, out of which the sons of man are begotten. And therefore their flesh is ulcerated and permeable [to disease]. These sores and openings create a certain storm and smoky moisture in men, from which the "flegmata" arise and coagulate, which then introduce diverse infirmities to the human body. All this arose from the first evil, which man began at the start, because if Adam had remained in paradise, he would have had the sweetest health, and the best dwelling-place, just as the strongest balsam emits the best odor; but on the contrary, man now has within himself poison and phlegm and diverse illnesses.
Hildegard also invented an alternative alphabet. Her "Litterae ignotae" ("unknown letters") were more or less a secret code, or even an intellectual code – much like a modern crossword puzzle.
The text of her writings and compositions reveals Hildegard's use of a form of modified medieval Latin, encompassing many invented, conflated, and abridged words. Because of her invention of words for her lyrics and her use of a constructed script, many conlangers look upon her as a medieval precursor of constructed languages.
Hildegard's "Lingua ignota" ("Unknown Language") was a composition that comprised a series of invented words that corresponded to an eclectic list of nouns. Scholars believe that Hildegard used her "Lingua Ignota" to increase solidarity among her nuns.
Maddocks claims that it is likely Hildegard learned simple Latin and the tenets of the Christian faith but was not instructed in the Seven Liberal Arts, which formed the basis of all education for the learned classes in the Middle Ages: the "Trivium" of grammar, dialectic, and rhetoric plus the "Quadrivium" of arithmetic, geometry, astronomy, and music. The correspondence she kept with the outside world, both spiritual and social, transcended the cloister as a space of spiritual confinement and served to document Hildegard's grand style and strict formatting of medieval letter writing.
Contributing to Christian European rhetorical traditions, Hildegard "authorized herself as a theologian" through alternative rhetorical arts. Hildegard was creative in her interpretation of theology. She believed that her monastery should exclude novices who were not from the nobility because she did not want her community to be divided on the basis of social status. She also stated that "woman may be made from man, but no man can be made without a woman."
Because of church limitations on public, discursive rhetoric, the medieval rhetorical arts included preaching, letter writing, poetry, and the encyclopedic tradition. Hildegard's participation in these arts speaks to her significance as a female rhetorician, transcending bans on women's social participation and interpretation of scripture. The acceptance of public preaching by a woman, even a well-connected abbess and acknowledged prophet, does not fit the stereotype of this time. Her preaching was not limited to the monasteries; she preached publicly in 1160 in Germany. She conducted four preaching tours throughout Germany, speaking to both clergy and laity in chapter houses and in public, mainly denouncing clerical corruption and calling for reform.
Many abbots and abbesses asked her for prayers and opinions on various matters. She traveled widely during her four preaching tours. She had several devoted followers, including Guibert of Gembloux, who wrote to her frequently and became her secretary after Volmar's death in 1173. Hildegard also influenced several monastic women, exchanging letters with Elisabeth of Schönau, a nearby visionary.
Hildegard corresponded with popes such as Eugene III and Anastasius IV, statesmen such as Abbot Suger, German emperors such as Frederick I Barbarossa, and other notable figures such as Saint Bernard of Clairvaux, who advanced her work, at the behest of her abbot, Kuno, at the Synod of Trier in 1147 and 1148. Hildegard of Bingen's correspondence is an important component of her literary output.
Hildegard was one of the first persons for whom the Roman canonization process was officially applied, but the process took so long that four attempts at canonization were not completed and she remained at the level of her beatification. Her name was nonetheless taken up in the Roman Martyrology at the end of the 16th century. Her feast day is 17 September. Numerous popes have referred to Hildegard as a saint, including Pope John Paul II and Pope Benedict XVI.
On 10 May 2012, Pope Benedict XVI extended the liturgical cult of St. Hildegard to the entire Catholic Church in a process known as "equivalent canonization," thus laying the groundwork for naming her a Doctor of the Church. On 7 October 2012, the feast of the Holy Rosary, the pope named her a Doctor of the Church, the fourth woman among 36 saints given that title by the Roman Catholic Church. He called her "perennially relevant" and "an authentic teacher of theology and a profound scholar of natural science and music."
Hildegard of Bingen also appears in the calendar of saints of various Anglican churches, such as that of the Church of England, in which she is commemorated on 17 September.
Hildegard's parish and pilgrimage church in Eibingen near Rüdesheim houses her relics.
In recent years, Hildegard has become of particular interest to feminist scholars. They note her reference to herself as a member of the weaker sex and her rather constant belittling of women. Hildegard frequently referred to herself as an unlearned woman, completely incapable of Biblical exegesis. Such a statement on her part, however, worked to her advantage because it made her statements that all of her writings and music came from visions of the Divine more believable, therefore giving Hildegard the authority to speak in a time and place where few women were permitted a voice. Hildegard used her voice to amplify the Church's condemnation of institutional corruption, in particular simony.
Hildegard has also become a figure of reverence within the contemporary New Age movement, mostly because of her holistic and natural view of healing, as well as her status as a mystic. Though her medical writings were long neglected, and then studied without reference to their context, she was the inspiration for Dr. Gottfried Hertzka's "Hildegard-Medicine", and is the namesake for June Boyce-Tillman's Hildegard Network, a healing center that focuses on a holistic approach to wellness and brings together people interested in exploring the links between spirituality, the arts, and healing. Her reputation as a medicinal writer and healer was also used by early feminists to argue for women's rights to attend medical schools. Hildegard's reincarnation has been debated since 1924, when the Austrian mystic Rudolf Steiner lectured that a nun of her description was a past life of the Russian poet-philosopher Vladimir Soloviev, whose Sophianic visions are often compared to Hildegard's. The sophiologist Robert Powell writes that hermetic astrology proves the match, while mystical communities in Hildegard's lineage include that of the artist Carl Schroeder, as studied by the Columbia sociologist Courtney Bender and supported by the reincarnation researchers Walter Semkiw and Kevin Ryerson.
Recordings and performances of Hildegard's music have gained critical praise and popularity since 1979.
The artwork "The Dinner Party" features a place setting for Hildegard.
In space, the minor planet 898 Hildegard is named for her.
In film, Hildegard has been portrayed by Patricia Routledge in a BBC documentary called "Hildegard of Bingen" (1994), by Ángela Molina in "Barbarossa" (2009) and by Barbara Sukowa in the film "Vision", directed by Margarethe von Trotta.
Hildegard was the subject of "Illuminations", a 2012 fictionalized biographical novel by Mary Sharratt.
The plant genus "Hildegardia" is named after her because of her contributions to herbal medicine.
Hildegard makes an appearance in "The Baby-Sitters Club #101: Claudia Kishi, Middle School Drop-Out" by Ann M. Martin, when Anna Stevenson dresses as Hildegard for Halloween.
A feature documentary film, "," was released by American director Michael M. Conti in 2014.
Hilversum
Hilversum is a city and municipality in the province of North Holland, Netherlands. Located in the heart of the Gooi, it is the largest urban centre in that area. It is surrounded by heathland, woods, meadows, lakes, and smaller towns. Hilversum is part of the Randstad, one of the largest conurbations in Europe.
Hilversum lies south-east of Amsterdam and north of Utrecht. The town is known for its architecturally important Town Hall (Raadhuis Hilversum), designed by Willem Marinus Dudok and built in 1931.
Hilversum has one public library, two swimming pools (Van Hellemond Sport and De Lieberg), a number of sporting halls and several shopping centres (such as Hilvertshof, Winkelcentrum Kerkelanden, De Riebeeckgalerij and Winkelcentrum Seinhorst). Locally, the town centre is known as "het dorp", which means "the village".
Hilversum is often called "media city", since it is the principal centre for radio and television broadcasting in the Netherlands, and is home to an extensive complex of radio and television studios and to the administrative headquarters of the multiple broadcasting organizations which make up the Netherlands Public Broadcasting system. Hilversum is also home to many newer commercial TV production companies. Radio Netherlands, which has been broadcasting worldwide via shortwave radio since the 1920s, is also based here.
Numerous broadcasting organizations have broadcast, or continue to broadcast, from studios in Hilversum.
One result of the town's history as an important radio transmission centre is that many older radio sets throughout Europe featured "Hilversum" as a pre-marked dial position on their tuning scales.
Dutch national voting in the Eurovision Song Contest is normally co-ordinated from Hilversum.
Hilversum has a variety of international schools, such as the "Violenschool" and the "International School Hilversum 'Alberdingk Thijm'". The European headquarters of Nike, Hunkemöller, and Converse are also located in Hilversum.
Earthenware found in Hilversum gives its name to the Hilversum culture, an early- to mid-Bronze Age material culture (800–1200 BC). Artifacts from this prehistoric civilization bear similarities to those of the Wessex culture of southern Britain and may indicate that the first Hilversum residents migrated from that area. The first brick settlements formed around 900, but it was not until 1305 that the first official mention of Hilversum ("Hilfersheem", from "Hilvertshem", meaning "houses between the hills") is found. At that point it was a part of Naarden, the oldest town in the Gooi area.
Farming, raising sheep and some wool manufacturing were the means of life for the Gooi in the Middle Ages. In 1424 Hilversum received its first official independent status. This made possible further growth in the village because permission from Naarden was no longer needed for new industrial development.
The town grew further in the 17th century when the Dutch economy as a whole entered its age of prosperity, and several canals were built connecting it indirectly to Amsterdam.
In 1725 and 1766 large fires destroyed most of the town, leveling parts of the old town hall and the church next to it. The town overcame these setbacks, and the textile industry continued to develop, in part through a newly devised method of weaving cows' hair.
In the 19th century a substantial textile and tapestry industry emerged, aided by a railway link to Amsterdam in 1874. From that time the town grew quickly with rich commuters from Amsterdam moving in, building themselves large villas in the wooded surroundings, and gradually starting to live in Hilversum permanently. Despite this growth, Hilversum was never granted city rights so it is still referred to by many locals as "het dorp," or "the village."
For the 1928 Summer Olympics in neighboring Amsterdam, it hosted all of the non-jumping equestrian and the running part of the modern pentathlon event.
The "Nederlandse Seintoestellen Fabriek" (NSF) company established a professional transmitter and radio factory in Hilversum in the early 1920s, growing into the largest of its kind in the Netherlands.
Following the defeat of Allied forces in the Netherlands in 1940 and the country's occupation by Nazi Germany, Hilversum became the headquarters of the German Army ("Heer") in the Netherlands.
In 1948, NSF was taken over by Philips. However, Dutch radio broadcasting organizations (followed by television broadcasters during the 1950s) centralised their operations in Hilversum, providing a source of continuing economic growth. The concentration of broadcasters in Hilversum has given it its enduring status as the media city for the Netherlands.
In 1964, the population reached a record high – over 103,000 people called Hilversum home. However, the textile industry had started its decline; only one factory, Veneta, managed to continue into the 1960s, when it also had to close its doors. Another major industry, the chemical factory IFF, also closed by the end of the 1960s.
After the 1960s, the population gradually declined, stabilising at around 85,000. Several factors besides the slump in manufacturing contributed to this decline: first, the average family now consists of fewer people, so fewer people live in each house; second, the town is virtually unable to expand because the surrounding lands were sold by city architect W.M. Dudok to the Goois Natuurreservaat; third, property values were rising rapidly at the time, forcing many people to move to less expensive areas of the Netherlands.
Some sources blame connections to the television world for attracting crime to Hilversum; the town has had to cope with mounting drug-related issues in a community with higher-than-average unemployment and an ongoing housing shortage.
Hilversum was one of the first towns with a local party of the populist "Leefbaar" ("liveable") movement. Founded by former social-democrat party strongman Jan Nagel, it was initially held at bay from alderman positions. In 2001, Nagel of Leefbaar Hilversum teamed up with Leefbaar Utrecht leaders to found the national party Leefbaar Nederland. In 2002, Pim Fortuyn, the most vocal Leefbaar Rotterdam politician, was shot and killed by an animal rights activist at Hilversum's Media Park just after finishing a radio interview. This happened, however, after Fortuyn had broken with Nagel over his anti-Islamic viewpoints at a Leefbaar Nederland board meeting in Hilversum.
The town of Hilversum has put a great deal of effort into improvements, including a recent renovation of its central train station, thorough renovation of the main shopping centre (Hilvertshof), and development of new dining and retail districts downtown, including the "vintage" district in the Leeuwenstraat. Notable architectural accomplishments include the Institute for Sound and Vision and Zanderij Crailoo, the largest man-made wildlife crossing in the world.
The nearby Media Park was the scene of the 2002 assassination of politician Pim Fortuyn; in 2015, a gunman carrying a false pistol stormed into Nederlandse Omroep Stichting's headquarters, demanding airtime on the evening news.
The population declined from 103,000 in 1964 to 84,000 in 2006, but rose again to 90,000 in 2018. The decline was mostly due to the smaller size of families these days.
The large Catholic neo-gothic St. Vitus church (P.J.H. Cuypers, 1892, bell tower 96 metres).
The city played host to many landscape artists during the 19th century, including Barend Cornelis Koekkoek.
In the 1950s and 1960s the city played host to a major European Tennis tournament.
The 1958 Eurovision Song Contest took place in Hilversum.
In 2020, the international television event "Eurovision: Europe Shine a Light" was broadcast from Studio 21 in Media Park (Hilversum). This event was held in place of the 2020 Eurovision Song Contest, which was cancelled due to the COVID-19 pandemic.
Hilversum is well connected to the Dutch railway network, and has three stations.
Most local and regional buses are operated by Connexxion, but two of the bus routes are operated by Syntus Utrecht and two others by U-OV and Pouw Vervoer. Regional bus route 320 is operated by both Connexxion and Pouw Vervoer.
In 2018, major road works started to make room for a new BRT bus lane from Hilversum to Huizen, set to open in early 2021.
The municipal council of Hilversum consists of 37 seats, which have been divided as follows since the last local election of 2018:
Government
After the 2018 elections, the municipal government was made up of aldermen from the political parties Hart voor Hilversum, D66 and VVD.
The mayor of Hilversum is Pieter Broertjes, former editor-in-chief of de Volkskrant, a nationally distributed newspaper.
It was the first city with a "Leefbaar" party (which was intended as just a local party). Today, Leefbaar Hilversum has been reduced to a single seat, but several other parties have their origins in it:
Notable people born in Hilversum:
History of the Internet
The history of the Internet has its origin in the efforts to build and interconnect computer networks that arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. Independently, Paul Baran proposed a distributed network based on data in message blocks in the early 1960s and Donald Davies conceived of packet switching in 1965 at the National Physical Laboratory (NPL) in the UK, which became a testbed for research for two decades. The U.S. Department of Defense awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran, underpinned by mathematical work in the early 1970s by Leonard Kleinrock. The network was built by Bolt, Beranek, and Newman.
Early packet switching networks such as the NPL network, ARPANET, Merit Network, and CYCLADES in the early 1970s researched and provided data networking. The ARPANET project and international working groups led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks, which produced various standards. Vint Cerf, at Stanford University, and Bob Kahn, at ARPA, published research in 1973 that evolved into the Transmission Control Protocol (TCP) and Internet Protocol (IP), the two protocols of the Internet protocol suite. The design included concepts from the French CYCLADES project directed by Louis Pouzin.
In the early 1980s the National Science Foundation (NSF) funded national supercomputing centers at several universities in the United States and provided interconnectivity in 1986 with the NSFNET project, which created network access to these supercomputer sites for research and academic organizations in the United States. International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP internationally marked the beginnings of the Internet. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s. The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. The NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.
Research at CERN in Switzerland by British computer scientist Tim Berners-Lee in 1989-90 resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet's takeover of the global communication landscape was rapid in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007. Today, the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking. However, the future of the global network may be shaped by regional differences.
The concept of data communication – transmitting data between two different places through an electromagnetic medium such as radio or an electric wire – pre-dates the introduction of the first computers. Such communication systems were typically limited to point to point communication between two end devices. Semaphore lines, telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph in the late 19th century was the first fully digital communication system.
Early computers had a central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered unsafe for strategic and military use because there were no alternative paths for the communication in case of an enemy attack.
Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley in the early 20th century. Information theory, as enunciated by Shannon in 1948, provided a firm theoretical underpinning to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise, in telecommunications technology.
The development of transistor technology was fundamental to a new generation of electronic devices that later affected almost every aspect of the human experience. The long-sought realization of the field-effect transistor, in the form of the MOS transistor (MOSFET), by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, brought new opportunities for miniaturization and mass production for a wide range of uses. It became the basic building block of the information revolution and the information age, and laid the foundation for the power electronic technology that later enabled the development of wireless Internet technology. Network bandwidth has been doubling every 18 months since the 1970s, a trend expressed in Edholm's law, similar to the scaling expressed by Moore's law for semiconductors.
With limited exceptions, the earliest computers were connected directly to terminals used by individual users, typically in the same building or site.
Wide area networks (WANs) emerged during the 1950s and became established during the 1960s.
Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application for time-sharing in February 1959. In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider. Licklider, Vice President at Bolt Beranek and Newman, Inc., discussed a computer network in his January 1960 paper "Man-Computer Symbiosis":
In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication" which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network".
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.
The issue of connecting separate physical networks to form one logical network was the first of many problems. Early networks used message-switched systems that required rigid routing structures prone to single points of failure. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information transmitted across Baran's network would be divided into what he called "message blocks". Independently, Donald Davies (National Physical Laboratory, UK) proposed and was the first to put into practice a local area network based on what he called packet switching, the term that would ultimately be adopted. Larry Roberts applied Davies' concepts of packet switching for the ARPANET wide area network, and sought input from Paul Baran. Leonard Kleinrock subsequently developed the mathematical theory behind the performance of this technology, building on his earlier work on queueing theory.
Packet switching is a rapid store and forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. It provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.
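As a purely illustrative sketch (not any historical implementation), the division into packets described above can be shown in a few lines of Python: a message is split into sequence-numbered packets that could travel independently and be reassembled at the destination even if they arrive out of order. The function names and the 8-byte packet size are arbitrary choices for the example.

```python
def packetize(message: bytes, size: int = 8):
    """Divide a message into (sequence_number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore the original message; packets may arrive in any order."""
    return b"".join(chunk for _, chunk in sorted(packets))

pkts = packetize(b"store-and-forward packet switching")
# Simulate out-of-order arrival, then reassemble by sequence number.
assert reassemble(reversed(pkts)) == b"store-and-forward packet switching"
```

Real packet switches also make an independent routing decision for each packet, which is what yields the bandwidth and resilience advantages over circuit switching.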
The software for establishing links between network sites in the ARPANET was the Network Control Program (NCP), completed in c. 1970. Further development in the early 1970s by Robert E. Kahn and Vint Cerf led to the formulation of the "Transmission Control Program", and its specification in December 1974 in RFC 675. This work also coined the terms "catenet" (concatenated network) and "internet", a contraction of "internetworking", to describe the interconnection of multiple networks. This software was monolithic in design, using two simplex communication channels for each user session. It was later redesigned as a modular protocol stack using full-duplex channels. Originally named IP/TCP, it was installed on the ARPANET for production use in January 1983.
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks. Later that year, at the National Physical Laboratory (United Kingdom), Davies designed and proposed a national data network based on packet switching. The following year, he described the use of an "Interface computer" to act as a router. The proposal was not taken up nationally, but by 1967 a pilot experiment had demonstrated the feasibility of packet-switched networks. He and his team were the first to use the term 'protocol' in a data-communication context in 1967.
By 1969 he had begun building the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL, followed by ARPANET, were the first two networks in the world to use packet switching, and were interconnected in the early 1970s. The NPL team also carried out simulation work on packet networks, including datagram networks.
Robert Taylor was promoted to head of the Information Processing Techniques Office (IPTO) at the Defense Advanced Research Projects Agency (DARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system. As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). The need for networking became obvious to Taylor from the waste of resources apparent in maintaining a separate terminal for each system.
Bringing in Larry Roberts from MIT, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's proposal for using Interface Message Processors to create a message switching network. At the conference, Roger Scantlebury presented Donald Davies' work on packet switching for data communications and mentioned the work of Paul Baran at RAND. Roberts incorporated the packet switching concepts into the ARPANET design and upgraded the proposed communications speed from 2.4 kbps to 50 kbps.
ARPA awarded the contract to build the network to Bolt Beranek & Newman, and the first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.
By December 1969, a four-node network was connected by adding the University of Utah and the University of California, Santa Barbara. In the same year, Taylor helped fund ALOHAnet, a system designed by professor Norman Abramson and others at the University of Hawaii at Manoa that transmitted data by radio between seven computers on four islands in Hawaii. By 1981, the number of hosts had grown to 213.
ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in a 1972 documentary film.
ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. The early ARPANET used the Network Control Program (NCP, sometimes Network Control Protocol) rather than TCP/IP. On January 1, 1983, known as flag day, NCP on the ARPANET was replaced by the more flexible and powerful family of TCP/IP protocols, marking the start of the modern Internet.
Early international collaborations on ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR), via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London which provided a gateway to British academic networks.
The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971, when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms. Concepts of this network influenced later ARPANET architecture.
Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (ITU-T) in the form of X.25 and related standards. X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.
The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.
Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network was possibly one of the first examples of Internet technology advancing through popular diffusion.
With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann and Louis Pouzin (designer of the CYCLADES network), and his graduate students Judy Estrin, Richard Karp, Yogen Dalal and Carl Sunshine with important work on this design. Concurrently, an International Networking Working Group formed in 1972, led by Cerf; active members included Alex McKenzie, Donald Davies, Roger Scantlebury, Louis Pouzin and Hubert Zimmermann.
The specification of the resulting protocol, the Transmission Control Protocol (TCP), was published as RFC 675 by the Network Working Group in December 1974. It contains the first attested use of the term "internet", as a shorthand for "internetwork".
Between 1976 and 1977, Yogen Dalal proposed separating TCP's routing and transmission control functions into two discrete layers, which led to the splitting of TCP into the TCP and IP protocols, and the development of TCP/IP.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently of their detailed characteristics, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three-network demonstration was conducted, including the ARPANET, SRI's Packet Radio Van on the Packet Radio Network, and the Atlantic Packet Satellite network.
Stemming from the first specifications of TCP in 1974, TCP/IP emerged in 1978 in nearly its final form, as used for the first decades of the Internet; IPv4 is described in IETF publication RFC 791 (September 1981).
IPv4 uses 32-bit addresses, which limits the address space to 2^32 (4,294,967,296) addresses. The last available blocks of IPv4 addresses were assigned in January 2011. IPv4 is being replaced by its successor, called "IPv6", which uses 128-bit addresses, providing 2^128 (approximately 3.4×10^38) addresses, a vastly increased address space. The shift to IPv6 is expected to take many years, decades, or perhaps longer, to complete, since there were four billion machines with IPv4 when the shift began.
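The address-space arithmetic above can be checked directly; the following short Python sketch, using the standard library's ipaddress module purely for illustration, computes both sizes and the last IPv4 address:

```python
import ipaddress

ipv4_space = 2 ** 32    # 4,294,967,296 addresses
ipv6_space = 2 ** 128   # roughly 3.4e38 addresses

print(ipv4_space)                             # 4294967296
print(f"{ipv6_space:.1e}")                    # 3.4e+38
print(ipaddress.IPv4Address(ipv4_space - 1))  # 255.255.255.255, the last IPv4 address
```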
The associated standards for IPv4 were published by 1981 as RFCs 791, 792 and 793, and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier NCP protocol.
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.
Several other U.S. government agencies, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these agencies developed the first wide area networks based on TCP/IP: NASA developed the NASA Science Network, NSF developed CSNET, and DOE evolved the Energy Sciences Network, or ESNet.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange.
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.
NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet service providers.
The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as a short form of "internetworking", when the two terms were used interchangeably. In general, an internet was any collection of networks linked by a common protocol. After the ARPANET was interconnected with the newly formed NSFNET in the late 1980s, the term came to be used as the name of that network: the Internet, the large, global TCP/IP network.
As interest in networking grew by needs of collaboration, exchange of data, and access of remote computing resources, the TCP/IP technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.
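The route aggregation that CIDR enables can be illustrated (with hypothetical example prefixes) using Python's standard ipaddress module: four contiguous /24 networks collapse into a single /22 routing-table entry.

```python
import ipaddress

# Four contiguous /24 prefixes (hypothetical addresses for illustration).
prefixes = [ipaddress.ip_network(f"198.51.{third}.0/24") for third in range(100, 104)]

# CIDR allows them to be advertised as one aggregated route,
# shrinking the routing table from four entries to one.
aggregated = list(ipaddress.collapse_addresses(prefixes))
print(aggregated)  # [IPv4Network('198.51.100.0/22')]
```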
In 1982, NORSAR and Peter Kirstein's group at University College London (UCL) left the ARPANET and began to use TCP/IP over satellite links. UCL provided access between the Internet and academic networks in the UK.
Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established.
In 1988, the first international connections to NSFNET were established by France's INRIA and by Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about transitioning EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden. In January 1989, CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
In 1991 JANET, the UK national research and education network adopted Internet Protocol on the existing network. The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol. The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992.
At the same time as the rise of internetworking in Europe, ad hoc networking formed between Australian universities, and to ARPA, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks due to the cost of individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia. New Zealand's first international Internet connection was established the same year.
In May 1982, South Korea set up a two-node domestic TCP/IP network, adding a third node the following year. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989, marking the spread of the Internet to Asia. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they have been building organizations for Internet resource administration and for sharing operational experience as more and more transmission facilities go into place.
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd. (a privately held firm in Kampala now known as InfoCom) and NSN Network Services of Avon, Colorado (sold in 1997 and now known as Clear Channel Satellite) established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.
Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).
South Korea's first Internet system, the System Development Network (SDN), began operation on 15 May 1982. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the U.S. Internet in 1990.
In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use.
However, interest in commercial use of the Internet quickly became a commonly debated topic. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. (Some UUCP links remained connected to these networks, however, as administrators turned a blind eye to their operation.)
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, which opened in 1989.
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned that commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.
By 1990, ARPANET's goals had been fulfilled and new networking technologies exceeded the original scope, and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service and the service ended. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States.
The World Wide Web (sometimes abbreviated "www" or "W3") is an information space where documents and other web resources are identified by URIs, interlinked by hypertext links, and can be accessed via the Internet using a web browser and (more recently) web-based applications. It has become known simply as "the Web". As of the 2010s, the World Wide Web is the primary tool billions use to interact on the Internet, and it has changed people's lives immeasurably.
Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s (the bare concept of hyperlinking had by then existed for some decades). Following these, Tim Berners-Lee is credited with inventing the World Wide Web in 1989 and developing in 1990 both the first web server, and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon developed, with Marc Andreessen's 1993 Mosaic (later Netscape), being particularly easy to use and install, and often credited with sparking the Internet boom of the 1990s. Other major web browsers have been Internet Explorer, Firefox, Google Chrome, Microsoft Edge, Opera and Safari.
NCSA Mosaic was a graphical browser which ran on several popular office and home computers. It is credited with first bringing multimedia content to non-technical users by including images and text on the same page, unlike previous browser designs. Marc Andreessen, its creator, also established the company that in 1994 released Netscape Navigator, which resulted in one of the early browser wars, when it ended up in a competition for dominance (which it lost) with Microsoft's Internet Explorer. Commercial use restrictions were lifted in 1995. The online service America Online (AOL) offered its users a connection to the Internet via its own internal browser.
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To provide context for this period: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used for business and were not a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked the means to record or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (to DVD and, to an extent still, from floppy disc to CD). Enabling technologies used from the early 2000s, such as PHP, modern JavaScript and Java, AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks, which simplified and sped up web development, largely awaited invention and eventual widespread adoption.
The Internet was widely used for mailing lists, email, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. It took a number of events in the early 2000s for the Internet to gradually develop from a communications technology into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.)
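The server-side mechanics of that era were simple: a CGI program printed an HTTP header block, a blank line, and then the page. The sketch below is illustrative only (period scripts were more often written in Perl or C, and the page content here is invented):

```python
#!/usr/bin/env python3
# Minimal "Web 1.0"-style CGI response: the web server executes the
# script and relays whatever it prints back to the browser.
import datetime

def render():
    # Header block, then a blank line, then the HTML body.
    body = ("<html><body><h1>My home page</h1>"
            f"<p>Generated {datetime.datetime.now():%Y-%m-%d %H:%M}</p>"
            "</body></html>")
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(render())
```

Because each request re-ran the whole program, CGI scaled poorly, which is one reason later dynamic-language web applications displaced it.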
During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top-level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued.
The changes that would propel the Internet into its place as a social system took place during a relatively short period of no more than five years, starting from around 2004. They included:
and shortly after (approximately 2007–2008 onward):
With the emergence of the term Web 2.0, the period up to around 2004–2005 was retrospectively named and described by some as Web 1.0.
The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote:
The term resurfaced during 2002–2004, and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.
Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach, in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking sites, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in his 3rd Edition of "New Media" described what he believed to characterize the differences between Web 1.0 and Web 2.0:
This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.
The process of change that generally coincided with "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location-based) became common, with posts tagged by location and websites and services becoming location-aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from only a few years before, on far lower power usage, became enablers of this stage of Internet development, and the term "app" (short for "application program") emerged, as did the "app store".
The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku-band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s, NASA and Google began working on a new network protocol, delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as standard TCP/IP does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and is scheduled to continue until March 2014.
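The store-and-forward behaviour described above can be modelled in a few lines of Python. This is a toy sketch under stated assumptions (a single hop, and a link that simply reports up or down), not NASA's actual bundle protocol:

```python
def dtn_send(bundle, link_up, max_attempts=10):
    """Keep the bundle in custody and retry whenever the link is
    available, instead of dropping it as a timed-out TCP connection
    would."""
    for attempt in range(1, max_attempts + 1):
        if link_up():
            return f"delivered on attempt {attempt}"
        # Link down (e.g. spacecraft behind the Moon): the bundle
        # stays queued until the next contact window.
    return "still in custody"

# Simulate a link that is down for two contact windows, then up.
windows = iter([False, False, True])
result = dtn_send("telemetry bundle", lambda: next(windows))
print(result)  # delivered on attempt 3
```

The key design difference from TCP/IP is that custody of the data is retained at each node across outages rather than the end points being required to stay connected.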
This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his premature death in 1998.
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
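To see why the flat file became cumbersome, consider a toy lookup against a HOSTS.TXT-style table (the entries and comment syntax here are illustrative, not the historical file): every host needed a fresh copy of the entire table whenever any entry changed, whereas DNS delegates each domain to its own authoritative servers.

```python
# Illustrative HOSTS.TXT-style table: one flat file mapping every
# host name to an address, redistributed in full on every change.
HOSTS_TXT = """\
; HOST TABLE (illustrative excerpt, not the historical file)
SRI-NIC     10.0.0.51
UCLA-TEST   10.1.0.1
MIT-AI      10.2.0.6
"""

def lookup(name):
    """Linear scan of the whole table: workable for dozens of
    hosts, unmanageable for a rapidly growing network."""
    for line in HOSTS_TXT.splitlines():
        entry = line.split(";")[0].strip()  # drop comments
        if not entry:
            continue
        host, addr = entry.split()
        if host.lower() == name.lower():
            return addr
    raise KeyError(name)

print(lookup("mit-ai"))  # 10.2.0.6
```

DNS replaced this single shared file with a distributed, hierarchical database, so a name change only needs to be made on the servers authoritative for that domain.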
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region.
The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of the IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation and became the third Regional Internet Registry.
In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry / registrar system, with competition among registry service providers to serve each top-level-domain and multiple competing registrars offering DNS services to end-users.
The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into "Working Groups". Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.
The IETF grew out of quarterly meetings of U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited beginning with the fourth IETF meeting, in October 1986. The concept of Working Groups was introduced at the fifth meeting, in February 1987. The seventh meeting, in July 1987, was the first with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed, and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as approximately 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The proportion of non-US attendees is typically around 50%, even at meetings held in the United States.
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer term research issues.
Request for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development and were edited by Jon Postel, the first RFC Editor.
RFCs cover a wide range of information from proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics. RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.
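The "obsoletes" relationship behaves like a simple forwarding chain, which can be modelled directly. The mapping below contains one real example (RFC 791, the Internet Protocol specification, obsoletes RFC 760); a full index would carry many such links:

```python
# An RFC is never revised in place; a successor obsoletes it.
# Finding the current document means following the chain forward.
OBSOLETED_BY = {760: 791}  # RFC 760 was obsoleted by RFC 791

def latest(rfc_number, chain=OBSOLETED_BY):
    """Follow obsoleted-by links until reaching a current RFC."""
    while rfc_number in chain:
        rfc_number = chain[rfc_number]
    return rfc_number

print(latest(760))  # 791
print(latest(791))  # 791 (already current in this mapping)
```

This immutability is deliberate: a citation of an RFC number always refers to exactly one fixed text, and readers consult the chain to find its successor.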
The Internet Society (ISOC) is an international, nonprofit organization founded during 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.
ISOC provides financial and organizational support to and promotes the work of the standards settings bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making.
Since the 1990s, the Internet's governance and organization has been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to end its relationship with the University of Southern California in 2000 and, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.
In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.
Tim Berners-Lee, inventor of the World Wide Web, was becoming concerned about threats to the web's future, and in November 2009, at the IGF in Washington DC, he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all. In November 2019, at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the "Contract for the Web", a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.
Examples include political activities such as public protest and the canvassing of support and votes, as well as debates over how the network itself should be regulated:
On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress HR discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to "The New York Times".
On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept."
On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations.
On December 14, 2017, the FCC repealed its March 12, 2015 net neutrality rules by a 3–2 vote.
E-mail has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT.
The ARPANET computer network made a large contribution to the evolution of electronic mail. An experimental inter-system mail transfer took place on the ARPANET shortly after its creation. In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.
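Tomlinson's mailbox@host convention survives in modern email addresses. A minimal, illustrative sketch of splitting such an address (the helper name and the sample address below are hypothetical, not from the source):

```python
def parse_address(address: str) -> tuple[str, str]:
    """Split an Internet mail address of the form mailbox@host
    into its (mailbox, host) parts, splitting at the first @."""
    mailbox, sep, host = address.partition("@")
    if not sep or not mailbox or not host:
        raise ValueError(f"not a valid mailbox@host address: {address!r}")
    return mailbox, host

# Example: a hypothetical early-style address
print(parse_address("tomlinson@bbn-tenexa"))  # ('tomlinson', 'bbn-tenexa')
```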
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable due to the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using ftp commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as the early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
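The encode, split, and reassemble workflow those gateways relied on can be sketched as follows. This is an illustration only: it uses Base64 as a modern stand-in for the uuencoding tools of the era, and the chunk size is arbitrary:

```python
import base64

def split_for_mail(data: bytes, chunk_size: int = 60) -> list[str]:
    """Sender side: encode a binary file as plain text, then break the
    text into pieces small enough to travel in separate mail messages."""
    encoded = base64.b64encode(data).decode("ascii")
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

def reassemble(pieces: list[str]) -> bytes:
    """Receiver side: concatenate the pieces and decode back to binary."""
    return base64.b64decode("".join(pieces))

# Round trip: any binary payload survives the text-only channel.
payload = bytes(range(256))
assert reassemble(split_for_mail(payload)) == payload
```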
As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Archie, Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. In the early 1990s, Gopher, invented by Mark P. McCahill, offered a viable alternative to the World Wide Web. However, in 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As popularity increased through ease of use, investment incentives also grew until in the middle of 1994 the WWW's popularity gained the upper hand. Then it became clear that Gopher and the other projects were doomed to fall short.
One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu, Douglas Engelbart's research on NLS and Augment, and Andries van Dam's research from HES in 1968, through FRESS, Intermedia, and others. Many small self-contained hypertext systems had been created as well, such as Apple Computer's HyperCard (1987). Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.
In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he encouraged widespread use. For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW.
A turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the "Gore Bill". Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign. See the full article Al Gore and information technology).
Mosaic was superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event, held on January 11, 1994, was "The Superhighway Summit" at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the "Information Superhighway" and its implications."
"24 Hours in Cyberspace", "the largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website, "cyber24.com." It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project.
Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.
As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular—Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.
Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.
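The core idea behind PageRank (a page is important if important pages link to it) can be sketched with a simple power iteration. This is only an illustration of the principle, not Google's production algorithm, and the sample graph is hypothetical:

```python
def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each page to the list of pages it links to.
    Returns an approximate PageRank score for every page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:  # pass rank along each outgoing link
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Tiny hypothetical web: "a" is linked to by both other pages,
# so it ends up with the highest score.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```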
On June 3, 2009, Microsoft launched its new search engine, Bing. The following month Microsoft and Yahoo! announced a deal in which Bing would power Yahoo! Search.
Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.
In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003.
All of these tools are general purpose and can be used to share a wide variety of content, but the sharing of music files, software, and later movies and videos have been major uses. While some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other.
Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Entrepreneurs around the world developed new business models, and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and did not manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.
The dot-com bubble burst in March 2000, with the technology-heavy NASDAQ Composite index peaking at 5,048.62 on March 10 (5,132.52 intraday), more than double its value just a year before. By 2001, the bubble's deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit. Despite this, the Internet continued to grow, driven by commerce, ever greater amounts of online information and knowledge, and social networking.
The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet service access on mobile phones remained limited until prices came down from that model's level and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for its BlackBerry product was launched in America. To make efficient use of the small screen, tiny keypad, and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.
Web pages were initially conceived as structured documents based upon Hypertext Markup Language (HTML) which can allow access to images, video, and other content. Hyperlinks in the page permit users to navigate to other pages. In the earliest browsers, images opened in a separate "helper" application. Marc Andreessen's 1993 Mosaic and 1994 Netscape introduced mixed text and images for non-technical users. HTML evolved during the 1990s, leading to HTML 4 which introduced large elements of CSS styling and, later, extensions to allow browser code to make calls and ask for content from servers in a structured way (AJAX).
There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. A sense of the difficulty in documenting the early developments that led to the Internet can be gathered from the first-hand accounts of those involved.
Horace
Quintus Horatius Flaccus (8 December 65 BC – 27 November 8 BC), known in the English-speaking world as Horace, was the leading Roman lyric poet during the time of Augustus (also known as Octavian). The rhetorician Quintilian regarded his "Odes" as just about the only Latin lyrics worth reading: "He can be lofty sometimes, yet he is also full of charm and grace, versatile in his figures, and felicitously daring in his choice of words."
Horace also crafted elegant hexameter verses ("Satires" and "Epistles") and caustic iambic poetry ("Epodes"). The hexameters are amusing yet serious works, friendly in tone, leading the ancient satirist Persius to comment: "as his friend laughs, Horace slyly puts his finger on his every fault; once let in, he plays about the heartstrings".
His career coincided with Rome's momentous change from a republic to an empire. An officer in the republican army defeated at the Battle of Philippi in 42 BC, he was befriended by Octavian's right-hand man in civil affairs, Maecenas, and became a spokesman for the new regime. For some commentators, his association with the regime was a delicate balance in which he maintained a strong measure of independence (he was "a master of the graceful sidestep") but for others he was, in John Dryden's phrase, "a well-mannered court slave".
Horace can be regarded as the world's first autobiographer. In his writings, he tells us far more about himself, his character, his development, and his way of life, than any other great poet of antiquity. Some of the biographical material contained in his work can be supplemented from the short but valuable "Life of Horace" by Suetonius (in his "Lives of the Poets").
He was born on 8 December 65 BC in the Samnite south of Italy. His home town, Venusia, lay on a trade route in the border region between Apulia and Lucania (Basilicata). Various Italic dialects were spoken in the area and this perhaps enriched his feeling for language. He could have been familiar with Greek words even as a young boy and later he poked fun at the jargon of mixed Greek and Oscan spoken in neighbouring Canusium. One of the works he probably studied in school was the "Odyssia" of Livius Andronicus, taught by teachers like the 'Orbilius' mentioned in one of his poems. Army veterans could have been settled there at the expense of local families uprooted by Rome as punishment for their part in the Social War (91–88 BC). Such state-sponsored migration must have added still more linguistic variety to the area. According to a local tradition reported by Horace, a colony of Romans or Latins had been installed in Venusia after the Samnites had been driven out early in the third century. In that case, young Horace could have felt himself to be a Roman though there are also indications that he regarded himself as a Samnite or Sabellus by birth. Italians in modern and ancient times have always been devoted to their home towns, even after success in the wider world, and Horace was no different. Images of his childhood setting and references to it are found throughout his poems.
Horace's father was probably a Venutian taken captive by Romans in the Social War, or possibly he was descended from a Sabine captured in the Samnite Wars. Either way, he was a slave for at least part of his life. He was evidently a man of strong abilities, however, and managed to gain his freedom and improve his social position. Thus Horace claimed to be the free-born son of a prosperous 'coactor'. The term 'coactor' could denote various roles, such as tax collector, but its use by Horace was explained by scholia as a reference to 'coactor argentareus', i.e. an auctioneer with some of the functions of a banker, paying the seller out of his own funds and later recovering the sum with interest from the buyer.
The father spent a small fortune on his son's education, eventually accompanying him to Rome to oversee his schooling and moral development. The poet later paid tribute to him in a poem that one modern scholar considers the best memorial by any son to his father. The poem includes this passage:
If my character is flawed by a few minor faults, but is otherwise decent and moral, if you can point out only a few scattered blemishes on an otherwise immaculate surface, if no one can accuse me of greed, or of prurience, or of profligacy, if I live a virtuous life, free of defilement (pardon, for a moment, my self-praise), and if I am to my friends a good friend, my father deserves all the credit... As it is now, he deserves from me unstinting gratitude and praise. I could never be ashamed of such a father, nor do I feel any need, as many people do, to apologize for being a freedman's son. "Satires 1.6.65–92"
He never mentioned his mother in his verses and he might not have known much about her. Perhaps she also had been a slave.
Horace left Rome, possibly after his father's death, and continued his formal education in Athens, a great centre of learning in the ancient world, where he arrived at nineteen years of age, enrolling in The Academy. Founded by Plato, The Academy was now dominated by Epicureans and Stoics, whose theories and practices made a deep impression on the young man from Venusia. Meanwhile, he mixed and lounged about with the elite of Roman youth, such as Marcus, the idle son of Cicero, and the Pompeius to whom he later addressed a poem. It was in Athens too that he probably acquired deep familiarity with the ancient tradition of Greek lyric poetry, at that time largely the preserve of grammarians and academic specialists (access to such material was easier in Athens than in Rome, where the public libraries had yet to be built by Asinius Pollio and Augustus).
Rome's troubles following the assassination of Julius Caesar were soon to catch up with him. Marcus Junius Brutus came to Athens seeking support for the republican cause. Brutus was fêted around town in grand receptions and he made a point of attending academic lectures, all the while recruiting supporters among the young men studying there, including Horace. An educated young Roman could begin military service high in the ranks and Horace was made tribunus militum (one of six senior officers of a typical legion), a post usually reserved for men of senatorial or equestrian rank and which seems to have inspired jealousy among his well-born confederates. He learned the basics of military life while on the march, particularly in the wilds of northern Greece, whose rugged scenery became a backdrop to some of his later poems. It was there in 42 BC that Octavian (later Augustus) and his associate Mark Antony crushed the republican forces at the Battle of Philippi. Horace later recorded it as a day of embarrassment for himself, when he fled without his shield, but allowance should be made for his self-deprecating humour. Moreover, the incident allowed him to identify himself with some famous poets who had long ago abandoned their shields in battle, notably his heroes Alcaeus and Archilochus. The comparison with the latter poet is uncanny: Archilochus lost his shield in a part of Thrace near Philippi, and he was deeply involved in the Greek colonization of Thasos, where Horace's die-hard comrades finally surrendered.
Octavian offered an early amnesty to his opponents and Horace quickly accepted it. On returning to Italy, he was confronted with yet another loss: his father's estate in Venusia was one of many throughout Italy to be confiscated for the settlement of veterans (Virgil lost his estate in the north about the same time). Horace later claimed that he was reduced to poverty and this led him to try his hand at poetry. In reality, there was no money to be had from versifying. At best, it offered future prospects through contacts with other poets and their patrons among the rich. Meanwhile, he obtained the sinecure of "scriba quaestorius", a civil service position at the "aerarium" or Treasury, profitable enough to be purchased even by members of the "ordo equester" and not very demanding in its work-load, since tasks could be delegated to "scribae" or permanent clerks. It was about this time that he began writing his "Satires" and "Epodes".
The "Epodes" belong to iambic poetry. Iambic poetry features insulting and obscene language; sometimes, it is referred to as "blame poetry". "Blame poetry", or "shame poetry", is poetry written to blame and shame fellow citizens into a sense of their social obligations. Horace modelled these poems on the poetry of Archilochus. Social bonds in Rome had been decaying since the destruction of Carthage a little more than a hundred years earlier, due to the vast wealth that could be gained by plunder and corruption. These social ills were magnified by rivalry between Julius Caesar, Mark Antony and confederates like Sextus Pompey, all jockeying for a bigger share of the spoils. One modern scholar has counted a dozen civil wars in the hundred years leading up to 31 BC, including the Spartacus rebellion, eight years before Horace's birth. As the heirs to Hellenistic culture, Horace and his fellow Romans were not well prepared to deal with these problems:
Horace's Hellenistic background is clear in his Satires, even though the genre was unique to Latin literature. He brought to it a style and outlook suited to the social and ethical issues confronting Rome but he changed its role from public, social engagement to private meditation. Meanwhile, he was beginning to interest Octavian's supporters, a gradual process described by him in one of his satires. The way was opened for him by his friend, the poet Virgil, who had gained admission into the privileged circle around Maecenas, Octavian's lieutenant, following the success of his "Eclogues". An introduction soon followed and, after a discreet interval, Horace too was accepted. He depicted the process as an honourable one, based on merit and mutual respect, eventually leading to true friendship, and there is reason to believe that his relationship was genuinely friendly, not just with Maecenas but afterwards with Augustus as well. On the other hand, the poet has been unsympathetically described by one scholar as "a sharp and rising young man, with an eye to the main chance." There were advantages on both sides: Horace gained encouragement and material support, the politicians gained a hold on a potential dissident. His republican sympathies, and his role at Philippi, may have caused him some pangs of remorse over his new status. However most Romans considered the civil wars to be the result of "contentio dignitatis", or rivalry between the foremost families of the city, and he too seems to have accepted the principate as Rome's last hope for much needed peace.
In 37 BC, Horace accompanied Maecenas on a journey to Brundisium, described in one of his poems as a series of amusing incidents and charming encounters with other friends along the way, such as Virgil. In fact the journey was political in its motivation, with Maecenas en route to negotiate the Treaty of Tarentum with Antony, a fact Horace artfully keeps from the reader (political issues are largely avoided in the first book of satires). Horace was probably also with Maecenas on one of Octavian's naval expeditions against the piratical Sextus Pompeius, which ended in a disastrous storm off Palinurus in 36 BC, briefly alluded to by Horace in terms of near-drowning. There are also some indications in his verses that he was with Maecenas at the Battle of Actium in 31 BC, where Octavian defeated his great rival, Antony. By then Horace had already received from Maecenas the famous gift of his Sabine farm, probably not long after the publication of the first book of "Satires". The gift, which included income from five tenants, may have ended his career at the Treasury, or at least allowed him to give it less time and energy. It signalled his identification with the Octavian regime yet, in the second book of "Satires" that soon followed, he continued the apolitical stance of the first book. By this time, he had attained the status of "eques Romanus", perhaps as a result of his work at the Treasury.
"Odes" 1–3 were the next focus for his artistic creativity. He adapted their forms and themes from Greek lyric poetry of the seventh and sixth centuries BC. The fragmented nature of the Greek world had enabled his literary heroes to express themselves freely and his semi-retirement from the Treasury in Rome to his own estate in the Sabine hills perhaps empowered him to some extent also yet even when his lyrics touched on public affairs they reinforced the importance of private life. Nevertheless, his work in the period 30–27 BC began to show his closeness to the regime and his sensitivity to its developing ideology. In "Odes" 1.2, for example, he eulogized Octavian in hyperboles that echo Hellenistic court poetry. The name "Augustus", which Octavian assumed in January 27 BC, is first attested in "Odes" 3.3 and 3.5. In the period 27–24 BC, political allusions in the "Odes" concentrated on foreign wars in Britain (1.35), Arabia (1.29) Spain (3.8) and Parthia (2.2). He greeted Augustus on his return to Rome in 24 BC as a beloved ruler upon whose good health he depended for his own happiness (3.14).
The public reception of "Odes" 1–3 disappointed him, however. He attributed the lack of success to jealousy among imperial courtiers and to his isolation from literary cliques. Perhaps it was disappointment that led him to put aside the genre in favour of verse letters. He addressed his first book of "Epistles" to a variety of friends and acquaintances in an urbane style reflecting his new social status as a knight. In the opening poem, he professed a deeper interest in moral philosophy than poetry but, though the collection demonstrates a leaning towards stoic theory, it reveals no sustained thinking about ethics. Maecenas was still the dominant confidant, but Horace had now begun to assert his own independence, suavely declining constant invitations to attend his patron. In the final poem of the first book of "Epistles", he revealed himself to be forty-four years old in the consulship of Lollius and Lepidus (21 BC), and "of small stature, fond of the sun, prematurely grey, quick-tempered but easily placated".
According to Suetonius, the second book of "Epistles" was prompted by Augustus, who desired a verse epistle to be addressed to himself. Augustus was in fact a prolific letter-writer and he once asked Horace to be his personal secretary. Horace refused the secretarial role but complied with the emperor's request for a verse letter. The letter to Augustus may have been slow in coming, being published possibly as late as 11 BC. It celebrated, among other things, the 15 BC military victories of his stepsons, Drusus and Tiberius, yet it and the following letter were largely devoted to literary theory and criticism. The literary theme was explored still further in "Ars Poetica", published separately but written in the form of an epistle and sometimes referred to as "Epistles" 2.3 (possibly the last poem he ever wrote). He was also commissioned to write odes commemorating the victories of Drusus and Tiberius and one to be sung in a temple of Apollo for the Secular Games, a long-abandoned festival that Augustus revived in accordance with his policy of recreating ancient customs ("Carmen Saeculare").
Suetonius recorded some gossip about Horace's sexual activities late in life, claiming that the walls of his bedchamber were covered with obscene pictures and mirrors, so that he saw erotica wherever he looked. The poet died at 56 years of age, not long after his friend Maecenas, near whose tomb he was laid to rest. Both men bequeathed their property to Augustus, an honour that the emperor expected of his friends.
The dating of Horace's works is not known precisely and scholars often debate the exact order in which they were first 'published'. There are persuasive arguments for the following chronology:
Horace composed in traditional metres borrowed from Archaic Greece, employing hexameters in his "Satires" and "Epistles", and iambs in his "Epodes", all of which were relatively easy to adapt into Latin forms. His "Odes" featured more complex measures, including alcaics and sapphics, which were sometimes a difficult fit for Latin structure and syntax. Despite these traditional metres, he presented himself as a partisan in the development of a new and sophisticated style. He was influenced in particular by Hellenistic aesthetics of brevity, elegance and polish, as modelled in the work of Callimachus.
In modern literary theory, a distinction is often made between immediate personal experience ("Urerlebnis") and experience mediated by cultural vectors such as literature, philosophy and the visual arts ("Bildungserlebnis"). The distinction has little relevance for Horace however since his personal and literary experiences are implicated in each other. "Satires" 1.5, for example, recounts in detail a real trip Horace made with Virgil and some of his other literary friends, and which parallels a Satire by Lucilius, his predecessor. Unlike much Hellenistic-inspired literature, however, his poetry was not composed for a small coterie of admirers and fellow poets, nor does it rely on abstruse allusions for many of its effects. Though elitist in its literary standards, it was written for a wide audience, as a public form of art. Ambivalence also characterizes his literary persona, since his presentation of himself as part of a small community of philosophically aware people, seeking true peace of mind while shunning vices like greed, was well adapted to Augustus's plans to reform public morality, corrupted by greed—his personal plea for moderation was part of the emperor's grand message to the nation.
Horace generally followed the examples of poets established as classics in different genres, such as Archilochus in the "Epodes", Lucilius in the "Satires" and Alcaeus in the "Odes", later broadening his scope for the sake of variation and because his models were not actually suited to the realities confronting him. Archilochus and Alcaeus were aristocratic Greeks whose poetry had a social and religious function that was immediately intelligible to their audiences but which became a mere artifice or literary motif when transposed to Rome. However, the artifice of the "Odes" is also integral to their success, since they could now accommodate a wide range of emotional effects, and the blend of Greek and Roman elements adds a sense of detachment and universality. Horace proudly claimed to introduce into Latin the spirit and iambic poetry of Archilochus but (unlike Archilochus) without persecuting anyone ("Epistles" 1.19.23–25). It was no idle boast. His "Epodes" were modelled on the verses of the Greek poet, as 'blame poetry', yet he avoided targeting real scapegoats. Whereas Archilochus presented himself as a serious and vigorous opponent of wrong-doers, Horace aimed for comic effects and adopted the persona of a weak and ineffectual critic of his times (as symbolized for example in his surrender to the witch Canidia in the final epode). He also claimed to be the first to introduce into Latin the lyrical methods of Alcaeus ("Epistles" 1.19.32–33) and he actually was the first Latin poet to make consistent use of Alcaic meters and themes: love, politics and the symposium. He imitated other Greek lyric poets as well, employing a 'motto' technique, beginning each ode with some reference to a Greek original and then diverging from it.
The satirical poet Lucilius was a senator's son who could castigate his peers with impunity. Horace was a mere freedman's son who had to tread carefully. Lucilius was a rugged patriot and a significant voice in Roman self-awareness, endearing himself to his countrymen by his blunt frankness and explicit politics. His work expressed genuine freedom or libertas. His style included 'metrical vandalism' and looseness of structure. Horace instead adopted an oblique and ironic style of satire, ridiculing stock characters and anonymous targets. His libertas was the private freedom of a philosophical outlook, not a political or social privilege. His "Satires" are relatively easy-going in their use of meter (relative to the tight lyric meters of the "Odes") but formal and highly controlled relative to the poems of Lucilius, whom Horace mocked for his sloppy standards ("Satires" 1.10.56–61).
The "Epistles" may be considered among Horace's most innovative works. There was nothing like it in Greek or Roman literature. Occasionally poems had had some resemblance to letters, including an elegiac poem from Solon to Mimnermus and some lyrical poems from Pindar to Hieron of Syracuse. Lucilius had composed a satire in the form of a letter, and some epistolary poems were composed by Catullus and Propertius. But nobody before Horace had ever composed an entire collection of verse letters, let alone letters with a focus on philosophical problems. The sophisticated and flexible style that he had developed in his "Satires" was adapted to the more serious needs of this new genre. Such refinement of style was not unusual for Horace. His craftsmanship as a wordsmith is apparent even in his earliest attempts at this or that kind of poetry, but his handling of each genre tended to improve over time as he adapted it to his own needs. Thus for example it is generally agreed that his second book of "Satires", where human folly is revealed through dialogue between characters, is superior to the first, where he propounds his ethics in monologues. Nevertheless, the first book includes some of his most popular poems.
Horace developed a number of inter-related themes throughout his poetic career, including politics, love, philosophy and ethics, his own social role, as well as poetry itself. His "Epodes" and "Satires" are forms of 'blame poetry' and both have a natural affinity with the moralising and diatribes of Cynicism. This often takes the form of allusions to the work and philosophy of Bion of Borysthenes but it is as much a literary game as a philosophical alignment. By the time he composed his "Epistles", he was a critic of Cynicism along with all impractical and "high-falutin" philosophy in general. The "Satires" also include a strong element of Epicureanism, with frequent allusions to the Epicurean poet Lucretius. So for example the Epicurean sentiment "carpe diem" is the inspiration behind Horace's repeated punning on his own name ("Horatius ~ hora") in "Satires" 2.6. The "Satires" also feature some Stoic, Peripatetic and Platonic ("Dialogues") elements. In short, the "Satires" present a medley of philosophical programs, dished up in no particular order—a style of argument typical of the genre. The "Odes" display a wide range of topics. Over time, he becomes more confident about his political voice. Although he is often thought of as an overly intellectual lover, he is ingenious in representing passion. The "Odes" weave various philosophical strands together, with allusions and statements of doctrine present in about a third of the "Odes" Books 1–3, ranging from the flippant (1.22, 3.28) to the solemn (2.10, 3.2, 3.3). Epicureanism is the dominant influence, characterizing about twice as many of these odes as Stoicism. A group of odes combines these two influences in tense relationships, such as "Odes" 1.7, praising Stoic virility and devotion to public duty while also advocating private pleasures among friends. 
While generally favouring the Epicurean lifestyle, the lyric poet is as eclectic as the satiric poet, and in "Odes" 2.10 even proposes Aristotle's golden mean as a remedy for Rome's political troubles. Many of Horace's poems also contain much reflection on genre, the lyric tradition, and the function of poetry. "Odes" 4, thought to be composed at the emperor's request, takes the themes of the first three books of "Odes" to a new level. This book shows greater poetic confidence after the public performance of his "Carmen saeculare" or "Century hymn" at a public festival orchestrated by Augustus. In it, Horace addresses the emperor Augustus directly with more confidence and proclaims his power to grant poetic immortality to those he praises. It is the least philosophical collection of his verses, excepting the twelfth ode, addressed to the dead Virgil as if he were living. In that ode, the epic poet and the lyric poet are aligned with Stoicism and Epicureanism respectively, in a mood of bitter-sweet pathos. The first poem of the "Epistles" sets the philosophical tone for the rest of the collection: "So now I put aside both verses and all those other games: What is true and what befits is my care, this my question, this my whole concern." His poetic renunciation of poetry in favour of philosophy is intended to be ambiguous. Ambiguity is the hallmark of the "Epistles". It is uncertain if those being addressed by the self-mocking poet-philosopher are being honoured or criticized. Though he emerges as an Epicurean, it is on the understanding that philosophical preferences, like political and social choices, are a matter of personal taste. Thus he depicts the ups and downs of the philosophical life more realistically than do most philosophers.
The reception of Horace's work has varied from one epoch to another and varied markedly even in his own lifetime. "Odes" 1–3 were not well received when first 'published' in Rome, yet Augustus later commissioned a ceremonial ode for the Centennial Games in 17 BC and also encouraged the publication of "Odes" 4, after which Horace's reputation as Rome's premier lyricist was assured. His Odes were to become the best received of all his poems in ancient times, acquiring a classic status that discouraged imitation: no other poet produced a comparable body of lyrics in the four centuries that followed (though that might also be attributed to social causes, particularly the parasitism that Italy was sinking into). In the seventeenth and eighteenth centuries, ode-writing became highly fashionable in England and a large number of aspiring poets imitated Horace both in English and in Latin.
In a verse epistle to Augustus (Epistle 2.1), in 12 BC, Horace argued for classic status to be awarded to contemporary poets, including Virgil and apparently himself. In the final poem of his third book of Odes he claimed to have created for himself a monument more durable than bronze ("Exegi monumentum aere perennius", "Carmina" 3.30.1). For one modern scholar, however, Horace's personal qualities are more notable than the monumental quality of his achievement:
Yet for men like Wilfred Owen, scarred by experiences of World War I, his poetry stood for discredited values:
The same motto, "Dulce et decorum est pro patria mori", had been adapted to the ethos of martyrdom in the lyrics of early Christian poets like Prudentius.
These preliminary comments touch on a small sample of developments in the reception of Horace's work. More developments are covered epoch by epoch in the following sections.
Horace's influence can be observed in the work of his near contemporaries, Ovid and Propertius. Ovid followed his example in creating a completely natural style of expression in hexameter verse, and Propertius cheekily mimicked him in his third book of elegies. His "Epistles" provided them both with a model for their own verse letters and it also shaped Ovid's exile poetry.
His influence had a perverse aspect. As mentioned before, the brilliance of his "Odes" may have discouraged imitation. Conversely, they may have created a vogue for the lyrics of the archaic Greek poet Pindar, due to the fact that Horace had neglected that style of lyric (see Influence and Legacy of Pindar). The iambic genre seems almost to have disappeared after publication of Horace's "Epodes". Ovid's "Ibis" was a rare attempt at the form but it was inspired mainly by Callimachus, and there are some iambic elements in Martial but the main influence there was Catullus. A revival of popular interest in the satires of Lucilius may have been inspired by Horace's criticism of his unpolished style. Both Horace and Lucilius were considered good role-models by Persius, who critiqued his own satires as lacking both the acerbity of Lucilius and the gentler touch of Horace. Juvenal's caustic satire was influenced mainly by Lucilius but Horace by then was a school classic and Juvenal could refer to him respectfully and in a round-about way as ""the Venusine lamp"".
Statius paid homage to Horace by composing one poem in Sapphic and one in Alcaic meter (the verse forms most often associated with "Odes"), which he included in his collection of occasional poems, "Silvae". Ancient scholars wrote commentaries on the lyric meters of the "Odes", including the scholarly poet Caesius Bassus. By a process called "derivatio", he varied established meters through the addition or omission of syllables, a technique borrowed by Seneca the Younger when adapting Horatian meters to the stage.
Horace's poems continued to be school texts into late antiquity. Works attributed to Helenius Acro and Pomponius Porphyrio are the remnants of a much larger body of Horatian scholarship. Porphyrio arranged the poems in non-chronological order, beginning with the "Odes", because of their general popularity and their appeal to scholars (the "Odes" were to retain this privileged position in the medieval manuscript tradition and thus in modern editions also). Horace was often evoked by poets of the fourth century, such as Ausonius and Claudian. Prudentius presented himself as a Christian Horace, adapting Horatian meters to his own poetry and giving Horatian motifs a Christian tone. On the other hand, St Jerome modelled an uncompromising response to the pagan Horace, observing: ""What harmony can there be between Christ and the Devil? What has Horace to do with the Psalter?"" By the early sixth century, Horace and Prudentius were both part of a classical heritage that was struggling to survive the disorder of the times. Boethius, the last major author of classical Latin literature, could still take inspiration from Horace, sometimes mediated by Senecan tragedy. It can be argued that Horace's influence extended beyond poetry to dignify core themes and values of the early Christian era, such as self-sufficiency, inner contentment and courage.
Classical texts almost ceased being copied in the period between the mid sixth century and the Carolingian revival. Horace's work probably survived in just two or three books imported into northern Europe from Italy. These became the ancestors of six extant manuscripts dated to the ninth century. Two of those six manuscripts are French in origin, one was produced in Alsace, and the other three show Irish influence but were probably written in continental monasteries (Lombardy for example). By the last half of the ninth century, it was not uncommon for literate people to have direct experience of Horace's poetry. His influence on the Carolingian Renaissance can be found in the poems of Heiric of Auxerre and in some manuscripts marked with neumes, mysterious notations that may have been an aid to the memorization and discussion of his lyric meters. "Ode" is neumed with the melody of a hymn to John the Baptist, "Ut queant laxis", composed in Sapphic stanzas. This hymn later became the basis of the solfege system ("Do, re, mi..."), an association with western music quite appropriate for a lyric poet like Horace, though the language of the hymn is mainly Prudentian. Lyons argues that the melody in question was linked with Horace's Ode well before Guido d'Arezzo fitted Ut queant laxis to it. However, the melody is unlikely to be a survivor from classical times, although Ovid testifies to Horace's use of the lyre while performing his Odes.
The German scholar, Ludwig Traube, once dubbed the tenth and eleventh centuries "The age of Horace" ("aetas Horatiana"), and placed it between the "aetas Vergiliana" of the eighth and ninth centuries, and the "aetas Ovidiana" of the twelfth and thirteenth centuries, a distinction supposed to reflect the dominant classical Latin influences of those times. Such a distinction is over-schematized since Horace was a substantial influence in the ninth century as well. Traube had focused too much on Horace's "Satires". Almost all of Horace's work found favour in the Medieval period. In fact medieval scholars were also guilty of over-schematism, associating Horace's different genres with the different ages of man. A twelfth-century scholar encapsulated the theory: "...Horace wrote four different kinds of poems on account of the four ages, the "Odes" for boys, the "Ars Poetica" for young men, the "Satires" for mature men, the "Epistles" for old and complete men." It was even thought that Horace had composed his works in the order in which they had been placed by ancient scholars. Despite its naivety, the schematism involved an appreciation of Horace's works as a collection, the "Ars Poetica", "Satires" and "Epistles" appearing to find favour as well as the "Odes". The later Middle Ages however gave special significance to "Satires" and "Epistles", being considered Horace's mature works. Dante referred to Horace as "Orazio satiro", and he awarded him a privileged position in the first circle of Hell, with Homer, Ovid and Lucan.
Horace's popularity is revealed in the large number of quotes from all his works found in almost every genre of medieval literature, and also in the number of poets imitating him in quantitative Latin meter. The most prolific imitator of his "Odes" was the Bavarian monk, Metellus of Tegernsee, who dedicated his work to the patron saint of Tegernsee Abbey, St Quirinus, around the year 1170. He imitated all Horace's lyrical meters then followed these up with imitations of other meters used by Prudentius and Boethius, indicating that variety, as first modelled by Horace, was considered a fundamental aspect of the lyric genre. The content of his poems however was restricted to simple piety. Among the most successful imitators of "Satires" and "Epistles" was another Germanic author, calling himself Sextus Amarcius, around 1100, who composed four books, the first two exemplifying vices, the second pair mainly virtues.
Petrarch is a key figure in the imitation of Horace in accentual meters. His verse letters in Latin were modelled on the "Epistles" and he wrote a letter to Horace in the form of an ode. However he also borrowed from Horace when composing his Italian sonnets. One modern scholar has speculated that authors who imitated Horace in accentual rhythms (including stressed Latin and vernacular languages) may have considered their work a natural sequel to Horace's metrical variety. In France, Horace and Pindar were the poetic models for a group of vernacular authors called the Pléiade, including for example Pierre de Ronsard and Joachim du Bellay. Montaigne made constant and inventive use of Horatian quotes. The vernacular languages were dominant in Spain and Portugal in the sixteenth century, where Horace's influence is notable in the works of such authors as Garcilaso de la Vega, Juan Boscán, Sá de Miranda, Antonio Ferreira and Fray Luis de León, the last writing odes on the Horatian theme "beatus ille" ("happy the man"). The sixteenth century in western Europe was also an age of translations (except in Germany, where Horace wasn't translated into the vernacular until well into the seventeenth century). The first English translator was Thomas Drant, who placed translations of Jeremiah and Horace side by side in "Medicinable Morall", 1566. That was also the year that the Scot George Buchanan paraphrased the Psalms in a Horatian setting. Ben Jonson put Horace on the stage in 1601 in "Poetaster", along with other classical Latin authors, giving them all their own verses to speak in translation. Horace's part evinces the independent spirit, moral earnestness and critical insight that many readers look for in his poems.
During the seventeenth and eighteenth centuries, or the Age of Enlightenment, neoclassical culture was pervasive. English literature in the middle of that period has been dubbed Augustan. It is not always easy to distinguish Horace's influence during those centuries (the mixing of influences is shown for example in one poet's pseudonym, "Horace Juvenal"). However a measure of his influence can be found in the diversity of the people interested in his works, both among readers and authors.
New editions of his works were published almost yearly. There were three new editions in 1612 (two in Leiden, one in Frankfurt) and again in 1699 (Utrecht, Barcelona, Cambridge). Cheap editions were plentiful and fine editions were also produced, including one whose entire text was engraved by John Pine in copperplate. The poet James Thomson owned five editions of Horace's work and the physician James Douglas had five hundred books with Horace-related titles. Horace was often commended in periodicals such as The Spectator, as a hallmark of good judgement, moderation and manliness, a focus for moralising. His verses offered a fund of mottoes, such as "simplex munditiis" (elegance in simplicity), "splendide mendax" (nobly untruthful), "sapere aude" (dare to know), "nunc est bibendum" (now is the time to drink), "carpe diem" (seize the day, perhaps the only one still in common use today). These were quoted even in works as prosaic as Edmund Quincy's "A treatise of hemp-husbandry" (1765). The fictional hero Tom Jones recited his verses with feeling. His works were also used to justify commonplace themes, such as patriotic obedience, as in James Parry's English lines from an Oxford University collection in 1736:
Horatian-style lyrics were increasingly typical of Oxford and Cambridge verse collections for this period, most of them in Latin but some like the previous ode in English. John Milton's Lycidas first appeared in such a collection. It has few Horatian echoes yet Milton's associations with Horace were lifelong. He composed a controversial version of "Odes" 1.5, and Paradise Lost includes references to Horace's 'Roman' "Odes" 3.1–6 (Book 7 for example begins with echoes of "Odes" 3.4). Yet Horace's lyrics could offer inspiration to libertines as well as moralists, and neo-Latin sometimes served as a kind of discreet veil for the risqué. Thus for example Benjamin Loveling authored a catalogue of Drury Lane and Covent Garden prostitutes, in Sapphic stanzas, and an encomium for a dying lady "of salacious memory". Some Latin imitations of Horace were politically subversive, such as a marriage ode by Anthony Alsop that included a rallying cry for the Jacobite cause. On the other hand, Andrew Marvell took inspiration from Horace's "Odes" 1.37 to compose his English masterpiece Horatian Ode upon Cromwell's Return from Ireland, in which subtly nuanced reflections on the execution of Charles I echo Horace's ambiguous response to the death of Cleopatra (Marvell's ode was suppressed in spite of its subtlety and only began to be widely published in 1776). Samuel Johnson took particular pleasure in reading "The Odes". Alexander Pope wrote direct "Imitations" of Horace (published with the original Latin alongside) and also echoed him in "Essays" and The Rape of the Lock. He even emerged as "a quite Horatian Homer" in his translation of the "Iliad". Horace appealed also to female poets, such as Anna Seward ("Original sonnets on various subjects, and odes paraphrased from Horace", 1799) and Elizabeth Tollet, who composed a Latin ode in Sapphic meter to celebrate her brother's return from overseas, with tea and coffee substituted for the wine of Horace's sympotic settings:
Horace's "Ars Poetica" is second only to Aristotle's "Poetics" in its influence on literary theory and criticism. Milton recommended both works in his treatise "of Education". Horace's "Satires" and "Epistles" however also had a huge impact, influencing theorists and critics such as John Dryden. There was considerable debate over the value of different lyrical forms for contemporary poets, as represented on one hand by the kind of four-line stanzas made familiar by Horace's Sapphic and Alcaic "Odes" and, on the other, the loosely structured Pindarics associated with the odes of Pindar. Translations occasionally involved scholars in the dilemmas of censorship. Thus Christopher Smart entirely omitted "Odes" and re-numbered the remaining odes. He also removed the ending of "Odes" . Thomas Creech printed "Epodes" and in the original Latin but left out their English translations. Philip Francis left out both the English and Latin for those same two epodes, a gap in the numbering the only indication that something was amiss. French editions of Horace were influential in England and these too were regularly bowdlerized.
Most European nations had their own 'Horaces': thus for example Friedrich von Hagedorn was called "The German Horace" and Maciej Kazimierz Sarbiewski "The Polish Horace" (the latter was much imitated by English poets such as Henry Vaughan and Abraham Cowley). Pope Urban VIII wrote voluminously in Horatian meters, including an ode on gout.
Horace maintained a central role in the education of English-speaking elites right up until the 1960s. A pedantic emphasis on the formal aspects of language-learning at the expense of literary appreciation may have made him unpopular in some quarters yet it also confirmed his influence, a tension in his reception that underlies Byron's famous lines from "Childe Harold" (Canto iv, 77):
William Wordsworth's mature poetry, including the preface to "Lyrical Ballads", reveals Horace's influence in its rejection of false ornament and he once expressed "a wish / to meet the shade of Horace...". John Keats echoed the opening of Horace's "Epodes" 14 in the opening lines of "Ode to a Nightingale".
The Roman poet was presented in the nineteenth century as an honorary English gentleman. William Thackeray produced a version of "Odes" in which Horace's 'boy' became 'Lucy', and Gerard Manley Hopkins translated the boy innocently as 'child'. Horace was translated by Sir Theodore Martin (biographer of Prince Albert) but minus some ungentlemanly verses, such as the erotic "Odes" and "Epodes" 8 and 12. Edward Bulwer-Lytton produced a popular translation and William Gladstone also wrote translations during his last days as Prime Minister.
Edward FitzGerald's "Rubaiyat of Omar Khayyam", though formally derived from the Persian "ruba'i", nevertheless shows a strong Horatian influence, since, as one modern scholar has observed, ""...the quatrains inevitably recall the stanzas of the 'Odes', as does the narrating first person of the world-weary, ageing Epicurean Omar himself, mixing sympotic exhortation and 'carpe diem' with splendid moralising and 'memento mori' nihilism."" Matthew Arnold advised a friend in verse not to worry about politics, an echo of "Odes" , yet later became a critic of Horace's inadequacies relative to Greek poets, as role models of Victorian virtues, observing: ""If human life were complete without faith, without enthusiasm, without energy, Horace...would be the perfect interpreter of human life."" Christina Rossetti composed a sonnet depicting a woman willing her own death steadily, drawing on Horace's depiction of 'Glycera' in "Odes" and Cleopatra in "Odes" . A. E. Housman considered "Odes" , in Archilochian couplets, the most beautiful poem of antiquity and yet he generally shared Horace's penchant for quatrains, being readily adapted to his own elegiac and melancholy strain. The most famous poem of Ernest Dowson took its title and its heroine's name from a line of "Odes" , "Non sum qualis eram bonae sub regno Cynarae", as well as its motif of nostalgia for a former flame. Kipling wrote a famous parody of the "Odes", satirising their stylistic idiosyncrasies and especially the extraordinary syntax, but he also used Horace's Roman patriotism as a focus for British imperialism, as in the story "Regulus" in the school collection "Stalky & Co.", which he based on "Odes" . Wilfred Owen's famous poem, quoted above, incorporated Horatian text to question patriotism while ignoring the rules of Latin scansion. However, there were few other echoes of Horace in the war period, possibly because war is not actually a major theme of Horace's work.
Both W. H. Auden and Louis MacNeice began their careers as teachers of classics and both responded as poets to Horace's influence. Auden for example evoked the fragile world of the 1930s in terms echoing "Odes" , where Horace advises a friend not to let worries about frontier wars interfere with current pleasures.
The American poet, Robert Frost, echoed Horace's "Satires" in the conversational and sententious idiom of some of his longer poems, such as "The Lesson for Today" (1941), and also in his gentle advocacy of life on the farm, as in "Hyla Brook" (1916), evoking Horace's "fons Bandusiae" in "Ode" . Now at the start of the third millennium, poets are still absorbing and re-configuring the Horatian influence, sometimes in translation (such as a 2002 English/American edition of the "Odes" by thirty-six poets) and sometimes as inspiration for their own work (such as a 2003 collection of odes by a New Zealand poet).
Horace's "Epodes" have largely been ignored in the modern era, excepting those with political associations of historical significance. The obscene qualities of some of the poems have repulsed even scholars yet more recently a better understanding of the nature of Iambic poetry has led to a re-evaluation of the "whole" collection. A re-appraisal of the "Epodes" also appears in creative adaptations by recent poets (such as a 2004 collection of poems that relocates the ancient context to a 1950s industrial town).
The Oxford Latin Course textbooks use the life of Horace to illustrate an average Roman's life in the late Republic to Early Empire.
Microsoft Windows version history
Microsoft Windows was announced by Bill Gates on November 10, 1983. Microsoft introduced Windows as a graphical user interface for MS-DOS, which had been introduced a couple of years earlier. In the 1990s, the product line evolved from an operating environment into a complete, modern operating system over two lines of development, each with its own codebase.
The first versions of Windows (1.0 through 3.11) were graphical shells that ran on top of MS-DOS. Windows 95, though still based on MS-DOS, was its own operating system, using a 16-bit DOS-based kernel and a 32-bit user space. Windows 95 introduced many features that have been part of the product ever since, including the Start menu, the taskbar, and Windows Explorer (renamed File Explorer in Windows 8). In 1997, Microsoft released Internet Explorer 4, which included the (at the time) controversial Windows Desktop Update. It aimed to integrate Internet Explorer and the web into the user interface, and also brought many new features into Windows, such as the ability to display JPEG images as the desktop wallpaper and single-window navigation in Windows Explorer. In 1998, Microsoft released Windows 98, which included the Windows Desktop Update and Internet Explorer 4 by default. The inclusion of Internet Explorer 4 and the Desktop Update led to an antitrust case in the United States. Windows 98 also included plug and play, which allows devices to work when plugged in without requiring a system reboot or manual configuration, and USB support out of the box. Windows Me, the last DOS-based version of Windows, was aimed at consumers and released in 2000. It introduced System Restore, the Help and Support Center, and updated versions of the Disk Defragmenter and other system tools.
In 1993, Microsoft released Windows NT 3.1, the first version of the newly developed Windows NT operating system. Unlike the Windows 9x series of operating systems, it is a fully 32-bit operating system. NT 3.1 introduced NTFS, a file system designed to replace the older File Allocation Table (FAT), which was used by DOS and the DOS-based Windows operating systems. In 1996, Windows NT 4.0 was released, which includes a fully 32-bit version of Windows Explorer written specifically for it, making the operating system look and work much like Windows 95. Windows NT was originally designed to be used on high-end systems and servers; however, with the release of Windows 2000, many consumer-oriented features from Windows 95 and Windows 98 were included, such as the Windows Desktop Update, Internet Explorer 5, USB support and Windows Media Player. These consumer-oriented features were continued and further extended in Windows XP, which introduced a new theme called Luna, a more user-friendly interface, updated versions of Windows Media Player and Internet Explorer, and extended features from Windows Me, such as the Help and Support Center and System Restore. Windows Vista focused on securing the Windows operating system against computer viruses and other malicious software by introducing features such as User Account Control. New features included Windows Aero, updated versions of the standard games (e.g. Solitaire), Windows Movie Maker, and Windows Mail to replace Outlook Express. Despite this, Windows Vista was critically panned for its poor performance on older hardware and its at-the-time high system requirements. Windows 7 followed two and a half years later, and despite technically having higher system requirements, reviewers noted that it ran better than Windows Vista.
Windows 7 also removed many extra features, such as Windows Movie Maker, Windows Photo Gallery and Windows Mail, instead requiring users to download the separate Windows Live Essentials suite to gain those features and other online services. Windows 8 and Windows 8.1, a free upgrade for Windows 8, introduced many controversial changes, such as the replacement of the Start menu with the Start Screen, the removal of the Aero glass interface in favor of a flat, colored interface, and the introduction of "Metro" apps (later renamed Universal Windows Platform apps) and the Charms Bar user interface element, all of which received considerable criticism from reviewers.
The current version of Windows, Windows 10, reintroduced the Start menu and added the ability to run Universal Windows Platform apps in a window instead of always in full screen. Windows 10 was well-received, with many reviewers stating that Windows 10 is what Windows 8 should have been. Windows 10 also marks the last version of Windows to be traditionally released. Instead, "feature updates" are released twice a year with names such as "Creators Update" and "Fall Creators Update" that introduce new capabilities.
The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. The project was briefly codenamed "Interface Manager" before the windowing system was implemented (contrary to popular belief, this was not the original name for Windows). Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name "Windows" would be more appealing to customers.
Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS, and shared the latter's inherent flaws and errors.
The first version of Microsoft Windows included a simple graphics painting program called Windows Paint; Windows Write, a simple word processor; an appointment calendar; a card-filer; a notepad; a clock; a control panel; a computer terminal; a clipboard viewer; and a RAM driver. It also included the MS-DOS Executive and a game called Reversi.
Microsoft had worked with Apple Computer to develop applications for Apple's new Macintosh computer, which featured a graphical user interface. As part of the related business negotiations, Microsoft had licensed certain aspects of the Macintosh user interface from Apple; in later litigation, a district court summarized these aspects as "screen displays".
In the development of Windows 1.0, Microsoft intentionally limited its borrowing of certain GUI elements from the Macintosh user interface, to comply with its license. For example, windows were only displayed "tiled" on the screen; that is, they could not overlap or overlie one another.
Microsoft Windows version 2 came out on December 9, 1987, and proved slightly more popular than its predecessor.
Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.
Microsoft Windows received a major boost around this time when Aldus PageMaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the start of the success of Windows.
Versions 2.0x used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory.
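The 1-megabyte ceiling follows directly from how real-mode addresses are formed. A minimal sketch (illustrative only, not Microsoft code) of the 8086 segment:offset arithmetic:

```python
# Sketch of 8086 real-mode address arithmetic (illustrative, not Microsoft code).
# A real-mode address is a 16-bit segment and a 16-bit offset; the physical
# address is segment * 16 + offset, which caps the addressable range near 1 MB.

def real_mode_address(segment: int, offset: int) -> int:
    """Compute the physical address for a segment:offset pair."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# The highest address reachable is FFFF:FFFF.
top = real_mode_address(0xFFFF, 0xFFFF)
print(hex(top))        # 0x10ffef
print(top - 0xFFFFF)   # 65520 bytes past 1 MB: the High Memory Area on a 286+
```

On an 8086 the result wraps at 1 MB, but on a 286 or later those 65,520 bytes above the 1 MB line are reachable: this is the High Memory Area that Windows/286 2.1 (below) was the first version to support.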
In such a configuration, it could run under another multitasker like DESQview, which used the 286 protected mode.
Later, two new versions were released: Windows/286 2.1 and Windows/386 2.1. Like prior versions of Windows, Windows/286 2.1 used the real-mode memory model, but was the first version to support the High Memory Area. Windows/386 2.1 had a protected mode kernel with LIM-standard EMS emulation. All Windows and DOS-based applications at the time were real mode, running over the protected mode kernel by using the virtual 8086 mode, which was new with the 80386 processor.
Version 2.03, and later 3.0, faced challenges from Apple over its overlapping windows and other features Apple charged mimicked the ostensibly copyrighted "look and feel" of its operating system and "embodie[d] and generated a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of Apple's 189 claims of copyright infringement, and ruled that most of the remaining 10 were over uncopyrightable ideas.
Windows 3.0, released in May 1990, improved capabilities given to native applications. It also allowed users to better multitask older MS-DOS based software compared to Windows/386, thanks to the introduction of virtual memory.
Windows 3.0's user interface finally resembled a serious competitor to the user interface of the Macintosh computer. PCs had improved graphics by this time, due to VGA video cards, and the protected/enhanced mode allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in real, standard, or 386 enhanced modes, and was compatible with any Intel processor from the 8086/8088 up to the 80286 and 80386. This was the first version to run Windows programs in protected mode, although the 386 enhanced mode kernel was an enhanced version of the protected mode kernel in Windows/386.
Windows 3.0 received two updates. A few months after introduction, Windows 3.0a was released as a maintenance release, resolving bugs and improving stability. A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released in October 1991. This was bundled with "multimedia upgrade kits", comprising a CD-ROM drive and a sound card, such as the Creative Labs Sound Blaster Pro. This version was the precursor to the multimedia features available in Windows 3.1 (first released in April 1992) and later, and was part of Microsoft's specification for the Multimedia PC.
The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans. Support was discontinued on December 31, 2001.
During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the aforementioned protected mode of the Intel 80286 processor and up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.
IBM licensed Windows's GUI for OS/2 as Presentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical. Presentation Manager was not available with OS/2 until version 1.1, released in 1988. Its API was incompatible with Windows. Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the FAT file system.
By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They cooperated with each other in developing their PC operating systems, and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop a new operating system, OS/2 3.0, to later succeed OS/2 2.0.
This agreement soon fell apart however, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).
After an interim 1.3 version to fix up many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Still, much of the system had 16-bit code internally which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.
In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1 (first released in April 1992), which included several improvements to Windows 3.0, such as display of TrueType scalable fonts (developed jointly with Apple), improved disk performance in 386 Enhanced Mode, multimedia support, and bugfixes. It also removed Real Mode, and only ran on an 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992.
In 1992 and 1993, Microsoft released Windows for Workgroups (WfW), which was available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. There were two versions of Windows for Workgroups, WfW 3.1 and WfW 3.11. Unlike prior versions, Windows for Workgroups 3.11 ran in 386 Enhanced Mode only, and needed at least an 80386SX processor. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks.
All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software.
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation (later acquired by Compaq, now part of Hewlett-Packard). Microsoft hired him in October 1988 to create a successor to OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC called Mica, and when DEC dropped the project he brought the expertise and around 20 engineers with him to Microsoft. DEC also believed he brought Mica's code to Microsoft and sued. Microsoft eventually paid US$150 million and agreed to support DEC's Alpha CPU chip in NT.
Windows NT Workstation (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in beta form to developers at the July 1992 Professional Developers Conference in San Francisco. Microsoft announced at the conference its intention to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, codenamed Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated, and as a result NT and Chicago would not be unified until Windows XP; although the business-oriented Windows 2000 had already unified most of the system's internals, it was XP, sold to home consumers like Windows 95, that came to be viewed as the final unified OS. Parts of Cairo have still not made it into Windows as of 2020, most notably the WinFS file system, which was the much-touted Object File System of Cairo. Microsoft announced that it had discontinued the separate release of WinFS for Windows XP and Windows Vista and would gradually incorporate the technologies developed for WinFS into other products and technologies, notably Microsoft SQL Server.
Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although allowing for good performance and fuller exploitation of system resources, it was also resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines.
However, these same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming common). NT also had advanced network connectivity options and NTFS, an efficient file system. Windows NT version 3.51 was Microsoft's entry into this field, and took away market share from Novell (the dominant player) in the following years.
One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three levels of implementation: the complete one for Windows NT, a subset for Chicago (originally called Win32c) missing features primarily of interest to enterprise customers (at the time) such as security and Unicode support, and a more limited subset called Win32s which could be used on Windows 3.1 systems. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures. Windows NT was the first Windows operating system based on a hybrid kernel.
Windows NT 3.x went through three versions (3.1, 3.5, and 3.51); changes were primarily internal and reflected back-end changes. The 3.5 release added support for new types of hardware and improved performance and data reliability; the 3.51 release was primarily to update the Win32 APIs to be compatible with software being written for the Win32c APIs in what became Windows 95.
After Windows 3.11, Microsoft began to develop a new consumer oriented version of the operating system codenamed Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new object oriented GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance, and development time. Additionally, it was necessary to carry over design decisions from earlier versions of Windows for reasons of backward compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft had a double gain from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; second, although traces of DOS were never completely removed from the system and MS-DOS 7 would be loaded briefly as part of the boot process, Windows 95 applications ran solely in 386 enhanced mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me did not support more than 512 megabytes of physical RAM without obscure system tweaks.
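The 2 GB figure is a direct consequence of the flat 32-bit address space: half of the 4 GB of virtual addresses goes to the application, half to the system. A quick sketch of the arithmetic (the even split is the default convention, not a hard architectural rule):

```python
# Illustrative arithmetic for the Win32 flat virtual address space described
# above (a sketch; the 2 GB / 2 GB split is the default, not mandated by the CPU).

ADDRESS_BITS = 32
total_va = 2 ** ADDRESS_BITS       # 4 GiB of virtual addresses per process
user_va = total_va // 2            # lower 2 GiB available to the application
kernel_va = total_va - user_va     # upper 2 GiB reserved for the operating system

gib = 1024 ** 3
print(total_va // gib, user_va // gib, kernel_va // gib)   # 4 2 2
```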
IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped pre-installed with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into the United States v. Microsoft case, blaming unfair marketing tactics on Microsoft's part.
Microsoft went on to release five different versions of Windows 95:
OSR2, OSR2.1, and OSR2.5 were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The first Microsoft Plus! add-on pack was sold for Windows 95.
Windows NT 4.0 was the successor of 3.5 (1994) and 3.51 (1995). Microsoft released Windows NT 4.0 to manufacturing in July 1996, one year after the release of Windows 95. Major new features included the new Explorer shell from Windows 95, scalability and feature improvements to the core architecture, kernel, USER32, COM and MSRPC.
Windows NT 4.0 came in four versions:
On June 25, 1998, Microsoft released Windows 98 (code-named Memphis). It included new hardware drivers and the FAT32 file system, which supports disk partitions larger than 2 GB (first introduced in Windows 95 OSR2). USB support in Windows 98 was marketed as a vast improvement over Windows 95. The release continued the controversial inclusion of the Internet Explorer browser with the operating system that started with Windows 95 OEM Service Release 1. The action eventually led to the filing of the United States v. Microsoft case, dealing with the question of whether Microsoft was introducing unfair practices into the market in an effort to eliminate competition from other companies such as Netscape.
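The 2 GB partition limit that FAT32 lifted comes from the width of the cluster numbers: a volume can hold at most (usable cluster count) × (cluster size) bytes. A rough sketch, using illustrative figures (exact reserved-value counts vary slightly by specification):

```python
# Why FAT16 tops out near 2 GB while FAT32 does not (illustrative figures):
# volume size is bounded by (usable cluster count) * (cluster size).

KIB = 1024

fat16_clusters = 2 ** 16 - 12           # roughly 65,524 usable 16-bit cluster numbers
fat16_max = fat16_clusters * 32 * KIB   # 32 KiB was the largest standard cluster size
print(fat16_max / 1024 ** 3)            # just under 2 GiB

fat32_clusters = 2 ** 28 - 12           # FAT32 cluster numbers are 28 bits wide
fat32_max = fat32_clusters * 32 * KIB
print(fat32_max / 1024 ** 3)            # a theoretical cap in the terabyte range
```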
In 1999, Microsoft released Windows 98 Second Edition, an interim release. One of the more notable new features was the addition of Internet Connection Sharing, a form of network address translation, allowing several machines on a LAN (local area network) to share a single Internet connection. Hardware support through device drivers was increased, and this version shipped with Internet Explorer 5. Many minor problems that existed in the first edition were fixed, making it, according to many, the most stable release of the Windows 9x family.
Microsoft released Windows 2000 on February 17, 2000. It has the version number Windows NT 5.0. Windows 2000 has had four official service packs. It was successfully deployed both on the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were incorporated also, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack product activation.
While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users.
Windows 2000 was available in four editions:
In September 2000, Microsoft released a successor to Windows 98 called Windows Me, short for "Millennium Edition". It was the last DOS-based operating system from Microsoft. Windows Me introduced a new multimedia-editing application called Windows Movie Maker, came standard with Internet Explorer 5.5 and Windows Media Player 7, and debuted the first version of System Restore – a recovery utility that enables the operating system to revert system files back to a prior date and time. System Restore was a notable feature that would continue to thrive in all later versions of Windows.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. Many of the new features were available from the Windows Update site as updates for older Windows versions ("System Restore" and "Windows Movie Maker" were exceptions). Windows Me was criticized for stability issues, as well as for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition." Windows Me was the last operating system to be based on the Windows 9x (monolithic) kernel and MS-DOS.
On October 25, 2001, Microsoft released Windows XP (codenamed "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP. Windows XP uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core to the consumer market, to replace the aging Windows 9x branch. The initial release was met with considerable criticism, particularly in the area of security, leading to the release of three major Service Packs. Windows XP SP1 was released in September 2002, SP2 was released in August 2004 and SP3 was released in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP lasted longer as Microsoft's flagship operating system than any other version of Windows, from October 25, 2001 to January 30, 2007 when it was succeeded by Windows Vista.
Windows XP is available in a number of versions:
On April 25, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It has the version number NT 5.2. A few services not essential for server environments are disabled by default for stability reasons, most noticeably the "Windows Audio" and "Themes" services; users have to enable them manually to get sound or the "Luna" look as in Windows XP. Hardware acceleration for display is also turned off by default; users have to turn the acceleration level up themselves if they trust the display card driver.
In December 2005, Microsoft released Windows Server 2003 R2, which is actually Windows Server 2003 with SP1 (Service Pack 1), together with an add-on package.
Among the new features are a number of management features for branch offices, file serving, printing and company-wide identity integration.
Windows Server 2003 is available in six editions:
Windows Server 2003 R2, an update of Windows Server 2003, was released to manufacturing on December 6, 2005. It is distributed on two CDs, with one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally installable features for Windows Server 2003. The R2 update was released for all x86 and x64 versions, except Windows Server 2003 R2 Enterprise Edition, which was not released for Itanium.
On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003, x64 Editions in Standard, Enterprise and Datacenter SKUs. Windows XP Professional x64 Edition is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture.
Windows XP Professional x64 Edition is based on the Windows Server 2003 codebase, with the server features removed and client features added. Both "Windows Server 2003 x64" and Windows XP Professional x64 Edition use identical kernels.
Windows XP "Professional" "x64 Edition" is not to be confused with Windows XP "64-bit Edition", as the latter was designed for Intel Itanium processors. During the initial development phases, Windows XP Professional x64 Edition was named "Windows XP 64-Bit Edition for 64-Bit Extended Systems".
In July 2005, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is only available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs that are running Windows 95, 98, and Me that will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix.
While visually the same as Windows XP, it has some differences. For example, if the screen has been set to 16-bit color, the Windows 2000 recycle bin icon and some 16-bit XP icons will show. Paint and some games like Solitaire are not present either.
Windows Home Server (code-named Q, Quattro) is a server product based on Windows Server 2003, designed for consumer use. The system was announced on January 7, 2007 by Bill Gates. Windows Home Server can be configured and monitored using a console program that can be installed on a client PC. Such features as Media Sharing, local and remote drive backup and file duplication are all listed as features. The release of Windows Home Server Power Pack 3 added support for Windows 7 to Windows Home Server.
Windows Vista was released on November 30, 2006 to business customers; consumer versions followed on January 30, 2007. Windows Vista was intended to have enhanced security, introducing a new restricted user mode called User Account Control to replace the "administrator-by-default" philosophy of Windows XP. Vista was the target of much criticism and negative press and in general was not well regarded; this was seen as leading to the relatively swift release of Windows 7.
One major difference between Vista and earlier versions of Windows, Windows 95 and later, is that the original start button was replaced with the Windows icon in a circle (called the Start Orb). Vista also features new graphics features, the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess, Mahjong, and Purble Place), Internet Explorer 7, Windows Media Player 11, and a large number of underlying architectural changes. Windows Vista has the version number NT 6.0. Since its release, Windows Vista has had two service packs.
Windows Vista ships in six editions:
All editions (except Starter edition) are currently available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit version is breaking the 4 gigabyte memory barrier, which 32-bit computers cannot fully access.
Windows Server 2008, released on February 27, 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 builds on the technological and security advances first introduced with Windows Vista, and is significantly more modular than its predecessor, Windows Server 2003.
Windows Server 2008 ships in ten editions:
Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009. It was previously known by the codenames Blackcomb and Vienna. Windows 7 has the version number NT 6.1. Since its release, Windows 7 has had one service pack.
Some features of Windows 7 are faster booting, Device Stage, Windows PowerShell, less obtrusive User Account Control, multi-touch, and improved window management. Features included with Windows Vista and not in Windows 7 include the sidebar (although gadgets remain) and several programs that were removed in favor of downloading their Windows Live counterparts.
Windows 7 ships in six editions:
In some countries (Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, United Kingdom, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, and Switzerland), there are other editions that lack some features such as Windows Media Player, Windows Media Center and Internet Explorer; these editions carried names such as "Windows 7 N."
Microsoft focuses on selling Windows 7 Home Premium and Professional. All editions, except the Starter edition, are available in both 32-bit and 64-bit versions.
Unlike the corresponding Vista editions, the Professional and Enterprise editions are supersets of the Home Premium edition.
At the Professional Developers Conference (PDC) 2008, Microsoft also announced Windows Server 2008 R2, as the server variant of Windows 7. Windows Server 2008 R2 ships in 64-bit versions (x64 and Itanium) only.
In 2010, Microsoft released Windows Thin PC (WinTPC), a feature- and size-reduced, locked-down version of Windows 7 expressly designed to turn older PCs into thin clients. WinTPC is available to Software Assurance customers and relies on cloud computing in a business network. WinTPC has full wireless stack integration, though wireless operation may not match the performance of a wired connection.
Windows Home Server 2011, codenamed "Vail", was released on April 6, 2011. It is built on the Windows Server 2008 R2 code base and removed the Drive Extender drive-pooling technology found in the original Windows Home Server release. Windows Home Server 2011 is considered a "major release"; its predecessor was built on Windows Server 2003. WHS 2011 only supports x86-64 hardware.
Microsoft decided to discontinue Windows Home Server 2011 on July 5, 2012 while including its features into Windows Server 2012 Essentials. Windows Home Server 2011 was supported until April 12, 2016.
On October 26, 2012, Microsoft released Windows 8 to the public. One edition, Windows RT, runs on some system-on-a-chip devices with mobile 32-bit ARM (ARMv7) processors. Windows 8 features a redesigned user interface, designed to make Windows easier for touchscreen users. The interface introduced an updated Start menu known as the Start screen, and a new full-screen application platform. The desktop interface is also present for running windowed applications, although Windows RT will not run any desktop applications not included in the system. On the Building Windows 8 blog, it was announced that a computer running Windows 8 can boot up much faster than one running Windows 7. New features also include USB 3.0 support, the Windows Store, the ability to run from USB drives with Windows To Go, and others. Windows 8 was given the kernel number NT 6.2, with its successor 8.1 receiving the kernel number 6.3. Neither has had any service packs, although many consider Windows 8.1 to be a service pack for Windows 8.
Windows 8 is available in the following editions:
The first public preview of Windows Server 2012 was shown by Microsoft at the 2011 Microsoft Worldwide Partner Conference.
Windows 8 Release Preview and Windows Server 2012 Release Candidate were both released on May 31, 2012. Product development on Windows 8 was completed on August 1, 2012, and it was released to manufacturing the same day. Windows Server 2012 went on sale to the public on September 4, 2012. Windows 8 went on sale October 26, 2012.
Windows 8.1 and Windows Server 2012 R2 were released on October 17, 2013. Windows 8.1 is available as an update in the Windows store for Windows 8 users only and also available to download for clean installation. The update adds new options for resizing the live tiles on the Start screen.
Windows 10 is the current release of the Microsoft Windows operating system. Unveiled on September 30, 2014, it was released on July 29, 2015. It was distributed without charge to Windows 7 and 8.1 users for one year after release. A number of new features debuted in Windows 10, including Cortana, the Microsoft Edge web browser, the ability to run Windows Store apps in a window instead of fullscreen, virtual desktops, revamped core apps, Continuum, and a unified Settings app. Microsoft has announced that Windows 10 will be the last major version of its series of operating systems to be released. Instead, Microsoft will release major updates to the operating system via download or Windows Update, similar to the way updates are delivered in macOS.
So far, nine major versions of Windows 10 have been released, with version 19H2 being the latest stable release and 20H2 the latest preview version.
Windows Server 2016 is a release of the Microsoft Windows Server operating system that was unveiled on September 30, 2014. Windows Server 2016 was officially released at Microsoft's Ignite Conference, September 26–30, 2016.
Windows Server 2019 is a release of the Microsoft Windows Server operating system.
Windows Server 2019 was announced on March 20, 2018, and the first Windows Insider preview version was released on the same day. It was released for general availability on October 2, 2018.
On October 6, 2018, distribution of Windows version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. The software product life cycle for Server 2019 was reset in accordance with the new release date.
Helsinki
Helsinki is the capital, primate city, and most populous city of Finland. Located on the shore of the Gulf of Finland, it is the seat of the region of Uusimaa in southern Finland, and has a population of . The city's urban area has a population of , making it by far the most populous urban area in Finland as well as the country's most important center for politics, education, finance, culture, and research; Tampere, in the Pirkanmaa region to the north of Helsinki, is the second largest urban area in Finland. Helsinki is located north of Tallinn, Estonia, east of Stockholm, Sweden, and west of Saint Petersburg, Russia, and has close historical ties with these three cities.
Together with the cities of Espoo, Vantaa, and Kauniainen, and surrounding commuter towns, Helsinki forms the Greater Helsinki metropolitan area, which has a population of nearly 1.5 million. Often considered to be Finland's only metropolis, it is the world's northernmost metro area with over one million people as well as the northernmost capital of an EU member state. After Stockholm and Oslo, Helsinki is the third largest municipality in the Nordic countries. Finnish and Swedish are both official languages. The city is served by the international Helsinki Airport, located in the neighboring city of Vantaa, with frequent service to many destinations in Europe and Asia.
Helsinki was the World Design Capital for 2012, the venue for the 1952 Summer Olympics, and the host of the 52nd Eurovision Song Contest in 2007.
Helsinki has one of the world's highest urban standards of living. In 2011, the British magazine "Monocle" ranked Helsinki the world's most liveable city in its liveable cities index. In the Economist Intelligence Unit's 2016 liveability survey, Helsinki was ranked ninth among 140 cities.
According to a theory presented in the 1630s, at the time of Swedish colonisation of coastal areas of Finland, colonists from Hälsingland in central Sweden had arrived at what is now known as the Vantaa River and called it "Helsingå" ("Helsinge River"), which gave rise to the names of Helsinge village and church in the 1300s. This theory is questionable, because dialect research suggests that the settlers arrived from Uppland and nearby areas. Others have proposed the name as having been derived from the Swedish word "helsing", an archaic form of the word "hals" (neck), referring to the narrowest part of a river, the rapids. Other Scandinavian cities at similar geographic locations were given similar names at the time, e.g. Helsingør in Denmark and Helsingborg in Sweden.
When a town was founded in Forsby village in 1548, it was named "Helsinge fors", "Helsinge rapids". The name refers to the Vanhankaupunginkoski rapids at the mouth of the river. The town was commonly known as "Helsinge" or "Helsing", from which the contemporary Finnish name arose.
Official Finnish Government documents and Finnish language newspapers have used the name "Helsinki" since 1819, when the Senate of Finland moved itself into the city from Turku, the former capital of Finland. The decrees issued in Helsinki were dated with Helsinki as the place of issue. This is how the form Helsinki came to be used in written Finnish. As part of the Grand Duchy of Finland in the Russian Empire, Helsinki was known as "Gelsingfors" in Russian.
In Helsinki slang, the city is called "Stadi" (from the Swedish word "stad", meaning "city") or "Hesa" (short for Helsinki). "" is the Northern Sami name of Helsinki.
In the Iron Age the area occupied by present-day Helsinki was inhabited by Tavastians. They used the area for fishing and hunting, but due to a lack of archeological finds it is difficult to say how extensive their settlements were. Pollen analysis has shown that there were cultivating settlements in the area in the 10th century and surviving historical records from the 14th century describe Tavastian settlements in the area.
Swedes colonized the coastline of the Helsinki region in the late 13th century after the successful Second Crusade to Finland, which led to the defeat of the Tavastians.
Helsinki was established as a trading town by King Gustav I of Sweden in 1550 as the town of Helsingfors, which he intended to be a rival to the Hanseatic city of Reval (today known as Tallinn). In order to populate his newly founded town, the King issued an order to resettle the bourgeoisie of Porvoo, Ekenäs, Rauma and Ulvila into the town. Little came of the plans as Helsinki remained a tiny town plagued by poverty, wars, and diseases. The plague of 1710 killed the greater part of the inhabitants of Helsinki. The construction of the naval fortress Sveaborg (in Finnish "Viapori", today also "Suomenlinna") in the 18th century helped improve Helsinki's status, but it was not until Russia defeated Sweden in the Finnish War and annexed Finland as the autonomous Grand Duchy of Finland in 1809 that the town began to develop into a substantial city. Russians besieged the Sveaborg fortress during the war, and about one quarter of the town was destroyed in an 1808 fire.
Russian Emperor Alexander I of Russia moved the Finnish capital from Turku to Helsinki in 1812 to reduce Swedish influence in Finland, and to bring the capital closer to Saint Petersburg. Following the Great Fire of Turku in 1827, the Royal Academy of Turku, which at the time was the country's only university, was also relocated to Helsinki and eventually became the modern University of Helsinki. The move consolidated the city's new role and helped set it on a path of continuous growth. This transformation is highly apparent in the downtown core, which was rebuilt in the neoclassical style to resemble Saint Petersburg, mostly to a plan by the German-born architect C. L. Engel. As elsewhere, technological advancements such as railroads and industrialization were key factors behind the city's growth.
Despite the tumultuous nature of Finnish history during the first half of the 20th century (including the Finnish Civil War and the Winter War which both left marks on the city), Helsinki continued its steady development. A landmark event was the 1952 Olympic Games, held in Helsinki. Finland's rapid urbanization in the 1970s, occurring late relative to the rest of Europe, tripled the population in the metropolitan area, and the Helsinki Metro subway system was built. The relatively sparse population density of Helsinki and its peculiar structure have often been attributed to the lateness of its growth.
Called the "Daughter of the Baltic", Helsinki is on the tip of a peninsula and on 315 islands. The inner city is located on a southern peninsula, "Helsinginniemi" ("Cape of Helsinki), which is rarely referred to by its actual name, Vironniemi ("Cape of Estonia"). Population density in certain parts of Helsinki's inner city area is comparatively higher, reaching in the district of Kallio, but as a whole Helsinki's population density of ranks the city as rather sparsely populated in comparison to other European capital cities. Outside of the inner city, much of Helsinki consists of postwar suburbs separated by patches of forest. A narrow, long Helsinki Central Park, stretching from the inner city to Helsinki's northern border, is an important recreational area for residents. The City of Helsinki has about 11,000 boat berths and possesses over 14,000 hectares (34,595 acres; 54.1 sq mi) of marine fishing waters adjacent to the Capital Region. Some 60 fish species are found in this area and recreational fishing is popular.
Major islands in Helsinki include Seurasaari, Vallisaari, Lauttasaari, and Korkeasaari – the lattermost being the site of Finland's largest zoo. Other noteworthy islands are the fortress island of Suomenlinna (Sveaborg), the military island of Santahamina, and Isosaari. Pihlajasaari island is a favorite summer spot for gay men and naturists, comparable to Fire Island in New York City.
The Helsinki metropolitan area, also known as the Capital Region (Finnish: "Pääkaupunkiseutu", Swedish: "Huvudstadsregionen") comprises four municipalities: Helsinki, Espoo, Vantaa, and Kauniainen. The Helsinki urban area is considered to be the only metropolis in Finland. It has a population of over 1.1 million, and is the most densely populated area of Finland. The Capital Region spreads over a land area of and has a population density of . With over 20 percent of the country's population in just 0.2 percent of its surface area, the area's housing density is high by Finnish standards.
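The density claim can be sanity-checked with simple arithmetic (an illustrative sketch using the approximate shares quoted above): 20 percent of the population on 0.2 percent of the land area implies a density roughly 100 times the national average.

```python
# Back-of-envelope check of the Capital Region density claim
# (approximate shares taken from the text above).
pop_share = 0.20    # ~20% of Finland's population
area_share = 0.002  # ~0.2% of Finland's land area
density_ratio = pop_share / area_share
print(round(density_ratio))  # 100 (times the national average density)
```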
The Helsinki Metropolitan Area (Greater Helsinki) consists of the cities of Helsinki Capital Region and ten surrounding municipalities. The Metropolitan Area covers and has a population of over 1.4 million, or about a fourth of the total population of Finland. The metropolitan area has a high concentration of employment: approximately 750,000 jobs. Despite the intensity of land use, the region also has large recreational areas and green spaces. The Greater Helsinki area is the world's northernmost urban area with a population of over one million people, and the northernmost EU capital city.
The Helsinki urban area is an officially recognized urban area in Finland, defined by its population density. The area stretches throughout 11 municipalities, and is the largest such area in Finland, with a land area of and approximately 1.2 million inhabitants.
Helsinki has a humid continental climate (Köppen: "Dfb") similar to that of Hokkaido or coastal Nova Scotia. Owing to the mitigating influence of the Baltic Sea and North Atlantic Current (see also Extratropical cyclone), temperatures during the winter are higher than the northern location might suggest, with the average in January and February around .
Winters in Helsinki are notably warmer than in the north of Finland, and the snow season is much shorter in the capital, due to it being in extreme Southern Finland and the urban heat island effect. Temperatures below occur a few times a year at most. However, because of the latitude, days last 5 hours and 48 minutes around the winter solstice with very low sun (at noon, the sun is a little bit over 6 degrees in the sky), and the cloudy weather at this time of year exacerbates darkness. Conversely, Helsinki enjoys long daylight during the summer; during the summer solstice, days last 18 hours and 57 minutes.
The average maximum temperature from June to August is around . Due to the marine effect, especially during hot summer days, daily temperatures are a little cooler and night temperatures higher than further inland. The highest temperature ever recorded in the city was , on 28 July 2019 at Kaisaniemi weather station, breaking the previous record of that was observed in July 1945 at Ilmala weather station. The lowest temperature ever recorded in the city was , on 10 January 1987, although an unofficial low of −35 was recorded in December 1876. Helsinki Airport (in Vantaa, north of the Helsinki city centre) recorded a temperature of , on 29 July 2010, and a low of , on 9 January 1987. Precipitation comes from frontal passages and thunderstorms, the latter being most common in summer.
Helsinki is divided into three major areas: Helsinki Downtown, North Helsinki, and East Helsinki.
Carl Ludvig Engel, appointed to plan a new city centre on his own, designed several neoclassical buildings in Helsinki. The focal point of Engel's city plan was the Senate Square. It is surrounded by the Government Palace (to the east), the main building of Helsinki University (to the west), and (to the north) the large Helsinki Cathedral, which was finished in 1852, twelve years after Engel's death. Helsinki's epithet, "The White City of the North", derives from this construction era.
Helsinki is also home to numerous Art Nouveau-influenced (Jugend in Finnish) buildings belonging to the "kansallisromantiikka" (romantic nationalism) trend, designed in the early 20th century and strongly influenced by the "Kalevala", which was a common theme of the era. Helsinki's Art Nouveau style is also featured in central residential districts, such as Katajanokka and Ullanlinna. An important architect of the Finnish Art Nouveau style was Eliel Saarinen, whose architectural masterpiece was the Helsinki Central Station.
Helsinki also features several buildings by Finnish architect Alvar Aalto, recognized as one of the pioneers of architectural functionalism. However, some of his works, such as the headquarters of the paper company Stora Enso and the concert venue Finlandia Hall, have been subject to divided opinions from the citizens.
Functionalist buildings in Helsinki by other architects include the Olympic Stadium, the Tennis Palace, the Rowing Stadium, the Swimming Stadium, the Velodrome, the Glass Palace, the Töölö Sports Hall, and Helsinki-Malmi Airport. The sports venues were built to serve the 1940 Helsinki Olympic Games; the games were initially cancelled due to the Second World War, but the venues fulfilled their purpose in the 1952 Olympic Games. Many of them are listed by DoCoMoMo as significant examples of modern architecture. The Olympic Stadium and Helsinki-Malmi Airport are also catalogued by the Finnish National Board of Antiquities as cultural-historical environments of national significance.
Helsinki's neoclassical buildings were often used as a backdrop for scenes set to take place in the Soviet Union in many Cold War era Hollywood movies, when filming in the USSR was not possible. Some of them include "The Kremlin Letter" (1970), "Reds" (1981), and "Gorky Park" (1983). Because some streetscapes were reminiscent of Leningrad's and Moscow's old buildings, they too were used in movie productions. At the same time the government secretly instructed Finnish officials not to extend assistance to such film projects.
The start of the 21st century marked the beginning of highrise construction in Helsinki, when the city decided to allow the construction of skyscrapers. As of April 2017 there are no skyscrapers taller than 100 meters in the Helsinki area, but there are several projects under construction or planning, mainly in Pasila and Kalasatama. An international architecture competition for at least 10 high-rises to be built in Pasila is being held, with construction of the towers to start before 2020. In Kalasatama, the first 35-story (130 m, 427 ft) and 32-story (122 m, 400 ft) residential towers are already under construction. Later they will be joined by one 37-story (140 m, 459 ft), two 32-story (122 m, 400 ft), one 31-story (120 m, 394 ft), and one 27-story (100 m, 328 ft) residential building. In the Kalasatama area, there will be about 15 high-rises within 10 years.
As is the case with all Finnish municipalities, Helsinki's city council is the main decision-making organ in local politics, dealing with issues such as urban planning, schools, health care, and public transport. The council is chosen in the nationally held municipal elections, which are held every four years.
Helsinki's city council consists of eighty-five members. Following the most recent municipal elections in 2017, the three largest parties are the National Coalition Party (25), the Green League (21), and the Social Democratic Party (12).
The Mayor of Helsinki is Jan Vapaavuori.
At 53 percent of the population, women form a greater proportion of Helsinki residents than the national average of 51 percent. Helsinki's population density of 2,739.36 inhabitants per square kilometre makes it the most densely populated city in Finland. Life expectancy is slightly below the national averages for both sexes: 75.1 years for men compared with 75.7, and 81.7 years for women compared with 82.5.
Helsinki has experienced strong growth since the 1810s, when it replaced Turku as the capital of the Grand Duchy of Finland, which later became the sovereign Republic of Finland. The city continued its growth from that time on, with an exception during the Finnish Civil War. From the end of World War II up until the 1970s there was a massive exodus of people from the countryside to the cities of Finland, in particular Helsinki. Between 1944 and 1969 the population of the city nearly doubled from 275,000 to 525,600.
In the 1960s, the population growth of Helsinki began to decrease, mainly due to a lack of housing. Some residents began to move to the neighbouring cities of Espoo and Vantaa, resulting in increased population growth in both municipalities. Espoo's population increased ninefold in sixty years, from 22,874 people in 1950 to 244,353 in 2009. Vantaa saw an even more dramatic change in the same time span: from 14,976 in 1950 to 197,663 in 2009, a thirteenfold increase. These population changes prompted the municipalities of Greater Helsinki into more intense cooperation in areas such as public transportation – resulting in the foundation of HSL – and waste management. The increasing scarcity of housing and the higher costs of living in the capital region have pushed many daily commuters to find housing in formerly rural areas, and even further, to cities such as Lohja, Hämeenlinna, Lahti, and Porvoo.
Finnish and Swedish are the official languages of Helsinki. 79.1% of the citizens speak Finnish as their native language. 5.7% speak Swedish. The remaining 15.3% of the population speaks a native language other than Finnish or Swedish.
Helsinki slang is a regional dialect of the city. It combines influences mainly from Finnish and English, and has traditionally had strong Russian and Swedish influences. Finnish today is the common language of communication between Finnish speakers, Swedish speakers, and speakers of other languages (New Finns) in day-to-day public life. Swedish is commonly spoken in city or national agencies specifically aimed at Finland-Swedish speakers, such as the Social Services Department on Hämeentie or the Luckan Cultural Centre in Kamppi. Knowledge of Finnish is also essential in business and is usually a basic requirement in the employment market.
Finnish speakers surpassed Swedish speakers in 1890 to become the majority of the city's population. At the time, the population of Helsinki was 61,530.
As the crossroads of many international ports and Finland's largest airport, Helsinki is the global gateway to and from Finland. The city has Finland's largest immigrant population in both absolute and relative terms. There are over 140 nationalities represented in Helsinki. It is home to the world's largest Estonian community outside of Estonia.
Foreign citizens make up 9.6% of the population, while the total immigrant population makes up 16%. In 2018, 101,825 residents spoke a native language other than Finnish, Swedish, or one of the three Sami languages spoken in Finland, and 103,499 had a foreign background. The largest groups of residents not of Finnish background come from Russia (14,532), Estonia (9,065), and Somalia (6,845). One third of Finland's immigrant population lives in the city of Helsinki.
The number of people with a foreign mother tongue is expected to be 196,500 in 2035, or 26% of the population. 114,000 will speak non-European languages, which will be 15% of the population.
Greater Helsinki generates approximately one third of Finland's GDP, and GDP per capita is roughly 1.3 times the national average. Helsinki's economy rests on service-related IT and the public sector. Having shifted away from heavy industry, shipping companies still employ a substantial number of people.
The metropolitan area's gross value added per capita is 200% of the mean of 27 European metropolitan areas, equalling those of Stockholm and Paris. The gross value added annual growth has been around 4%.
83 of the 100 largest Finnish companies have their headquarters in Greater Helsinki. Two-thirds of the 200 highest-paid Finnish executives live in Greater Helsinki and 42% in Helsinki. The average income of the top 50 earners was 1.65 million euro.
The tap water is of excellent quality and is supplied by the Päijänne Water Tunnel, one of the world's longest continuous rock tunnels.
The Temppeliaukio Church is a Lutheran church in the Töölö neighborhood of the city. The church was designed by architects and brothers Timo and Tuomo Suomalainen and opened in 1969. Built directly into solid rock, it is also known as the Church of the Rock and Rock Church. The Cathedral of the Diocese of Helsinki is the Helsinki Cathedral, completed in 1852. It is a major landmark in the city and has 1,300 seats.
The largest Orthodox congregation is the Orthodox Church of Helsinki. It has 20,000 members. Its main church is the Uspenski Cathedral. The two largest Catholic congregations are Saint Henry's Cathedral Parish, with 4,552 members, established in 1860 and St. Mary Catholic Parish, with 4,107 members, established in 1854. The main Catholic churches are the Cathedral of Saint Henry and St. Mary's Church.
At the end of 2018, 52.4% of the population were affiliated to the Evangelical Lutheran Church of Finland. Helsinki is the least Lutheran municipality in Finland.
There are around 30 mosques in the Helsinki region. Many linguistic and ethnic groups such as Bangladeshis, Kosovars, Kurds and Bosniaks have established their own mosques. The largest congregation in both Helsinki and Finland is the Helsinki Islamic Center, established in 1995. It has over 2,800 members as of 2017, and it received €24,131 in government assistance.
In 2015, imam Anas Hajar estimated that on big celebrations around 10,000 Muslims visit mosques. In 2004, it was estimated that there were 8,000 Muslims in Helsinki, 1.5% of the population at the time.
The main synagogue of Helsinki is the Helsinki Synagogue, located in Kamppi. It has over 1,200 members, out of the 1,800 Jews in Finland. The congregation includes a synagogue, a Jewish kindergarten, a school, a library, a Jewish meat shop, two Jewish cemeteries and a retirement home. Many Jewish organizations and societies are based there, and the synagogue publishes the main Jewish magazine in Finland, HaKehila.
Helsinki has 190 comprehensive schools, 41 upper secondary schools, and 15 vocational institutes. Half of the 41 upper secondary schools are private or state-owned, the other half municipal. Higher-level education is given in eight universities (see the section "Universities" below) and four polytechnics.
Helsinki is one of the co-location centres of the Knowledge and Innovation Community (Future information and communication society) of The European Institute of Innovation and Technology (EIT).
The biggest historical museum in Helsinki is the National Museum of Finland, which displays a vast historical collection from prehistoric times to the 21st century. The museum building itself, a national romantic style neomedieval castle, is a tourist attraction. Another major historical museum is the Helsinki City Museum, which introduces visitors to Helsinki's 500-year history. The University of Helsinki also has many significant museums, including the Helsinki University Museum "Arppeanum" and the Finnish Museum of Natural History.
The Finnish National Gallery consists of three museums: Ateneum Art Museum for classical Finnish art, Sinebrychoff Art Museum for classical European art, and Kiasma Art Museum for modern art, in a building by architect Steven Holl. The old Ateneum, a neo-Renaissance palace from the 19th century, is one of the city's major historical buildings. All three museum buildings are state-owned through Senate Properties.
The city of Helsinki hosts its own art collection in the Helsinki Art Museum (HAM), primarily located in its Tennispalatsi gallery. Pieces outside of Tennispalatsi include about 200 public art pieces and all art held in property owned by the city.
Helsinki Art Museum will in 2020 launch the Helsinki Biennial, which will bring art to maritime Helsinki – in its first year to the island of Vallisaari.
The Design Museum is devoted to the exhibition of both Finnish and foreign design, including industrial design, fashion, and graphic design. Other museums in Helsinki include the Military Museum of Finland, Didrichsen Art Museum, Amos Rex Art Museum, and the Tram Museum.
Helsinki has three major theatres: The Finnish National Theatre, the Helsinki City Theatre, and the Swedish Theatre ("Svenska Teatern"). Other notable theatres in the city include the Alexander Theatre, "Q-teatteri", Savoy Theatre, KOM-theatre, and "Teatteri Jurkka".
Helsinki is home to two full-size symphony orchestras, the Helsinki Philharmonic Orchestra and the Finnish Radio Symphony Orchestra, both of which perform at the Helsinki Music Centre concert hall. Acclaimed contemporary composers Kaija Saariaho, Magnus Lindberg, Esa-Pekka Salonen, and Einojuhani Rautavaara, among others, were born and raised in Helsinki, and studied at the Sibelius Academy. The Finnish National Opera, the only full-time, professional opera company in Finland, is located in Helsinki. The opera singer Martti Wallén, one of the company's long-time soloists, was born and raised in Helsinki, as was mezzo-soprano Monica Groop.
Many widely renowned and acclaimed bands have originated in Helsinki, including Nightwish, Children of Bodom, Hanoi Rocks, HIM, Stratovarius, The 69 Eyes, Finntroll, Ensiferum, Wintersun, The Rasmus, Poets of the Fall, and Apocalyptica.
The city's main musical venues are the Finnish National Opera, the Finlandia concert hall, and the Helsinki Music Centre. The Music Centre also houses a part of the Sibelius Academy. Bigger concerts and events are usually held at one of the city's two big ice hockey arenas: the Hartwall Arena or the Helsinki Ice Hall. Helsinki has Finland's largest fairgrounds, the Messukeskus Helsinki.
Helsinki Arena hosted the Eurovision Song Contest 2007, the first Eurovision Song Contest arranged in Finland, following Lordi's win in 2006.
The Helsinki Festival is an annual arts and culture festival, which takes place every August (including the Night of the Arts).
Vappu is an annual carnival for students and workers.
In the fall of 2010, Finland's largest open-air art exhibition to date took place at the Senate Square: about 1.4 million people saw the international exhibition of "United Buddy Bears".
Helsinki was the 2012 World Design Capital, in recognition of the use of design as an effective tool for social, cultural, and economic development in the city. In choosing Helsinki, the World Design Capital selection jury highlighted Helsinki's use of 'Embedded Design', which has tied design in the city to innovation, "creating global brands, such as Nokia, Kone, and Marimekko, popular events, like the annual Helsinki Design Week, outstanding education and research institutions, such as the Aalto University School of Arts, Design and Architecture, and exemplary architects and designers such as Eliel Saarinen and Alvar Aalto".
Helsinki hosts many film festivals. Most are small in scale, while some have generated international interest. The most prominent of these is the Love & Anarchy film festival, also known as the Helsinki International Film Festival, which screens films across a wide spectrum. Night Visions, on the other hand, focuses on genre cinema, screening horror, fantasy, and science fiction films in very popular movie marathons that last the entire night. Another popular film festival is DocPoint, which focuses solely on documentary cinema.
Today, Finland has around 200 newspapers, 320 popular magazines, 2,100 professional magazines, 67 commercial radio stations, three digital radio channels, and one nationwide and five national public service radio channels.
Sanoma publishes Finland's journal of record, "Helsingin Sanomat", the tabloid "Ilta-Sanomat", the commerce-oriented "Taloussanomat", and the television channel Nelonen. Another Helsinki-based media house, Alma Media, publishes over thirty magazines, including the newspaper "Aamulehti", the tabloid "Iltalehti", and the commerce-oriented "Kauppalehti".
Finland's national public-broadcasting institution Yle operates five television channels and thirteen radio channels in both national languages. Yle is headquartered in the neighbourhood of Pasila. All TV channels are broadcast digitally, both terrestrially and on cable.
The commercial television channel MTV3 and commercial radio channel Radio Nova are owned by Nordic Broadcasting (Bonnier and Proventus Industrier).
Helsinki has a long tradition of sports: the city gained much of its initial international recognition during the 1952 Summer Olympics, and it has since hosted sporting events such as the first World Championships in Athletics in 1983 and again in 2005, and the European Athletics Championships in 1971, 1994, and 2012. Helsinki hosts successful local teams in both of the most popular team sports in Finland: football and ice hockey. Helsinki is home to HJK Helsinki, Finland's largest and most successful football club, and IFK Helsingfors, their local rivals with 7 championship titles. The fixtures between the two are commonly known as the Stadin derby. Helsinki's track and field club Helsingin Kisa-Veikot is also dominant within Finland. Ice hockey is popular among many Helsinki residents, who usually support either of the local clubs IFK Helsingfors (HIFK) or Jokerit. HIFK, with 14 Finnish championship titles, also plays in the highest bandy division, along with Botnia-69. The Olympic Stadium hosted the first ever Bandy World Championship in 1957.
Helsinki was elected host city of the 1940 Summer Olympics, but the Games were cancelled due to World War II; Helsinki instead hosted the 1952 Summer Olympics. The Olympics were a landmark event, symbolically and economically, for Helsinki and for Finland as a whole, which was recovering from the Winter War and the Continuation War fought against the Soviet Union. In 1983, Helsinki became the first city ever to host the World Championships in Athletics; by hosting the event again in 2005, it also became the first city to host the Championships twice. The Helsinki City Marathon has been held in the city every year since 1980, usually in August. A Formula 3000 race through the city streets was held on 25 May 1997. Helsinki hosted the European Figure Skating Championships in 2009 and the World Figure Skating Championships in 2017. The city will host the 2021 FIBA Under-19 Basketball World Cup.
The backbone of Helsinki's motorway network consists of three semicircular beltways, Ring I, Ring II, and Ring III, which connect expressways heading to other parts of Finland, and the western and eastern arteries of "Länsiväylä" and "Itäväylä" respectively. While variants of a "Keskustatunneli" tunnel under the city centre have been repeatedly proposed, the plan remains on the drawing board.
Many important Finnish highways leave Helsinki for various parts of Finland, most of them in the form of motorways. The most significant highways are:
Helsinki has some 390 cars per 1000 inhabitants. This is less than in cities of similar population and construction density, such as Brussels' 483 per 1000, Stockholm's 401, and Oslo's 413.
The Helsinki Central Railway Station is the main terminus of the rail network in Finland. Two rail corridors lead out of Helsinki, the Main Line to the north (to Tampere, Oulu, Rovaniemi), and the Coastal Line to the west (to Turku). The railway connection to the east branches from the Main Line outside of Helsinki at Kerava, and leads via Lahti to eastern parts of Finland and to Russia.
A majority of intercity passenger services in Finland originate or terminate at the Helsinki Central Railway Station. All major cities in Finland are connected to Helsinki by rail service, with departures several times a day. The most frequent service is to Tampere, with more than 25 intercity departures per day as of 2017. There are international services from Helsinki to Saint Petersburg and to Moscow in Russia. The Saint Petersburg to Helsinki route is operated with the Allegro high-speed trains.
A Helsinki to Tallinn Tunnel has been proposed and agreed upon by representatives of the cities. The rail tunnel would connect Helsinki to the Estonian capital Tallinn, further linking Helsinki to the rest of continental Europe by Rail Baltica.
Air traffic is handled primarily from Helsinki Airport, located north of Helsinki's downtown area in the neighbouring city of Vantaa. Helsinki's own airport, Helsinki-Malmi Airport, is mainly used for general and private aviation. Charter flights are available from Hernesaari Heliport.
Like many other cities, Helsinki was deliberately founded at a location on the sea in order to take advantage of shipping. The freezing of the sea imposed limitations on sea traffic up to the end of the 19th century. But for the last hundred years, the routes leading to Helsinki have been kept open even in winter with the aid of icebreakers, many of them built in the Helsinki Hietalahti shipyard. The arrival and departure of ships has also been a part of everyday life in Helsinki. Regular route traffic from Helsinki to Stockholm, Tallinn, and Saint Petersburg began as far back as 1837. Over 300 cruise ships and 360,000 cruise passengers visit Helsinki annually. There are international cruise ship docks in South Harbour, Katajanokka, West Harbour, and Hernesaari. Helsinki is the second busiest passenger port in Europe with approximately 11 million passengers in 2013. Ferry connections to Tallinn, Mariehamn, and Stockholm are serviced by various companies. Finnlines passenger-freight ferries to Gdynia, Poland; Travemünde, Germany; and Rostock, Germany are also available. St. Peter Line offers passenger ferry service to Saint Petersburg several times a week.
In the Helsinki metropolitan area, public transportation is managed by the Helsinki Regional Transport Authority, the metropolitan area transportation authority. The diverse public transport system consists of trams, commuter rail, the metro, bus lines, two ferry lines and a public bike system.
Helsinki's tram system has been in operation with electric drive continuously since 1900. Thirteen routes covering the inner part of the city are operated. As of 2017, the city is expanding the tram network, with several major tram line construction projects under way. These include the Jokeri light rail (replacing the 550 bus line), running roughly along Ring I around the city centre, and a new tramway to the island of Laajasalo.
The Helsinki Metro, opened in 1982, is the only metro system in Finland, although the Helsinki commuter rail trains operate at metro-like frequencies. In 2006, the construction of the long-debated extension of the metro into Western Helsinki and Espoo was approved, and the extension finally opened, after delays, in November 2017. An eastern extension into the planned new district of Östersundom and neighbouring Sipoo has also been seriously debated. Helsinki's metro system currently consists of 25 stations, 14 of them underground.
The commuter rail system includes purpose-built double track for local services in two rail corridors along intercity railways, and the Ring Rail Line, an urban double-track railway with a station at Helsinki Airport in Vantaa. Electric operation of commuter trains began in 1969, and the system has been gradually expanded since. Fifteen different services are operated as of 2017, some extending outside the Helsinki region. The frequent services run at a 10-minute headway in peak traffic.
Helsinki is officially the sister city of Beijing, China (since 2006). In addition, the city has a special partnership relation with:
Hobart
Hobart (Palawa kani: "Nipaluna") is the capital and most populous city of the Australian island state of Tasmania. With a population of approximately 240,342 (over 45% of Tasmania's population), it is the least populated Australian state capital city, and the second smallest if territories are taken into account (after Darwin, Northern Territory). The city is located in the state's south-east on the estuary of the River Derwent, making it the southernmost of Australia's capital cities. Its skyline is dominated by Mount Wellington, and its harbour forms the second-deepest natural port in the world, with much of the city's waterfront consisting of reclaimed land. The metropolitan area is often referred to as "Greater Hobart", to differentiate it from the City of Hobart, one of the five local government areas that cover the city.
Founded in 1804 as a British penal colony, Hobart is Australia's second-oldest capital city after Sydney, New South Wales. Prior to British settlement, the Hobart area had been occupied for possibly as long as 35,000 years by the semi-nomadic Mouheneener tribe, a sub-group of the Nuennone, or South-East tribe. The descendants of these Aboriginal Tasmanians often refer to themselves as 'Palawa'. Since its foundation, the city has expanded from the mouth of Sullivans Cove in a generally north-south direction along both banks of the River Derwent, stretching 22 km inland from the estuary at Storm Bay to the point where the river reverts to fresh water at Bridgewater. Penal transportation ended in the 1850s, after which the city experienced periods of growth and decline. The early 20th century saw an economic boom on the back of mining, agriculture and other primary industries, and the loss of men who served in the world wars was counteracted by an influx of immigration. Despite the rise in migration from Asia and other non-English-speaking regions, Hobart's population remains predominantly ethnically Anglo-Celtic, and has the highest percentage of Australian-born residents among Australia's capital cities.
Today, Hobart is the financial and administrative hub of Tasmania, serving as the home port for both Australian and French Antarctic operations and acting as a tourist destination, with over 1.192 million visitors in 2011–12. Well-known drawcards include its convict-era architecture, Salamanca Market and the Museum of Old and New Art (MONA), the Southern Hemisphere's largest private museum.
The first European settlement began in 1803 as a military camp at Risdon Cove on the eastern shores of the River Derwent, amid British concerns over the presence of French explorers. In 1804, along with the military, settlers and convicts from the abandoned Port Phillip settlement, the camp at Risdon Cove was moved by Captain David Collins to a better location at the present site of Hobart at Sullivans Cove. The city, initially known as "Hobart Town" or "Hobarton", was named after Lord Hobart, the British secretary of state for war and the colonies.
The area's indigenous inhabitants were members of the semi-nomadic "Mouheneener" tribe. Violent conflict with the European settlers, and the effects of diseases brought by them, dramatically reduced the aboriginal population, which was rapidly replaced by free settlers and the convict population. Charles Darwin visited Hobart Town in February 1836 as part of the "Beagle" expedition. He writes of Hobart and the Derwent estuary in his "Voyage of the Beagle":
...The lower parts of the hills which skirt the bay are cleared; and the bright yellow fields of corn, and dark green ones of potatoes, appear very luxuriant... I was chiefly struck with the comparative fewness of the large houses, either built or building. Hobart Town, from the census of 1835, contained 13,826 inhabitants, and the whole of Tasmania 36,505.
The River Derwent was one of Australia's finest deepwater ports and was the centre of the Southern Ocean whaling and sealing trades. The settlement rapidly grew into a major port, with allied industries such as shipbuilding.
Hobart Town became a city on 21 August 1842, and was renamed Hobart from the beginning of 1881.
Hobart is located on the estuary of the River Derwent in the state's south-east. Geologically, Hobart is built predominantly on Jurassic dolerite around the foothills, interspersed with smaller areas of Triassic siltstone and Permian mudstone. Hobart extends along both sides of the River Derwent: on the western shore, from the Derwent valley in the north through the flatter areas of Glenorchy, which rest on older Triassic sediment, into the hilly areas of New Town and Lenah Valley. Both of these areas rest on the younger Jurassic dolerite deposits, before stretching into lower areas such as the beaches of Sandy Bay in the south, in the Derwent estuary. South of the Derwent estuary lies Storm Bay and the Tasman Peninsula.
The Eastern Shore also extends from the Derwent valley area in a southerly direction hugging the Meehan Range in the east before sprawling into flatter land in suburbs such as Bellerive. These flatter areas of the eastern shore rest on far younger deposits from the Quaternary. From there the city extends in an easterly direction through the Meehan Range into the hilly areas of Rokeby and Oakdowns, before reaching into the tidal flatland area of Lauderdale.
Hobart has access to a number of beach areas, including those in the Derwent estuary itself (Sandy Bay, Cornelian Bay, Nutgrove, Kingston, Bellerive, and Howrah Beaches) as well as many more in Frederick Henry Bay, such as Seven Mile, Roaches, Cremorne, Clifton, and Goats Beaches.
Hobart has a mild temperate oceanic climate (Köppen: "Cfb"). The highest temperature on record was observed on 4 January 2013, and the lowest on 25 June 1972 and 11 July 1981. Annually, Hobart receives 40.8 clear days. Compared to other major Australian cities, Hobart has the fewest daily average hours of sunshine, with 5.9 hours per day. However, during the summer it has the most hours of daylight of any Australian city, with 15.3 hours on the summer solstice.
Although Hobart itself rarely receives snow during the winter (the city's geographic position keeps temperatures from plummeting far below zero Celsius), the adjacent kunanyi/Mount Wellington is frequently seen with a snowcap in winter. Mountain snow cover has also been known to occur during the other seasons. During the 20th century, the city itself received snowfalls at sea level on average only once every 15 years; however, outer suburbs lying higher on the slopes of Mount Wellington receive snow more often, owing to cold air masses arriving from Antarctica combined with their higher altitude. These snow-bearing winds often carry on through Tasmania and Victoria to the Snowy Mountains in northern Victoria and southern New South Wales.
The average temperature of the sea is at its coolest in September and warmest in February.
At the 2016 census, there were 222,356 people in the Greater Hobart area making it the second least populated capital city in Australia. The City of Hobart local government area had a population of 50,439.
The most common occupation categories were professionals (22.6%), clerical and administrative workers (14.7%), technicians and trades workers (13.3%), community and personal service workers (12.8%), and managers (11.3%). The median weekly household income was $1,234, compared with $1,438 nationally.
At the 2016 census, the most commonly nominated ancestries were:
20.2% of the population was born overseas at the 2016 census. The five largest groups of overseas-born were from England (3.6%), Mainland China (1.1%), New Zealand (0.9%), India (0.6%) and Germany (0.5%).
3.8% of the population, or 8,534 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016.
At the 2016 census, 86.5% of the population spoke only English at home. The other languages most commonly spoken at home were Mandarin (1.3%), Greek (0.5%), Nepali (0.4%), German (0.4%) and Italian (0.3%).
In the 2016 census, 52.1% of Greater Hobart residents who responded to the question specified a Christian religion. Major religious affiliations were Anglican (19.8%), Catholic (17.0%) and Uniting Church (2.5%). In addition, 39.9% specified "No Religion" and 9.3% did not answer.
Hobart has a small Mormon community of around 642 (2011), with meetinghouses in Glenorchy, Rosny, and Glen Huon. There is also a synagogue where the Jewish community, of around 111 (2001), or 0.05% of the Hobart population, worships. Hobart has a Bahá'í community, with a Bahá'í Centre of Learning, located within the city.
In 2013, Hillsong Church established a Hillsong Connect campus in Hobart.
Shipping is significant to the city's economy. Hobart is the home port for the Antarctic activities of Australia and France. The port loads around 2,000 tonnes of Antarctic cargo a year for the Australian research vessel "Aurora Australis." The city is also a popular cruise ship destination during the summer months, with 47 such ships docking during the course of the 2016–17 summer season.
The city also supports many other industries. Major local employers include catamaran builder Incat, zinc refinery Nyrstar, Cascade Brewery, Cadbury's Chocolate Factory, Norske Skog and Wrest Point Casino. The city also supports a host of light-industry manufacturers, as well as a range of redevelopment projects, including the $689 million Royal Hobart Hospital Redevelopment, the state's largest-ever health infrastructure project. Tourism is a significant part of the economy, with visitors coming to the city to explore its historic inner suburbs and nationally acclaimed restaurants and cafes, as well as its vibrant music and nightlife culture. The two major drawcards are the weekly market in Salamanca Place and the Museum of Old and New Art. The city is also used as a base from which to explore the rest of Tasmania.
The last 15–20 years have seen Hobart's wine industry thrive, as many vineyards have developed in countryside areas outside the city in the Coal River Wine Region and along the D'Entrecasteaux Channel, including Moorilla Estate at Berriedale, one of the most awarded vineyards in Australia.
Hobart is an Antarctic gateway city, with geographical proximity to East Antarctica and the Southern Ocean. Infrastructure is provided by the port of Hobart for scientific research and cruise ships, and Hobart International Airport supports an Antarctic Airlink to Wilkins Runway at Casey Station. Hobart is a logistics point for the French icebreaker "L'Astrolabe".
Hobart is the home port for the Australian and French Antarctic programs, and provides port services for other visiting Antarctic nations and Antarctic cruise ships. Antarctic and Southern Ocean expeditions are supported by a specialist cluster offering cold climate products, services and scientific expertise. The majority of these businesses and organisations are members of the Tasmanian polar network, supported in part by the Tasmanian State Government.
Tasmania has a high concentration of Antarctic and Southern Ocean scientists. Hobart is home to the following Antarctic and Southern Ocean scientific institutions:
Hobart serves as a focal point for tourism in the state of Tasmania. In 2016, Hobart received 1.8 million visitors, surpassing both Perth and Canberra and tying with Brisbane.
The Royal Tasmanian Botanical Gardens is a popular recreation area a short distance from the city centre. It is the second-oldest Botanic Gardens in Australia and holds extensive significant plant collections.
Hadley's Orient Hotel, on Hobart's Murray Street, is the oldest continuously operating hotel in Australia.
kunanyi/Mount Wellington, accessible by passing through Fern Tree, is the dominant feature of Hobart's skyline. Indeed, many descriptions of Hobart have used the phrase "nestled amidst the foothills", so undulating is the landscape. At 1,271 metres, the mountain has its own ecosystems, is rich in biodiversity and plays a large part in determining the local weather.
The Tasman Bridge is also a uniquely important feature of the city, connecting the two shores of Hobart and visible from many locations. The Hobart Synagogue is the oldest synagogue in Australia and a rare surviving example of an Egyptian Revival synagogue.
Hobart is known for its well-preserved colonial-era architecture, much of it dating back to the Georgian and Victorian periods, giving the city a distinctly "Old World" feel. For locals, this was once a source of discomfiture about the city's convict past, but it is now a drawcard for tourists. Regions within the city centre, such as Salamanca Place, contain many of the city's heritage-listed buildings. Historic homes and mansions also exist in the suburbs, and many of the inner-city neighbourhoods are dotted with weatherboard cottages and two-storey Victorian houses.
Kelly's Steps were built in 1839 by shipwright and adventurer James Kelly to provide a short-cut from Kelly Street and Arthur Circus in Battery Point to the warehouse and dockyards district of Salamanca Place. In 1835, John Lee Archer designed and oversaw the construction of the sandstone Customs House, facing Sullivans Cove. Completed in 1840, it was used as Tasmania's parliament house, and is now commemorated by a pub bearing the same name (built in 1844) which is frequented by yachtsmen after they have completed the Sydney to Hobart yacht race.
Hobart is also home to many historic churches. The Scots Church (formerly known as St Andrew's) was built in Bathurst Street from 1834 to 1836, and a small sandstone building within the churchyard was used as the city's first Presbyterian Church. The Salamanca Place warehouses and the Theatre Royal were also constructed in this period. The Greek revival St George's Anglican Church in Battery Point was completed in 1838, and a classical tower, designed by James Blackburn, was added in 1847. St Joseph's was built in 1840. St David's Cathedral, Hobart's first cathedral, was consecrated in 1874.
Hobart has very few high-rise buildings in comparison to other Australian cities. This is partly a result of height limits imposed due to Hobart's proximity to the River Derwent and Mount Wellington.
Hobart is home to the Tasmanian Symphony Orchestra, which is resident at the Federation Concert Hall on the city's waterfront. It offers a year-round program of concerts and is thought to be one of the finest small orchestras in the world. Hobart also plays host to the University of Tasmania's acclaimed Australian International Symphony Orchestra Institute (AISOI) which brings pre-professional advanced young musicians to town from all over Australia and internationally. The AISOI plays host to a public concert season during the first two weeks of December every year focusing on large symphonic music. Like the Tasmanian Symphony Orchestra, the AISOI uses the Federation Concert Hall as its performing base.
Hobart is home to Australia's oldest theatre, the Theatre Royal, as well as the Playhouse theatre, the Backspace theatre and many smaller stage theatres. It also has three Village Cinema complexes, one each in Hobart CBD, Glenorchy and Rosny, with the possibility of a fourth being developed in Kingston. The State Cinema in North Hobart specialises in arthouse and foreign films.
Australia's first published novel, "Quintus Servinton", was written and published in Hobart. It was written by a convict, Henry Savery, in a Hobart prison cell in 1830, while he was serving a sentence for forgery. A generally autobiographical work, it is the story of a well-educated man from a relatively well-to-do family who makes poor choices in life.
The city has also long been home to a thriving classical, jazz, folk, punk, hip-hop, electro, metal and rock music scene. Internationally recognised musicians such as metal acts Striborg and Psycroptic, indie-electro bands The Paradise Motel and The Scientists of Modern Music, singer-songwriters Sacha Lucashenko (of The Morning After Girls), Michael Noga (of The Drones), and Monique Brumby, two-thirds of indie rock band Love of Diagrams, post-punk band Sea Scouts, theremin player Miles Brown, blues guitarist Phil Manning (of blues-rock band Chain), and power-pop group The Innocents are all successful expatriates. In addition, founding member of Violent Femmes, Brian Ritchie, now calls Hobart home and has formed a local band, The Green Mist. Ritchie also curates the annual international arts festival MONA FOMA, held at Salamanca Place's waterfront venue, Princes Wharf Shed No. 1. Hobart hosts many significant festivals, including summer's Taste of Tasmania celebrating local produce, wine and music, "Dark Mofo" marking the winter solstice, Australia's premier festival celebration of voice, the "Festival of Voices", and Tasmania's biennial international arts festival Ten Days On The Island. Other festivals, including the "Hobart Fringe Festival", the Hobart Summer Festival, the Southern Roots Festival, the Falls Festival in Marion Bay and the Soundscape Festival, also capitalise on Hobart's artistic communities.
Hobart is home to the Tasmanian Museum and Art Gallery. The Meadowbank Estate winery and restaurant features a floor mural by Tom Samek, part funded by the Federal Government. The Museum of Old and New Art (MONA) opened in 2011 to coincide with the third annual MONA FOMA festival. The multi-storey MONA gallery was built directly underneath the historic Sir Roy Grounds courtyard house, overlooking the River Derwent. This building serves as the entrance to the MONA Gallery.
Hobart has a growing street art scene thanks to a program called "Hobart Walls", which was launched in association with the "Vibrance Festival", an annual mural-painting event. The City of Hobart and the Vibrance Festival launched Hobart's first legal street art wall in Bidencopes Lane in 2018, allowing any artist to paint there on any day of the week, provided they sign up for a permit and paint between 9 am and 10 pm.
Designed by the prolific architect Sir Roy Grounds, the 17-storey Wrest Point Hotel Casino in Sandy Bay, opened as Australia's first legal casino in 1973.
The city's nightlife primarily revolves around Salamanca Place, the waterfront area, Elizabeth St in North Hobart and Sandy Bay, but popular pubs, bars and nightclubs exist around the city as well. Major national and international music events are usually held at the Derwent Entertainment Centre, or the Casino. Popular restaurant strips include Elizabeth Street in North Hobart, and Salamanca Place near the waterfront. These include numerous ethnic restaurants including Chinese, Thai, Greek, Pakistani, Italian, Indian and Mexican. The major shopping street in the CBD is Elizabeth Street, with the pedestrianised Elizabeth Mall and the General Post Office.
Close Shave, one of Australia's longest serving male a cappella quartets, is based in Hobart.
Hobart is internationally famous among the yachting community as the finish of the Sydney to Hobart Yacht Race which starts in Sydney on Boxing Day (the day after Christmas Day). The arrival of the yachts is celebrated as part of the Hobart Summer Festival, a food and wine festival beginning just after Christmas and ending in mid-January. The Taste of Tasmania is a major part of the festival, where locals and visitors can taste fine local and international food and wine.
The city is the finishing point of the Targa Tasmania rally car event, which has been held annually in April since 1991.
The annual Tulip Festival at the Royal Tasmanian Botanical Gardens is a popular Spring celebration in the city.
The Australian Wooden Boat Festival is a biennial event held in Hobart celebrating wooden boats. It is held concurrently with the Royal Hobart Regatta, which began in 1830 and is therefore Tasmania's oldest surviving sporting event.
Most professional Hobart-based sports teams represent Tasmania as a whole rather than exclusively the city.
Cricket is popular in the city. The Tasmanian Tigers cricket team plays its home games at Bellerive Oval on the Eastern Shore. A newer team, the Hobart Hurricanes, represents the city in the Big Bash League. Bellerive Oval has been the breeding ground of some world-class cricketers, including the former Australia captain Ricky Ponting.
Despite Australian rules football's huge popularity in the state of Tasmania, the state does not have a team in the Australian Football League. However, a bid for a Tasmanian AFL team is a popular topic among football fans, and the State government is one of the potential sponsors of such a team. Local domestic club football is still played: the Tasmanian State League features five clubs from Hobart, and other leagues such as the Southern Football League and the Old Scholars Football Association are also contested each winter.
The city has two local rugby league football teams (Hobart Tigers and South Hobart Storm) that compete in the Tasmanian Rugby League.
Tasmania is not represented by teams in the NRL, Super Rugby, ANZ Championship, A-League, or NBL. However, the Hobart Chargers do represent Hobart in the second-tier South East Australian Basketball League. Besides the bid for an AFL club, which was passed over in favour of a second Queensland team despite several major local businesses and the Premier campaigning for a club, there is also a Hobart bid for entry into the A-League.
Hockey Tasmania has a men's team (the Tasmanian Tigers) and a women's team (the Van Demons) competing in the Australian Hockey League. Hobart hosted the FIH junior men's world cup in 2001.
The city co-hosted the basketball FIBA Oceania Championship 1975.
Five free-to-air television stations service Hobart:
Each station broadcasts a primary channel and several multichannels.
Hobart is served by twenty-eight digital free-to-air television channels:
The majority of pay television services are provided by Foxtel via satellite, although other smaller pay television providers do service Hobart.
Commercial radio stations licensed to cover the Hobart market include Triple M Hobart, HIT 100.9 and 7HO FM. Local community radio stations include Christian radio station Ultra106five, Edge Radio and 92FM which targets the wider community with specialist programmes. The five ABC radio networks available on analogue radio broadcast to Hobart via 936 ABC Hobart, Radio National, Triple J, NewsRadio and ABC Classic FM. Hobart is also home to the video creation company Biteable.
Hobart's major newspaper is "The Mercury", which was founded by John Davies in 1854 and has been continually published ever since. The paper is owned and operated by Rupert Murdoch's News Limited.
The Greater Hobart metropolitan area consists of five local government areas, of which three, the City of Hobart, City of Glenorchy and City of Clarence, are designated as cities. Hobart also includes the urbanised local governments of the Municipality of Kingborough and Municipality of Brighton. Each local government services all the suburbs within its geographical boundaries and is responsible for urban and residential planning in its own area, up to a certain scale, as well as waste management and mains water storage.
Most citywide events, such as the Taste of Tasmania and the Hobart Summer Festival, are funded by the Tasmanian State Government as a joint venture with the Hobart City Council. Urban planning of the Hobart CBD, in particular heritage-listed areas such as Sullivans Cove, is also intensely scrutinised by the State Government, which operates out of Parliament House on the waterfront.
Hobart is home to the main campus of the University of Tasmania, located in Sandy Bay. On-site accommodation colleges include Christ College, Jane Franklin Hall and St John Fisher College. Other campuses are in Launceston and Burnie.
The Greater Hobart area contains 122 primary, secondary and pre-tertiary (college) schools distributed throughout the Clarence, Glenorchy and Hobart City Councils and the Kingborough and Brighton Municipalities. These schools are a mix of public, Catholic, private and independent institutions, with the heaviest concentration in the more densely populated west around the Hobart city core. TasTAFE operates a total of seven polytechnic campuses within the Greater Hobart area that provide vocational education and training.
Royal Hobart Hospital is a major public hospital in central Hobart with 501 beds, which also serves as a teaching hospital for the University of Tasmania.
A private hospital, Hobart Private Hospital, is located adjacent to it and operated by the Australian healthcare provider Healthscope. The company also owns another hospital in the city, St. Helen's Private Hospital, which features a mother-baby unit.
The only public transportation within the city of Hobart is a network of Metro Tasmania buses funded by the Tasmanian Government, supplemented by a small number of private bus services. Like many large Australian cities, Hobart once operated passenger tram services; the tramway closed in the early 1960s, while a trolleybus network of six routes ran until 1968. The tram tracks are still visible in the older streets of Hobart.
Suburban passenger trains, run by the Tasmanian Government Railways, ceased in 1974, and the intrastate passenger service, the Tasman Limited, ceased running in 1978. Recently, though, there has been a push from the city, and increasingly from government, to establish a light rail network along existing tracks in a north-south corridor, intended to be fast, efficient and eco-friendly, to help relieve frequent traffic congestion in the Hobart CBD.
The main arterial routes within the urban area are the Brooker Highway to Glenorchy and the northern suburbs; the Tasman Bridge and Bowen Bridge across the river to Rosny and the Eastern Shore; the East Derwent Highway to Lindisfarne and Geilston Bay and northwards to Brighton; the South Arm Highway to Howrah, Rokeby, Lauderdale and Opossum Bay; and the Southern Outlet south to Kingston and the D'Entrecasteaux Channel. Leaving the city, motorists can travel the Lyell Highway to the west coast, the Midland Highway to Launceston and the north, the Tasman Highway to the east coast, or the Huon Highway to the far south.
Ferry services from Hobart's Eastern Shore into the city were once a common form of public transportation, but a lack of government funding, together with a lack of interest from the private sector, led to the demise of a regular commuter ferry service, leaving Hobart's commuters relying solely on automobiles and buses. A water taxi service operating from the Eastern Shore into Hobart does, however, provide an alternative to the Tasman Bridge.
Hobart is served by Hobart International Airport, with flights to and from Melbourne (Qantas, Virgin Australia and Jetstar Airways); Sydney (Qantas, Jetstar and Virgin); Brisbane (Virgin); Perth (Virgin); and Adelaide (Jetstar). The smaller Cambridge Aerodrome mainly serves small charter airlines offering local tourist flights. In the past decade, Hobart International Airport has received a major upgrade and is now a first-class airport facility.
In 2009, it was announced that Hobart Airport would receive more upgrades, including a first floor, aerobridges (currently, passengers must walk on the tarmac) and shopping facilities. Possible new international flights to Asia and New Zealand, and possible new domestic flights to Darwin and Cairns have been proposed. A second runway, possibly to be constructed in the next 15 years, would assist with growing passenger numbers to Hobart. Hobart Control Tower may be renovated and fitted with new radar equipment, and the airport's carpark may be extended further. Also, new facilities will be built just outside the airport. A new service station, hotel and day care centre have already been built and the road leading to the airport has been maintained and re-sealed. In 2016, work began on a 500-metre extension of the existing runway in addition to a $100 million upgrade of the airport. The runway extension is expected to allow international flights to land and increase air-traffic with Antarctica. This upgrade was, in part, funded under a promise made during the 2013 federal election by the Abbott government.
Hesiod
Hesiod (; "Hēsíodos") was an ancient Greek poet generally thought to have been active between 750 and 650 BC, around the same time as Homer. He is generally considered the first written poet in the Western tradition to present himself as an individual persona with an active role to play in his subject. Ancient authors credited Hesiod and Homer with establishing Greek religious customs. Modern scholars refer to him as a major source on Greek mythology, farming techniques, early economic thought (he is sometimes considered history's first economist), archaic Greek astronomy and ancient time-keeping.
The dating of Hesiod's life is a contested issue in scholarly circles ("see § Dating below"). Epic narrative allowed poets like Homer no opportunity for personal revelations. However, Hesiod's extant work comprises several didactic poems in which he went out of his way to let his audience in on a few details of his life. There are three explicit references in "Works and Days", as well as some passages in his "Theogony" that support inferences made by scholars. The former poem says that his father came from Cyme in Aeolis (on the coast of Asia Minor, a little south of the island Lesbos) and crossed the sea to settle at a hamlet, near Thespiae in Boeotia, named Ascra, "a cursed place, cruel in winter, hard in summer, never pleasant" ("Works" 640). Hesiod's patrimony there, a small piece of ground at the foot of Mount Helicon, occasioned lawsuits with his brother Perses, who seems, at first, to have cheated him of his rightful share thanks to corrupt authorities or "kings" but later became impoverished and ended up scrounging from the thrifty poet ("Works" 35, 396).
Unlike his father, Hesiod was averse to sea travel, but he once crossed the narrow strait between the Greek mainland and Euboea to participate in funeral celebrations for one Athamas of Chalcis, and there won a tripod in a singing competition. He also describes a meeting between himself and the Muses on Mount Helicon, where he had been pasturing sheep when the goddesses presented him with a laurel staff, a symbol of poetic authority ("Theogony" 22–35). Fanciful though the story might seem, the account has led ancient and modern scholars to infer that he was not a professionally trained rhapsode, or he would have been presented with a lyre instead.
Some scholars have seen Perses as a literary creation, a foil for the moralizing that Hesiod develops in "Works and Days", but there are also arguments against that theory. For example, it is quite common for works of moral instruction to have an imaginative setting, as a means of getting the audience's attention, but it could be difficult to see how Hesiod could have travelled around the countryside entertaining people with a narrative about himself if the account was known to be fictitious. Gregory Nagy, on the other hand, sees both "Pérsēs" ("the destroyer" from , "pérthō") and "Hēsíodos" ("he who emits the voice" from , "híēmi" and , "audḗ") as fictitious names for poetical personae.
It might seem unusual that Hesiod's father migrated from Asia Minor westwards to mainland Greece, the opposite direction to most colonial movements at the time, and Hesiod himself gives no explanation for it. However around 750 BC or a little later, there was a migration of seagoing merchants from his original home in Cyme in Asia Minor to Cumae in Campania (a colony they shared with the Euboeans), and possibly his move west had something to do with that, since Euboea is not far from Boeotia, where he eventually established himself and his family. The family association with Aeolian Cyme might explain his familiarity with eastern myths, evident in his poems, though the Greek world might have already developed its own versions of them.
In spite of Hesiod's complaints about poverty, life on his father's farm could not have been too uncomfortable if "Works and Days" is anything to judge by, since he describes the routines of prosperous yeomanry rather than peasants. His farmer employs a friend ("Works and Days" 370) as well as servants (502, 573, 597, 608, 766), an energetic and responsible ploughman of mature years (469 ff.), a slave boy to cover the seed (441–6), a female servant to keep house (405, 602) and working teams of oxen and mules (405, 607f.). One modern scholar surmises that Hesiod may have learned about world geography, especially the catalogue of rivers in "Theogony" (337–45), listening to his father's accounts of his own sea voyages as a merchant. The father probably spoke in the Aeolian dialect of Cyme but Hesiod probably grew up speaking the local Boeotian, belonging to the same dialect group. However, while his poetry features some Aeolisms there are no words that are certainly Boeotian. His basic language was the main literary dialect of the time, Homer's Ionian.
It is probable that Hesiod wrote his poems down, or dictated them, rather than passed them on orally, as rhapsodes did—otherwise the pronounced personality that now emerges from the poems would surely have been diluted through oral transmission from one rhapsode to another. Pausanias asserted that Boeotians showed him an old tablet made of lead on which the "Works" were engraved. If he did write or dictate, it was perhaps as an aid to memory or because he lacked confidence in his ability to produce poems extempore, as trained rhapsodes could do. It certainly wasn't in a quest for immortal fame since poets in his era had probably no such notions for themselves. However, some scholars suspect the presence of large-scale changes in the text and attribute this to oral transmission. Possibly he composed his verses during idle times on the farm, in the spring before the May harvest or the dead of winter.
The personality behind the poems is unsuited to the kind of "aristocratic withdrawal" typical of a rhapsode but is instead "argumentative, suspicious, ironically humorous, frugal, fond of proverbs, wary of women." He was in fact a misogynist of the same calibre as the later poet Semonides. He resembles Solon in his preoccupation with issues of good versus evil and "how a just and all-powerful god can allow the unjust to flourish in this life". He recalls Aristophanes in his rejection of the idealised hero of epic literature in favour of an idealised view of the farmer. Yet the fact that he could eulogise kings in "Theogony" (80 ff., 430, 434) and denounce them as corrupt in "Works and Days" suggests that he could resemble whichever audience he composed for.
Various legends accumulated about Hesiod and they are recorded in several sources:
Two different—yet early—traditions record the site of Hesiod's grave. One, as early as Thucydides, reported in Plutarch, the "Suda" and John Tzetzes, states that the Delphic oracle warned Hesiod that he would die in Nemea, and so he fled to Locris, where he was killed at the local temple to Nemean Zeus, and buried there. This tradition follows a familiar ironic convention: the oracle predicts accurately after all. The other tradition, first mentioned in an epigram by Chersias of Orchomenus written in the 7th century BC (within a century or so of Hesiod's death) claims that Hesiod lies buried at Orchomenus, a town in Boeotia. According to Aristotle's "Constitution of Orchomenus," when the Thespians ravaged Ascra, the villagers sought refuge at Orchomenus, where, following the advice of an oracle, they collected the ashes of Hesiod and set them in a place of honour in their "agora", next to the tomb of Minyas, their eponymous founder. Eventually they came to regard Hesiod too as their "hearth-founder" (, "oikistēs"). Later writers attempted to harmonize these two accounts.
Greeks in the late 5th and early 4th centuries BC considered their oldest poets to be Orpheus, Musaeus, Hesiod and Homer—in that order. Thereafter, Greek writers began to consider Homer earlier than Hesiod. Devotees of Orpheus and Musaeus were probably responsible for precedence being given to their two cult heroes and maybe the Homeridae were responsible in later antiquity for promoting Homer at Hesiod's expense.
The first known writers to locate Homer earlier than Hesiod were Xenophanes and Heraclides Ponticus, though Aristarchus of Samothrace was the first actually to argue the case. Ephorus made Homer a younger cousin of Hesiod, the 5th century BC historian Herodotus ("Histories" II, 53) evidently considered them near-contemporaries, and the 4th century BC sophist Alcidamas in his work "Mouseion" even brought them together for an imagined poetic "ágōn" (), which survives today as the "Contest of Homer and Hesiod". Most scholars today agree with Homer's priority but there are good arguments on either side.
Hesiod certainly predates the lyric and elegiac poets whose work has come down to the modern era. Imitations of his work have been observed in Alcaeus, Epimenides, Mimnermus, Semonides, Tyrtaeus and Archilochus, from which it has been inferred that the latest possible date for him is about 650 BC.
An upper limit of 750 BC is indicated by a number of considerations, such as the probability that his work was written down, the fact that he mentions a sanctuary at Delphi that was of little national significance before c. 750 BC ("Theogony" 499), and that he lists rivers that flow into the Euxine, a region explored and developed by Greek colonists beginning in the 8th century BC. ("Theogony" 337–45).
Hesiod mentions a poetry contest at Chalcis in Euboea where the sons of one Amphidamas awarded him a tripod ("Works and Days" 654–662). Plutarch identified this Amphidamas with the hero of the Lelantine War between Chalcis and Eretria and he concluded that the passage must be an interpolation into Hesiod's original work, assuming that the Lelantine War was too late for Hesiod. Modern scholars have accepted his identification of Amphidamas but disagreed with his conclusion. The date of the war is not known precisely but estimates placing it around 730–705 BC fit the estimated chronology for Hesiod. In that case, the tripod that Hesiod won might have been awarded for his rendition of "Theogony", a poem that seems to presuppose the kind of aristocratic audience he would have met at Chalcis.
Three works have survived which were attributed to Hesiod by ancient commentators: "Works and Days", "Theogony", and "Shield of Heracles". Only fragments exist of other works attributed to him. The surviving works and fragments were all written in the conventional metre and language of epic. However, the "Shield of Heracles" is now known to be spurious and probably was written in the sixth century BC. Many ancient critics also rejected "Theogony" (e.g., Pausanias 9.31.3), even though Hesiod mentions himself by name in that poem. "Theogony" and "Works and Days" might be very different in subject matter, but they share a distinctive language, metre, and prosody that subtly distinguish them from Homer's work and from the "Shield of Heracles" (see Hesiod's Greek below). Moreover, they both refer to the same version of the Prometheus myth. Yet even these authentic poems may include interpolations. For example, the first ten verses of the "Works and Days" may have been borrowed from an Orphic hymn to Zeus (they were recognised as not the work of Hesiod by critics as ancient as Pausanias).
Some scholars have detected a proto-historical perspective in Hesiod, a view rejected by Paul Cartledge, for example, on the grounds that Hesiod advocates a not-forgetting without any attempt at verification. Hesiod has also been considered the father of gnomic verse. He had "a passion for systematizing and explaining things". Ancient Greek poetry in general had strong philosophical tendencies and Hesiod, like Homer, demonstrates a deep interest in a wide range of 'philosophical' issues, from the nature of divine justice to the beginnings of human society. Aristotle ("Metaphysics" 983b–987a) believed that the question of first causes may even have started with Hesiod ("Theogony" 116–53) and Homer ("Iliad" 14.201, 246).
He viewed the world from outside the charmed circle of aristocratic rulers, protesting against their injustices in a tone of voice that has been described as having a "grumpy quality redeemed by a gaunt dignity" but, as stated in the biography section, he could also change to suit the audience. This ambivalence appears to underlie his presentation of human history in "Works and Days", where he depicts a golden period when life was easy and good, followed by a steady decline in behaviour and happiness through the Silver, Bronze and Iron Ages – except that he inserts a heroic age between the last two, representing its warlike men as better than their bronze predecessors. He seems in this case to be catering to two different world-views, one epic and aristocratic, the other unsympathetic to the heroic traditions of the aristocracy.
The "Theogony" is commonly considered Hesiod's earliest work. Despite the different subject matter between this poem and the "Works and Days", most scholars, with some notable exceptions, believe that the two works were written by the same man. As M.L. West writes, "Both bear the marks of a distinct personality: a surly, conservative countryman, given to reflection, no lover of women or life, who felt the gods' presence heavy about him."
The "Theogony" concerns the origins of the world (cosmogony) and of the gods (theogony), beginning with Chaos, Gaia, Tartarus and Eros, and shows a special interest in genealogy. Embedded in Greek myth, there remain fragments of quite variant tales, hinting at the rich variety of myth that once existed, city by city; but Hesiod's retelling of the old stories became, according to Herodotus, the accepted version that linked all Hellenes.
The creation myth in Hesiod has long been held to have Eastern influences, such as the Hittite Song of Kumarbi and the Babylonian Enuma Elis. This cultural crossover would have occurred in the eighth and ninth century Greek trading colonies such as Al Mina in North Syria. (For more discussion, read Robin Lane Fox's "Travelling Heroes" and Walcot's "Hesiod and the Near East.")
The "Works and Days" is a poem of over 800 lines which revolves around two general truths: labour is the universal lot of Man, but he who is willing to work will get by. Scholars have interpreted this work against a background of agrarian crisis in mainland Greece, which inspired a wave of documented colonisations in search of new land. This poem is one of the earliest known musings on economic thought.
This work lays out the five Ages of Man, as well as containing advice and wisdom, prescribing a life of honest labour and attacking idleness and unjust judges (like those who decided in favour of Perses) as well as the practice of usury. It describes immortals who roam the earth watching over justice and injustice. The poem regards labor as the source of all good, in that both gods and men hate the idle, who resemble drones in a hive. In the horror of the triumph of violence over hard work and honor, verses describing the "Golden Age" present the social character and practice of nonviolent diet through agriculture and fruit-culture as a higher path of living sufficiently.
In addition to the "Theogony" and "Works and Days", numerous other poems were ascribed to Hesiod during antiquity. Modern scholarship has doubted their authenticity, and these works are generally referred to as forming part of the "Hesiodic Corpus" whether or not their authorship is accepted. The situation is summed up in this formulation by Glenn Most:
Of these works forming the extended Hesiodic corpus, only the "Shield of Heracles" (, "Aspis Hērakleous") is transmitted intact via a medieval manuscript tradition.
Classical authors also attributed to Hesiod a lengthy genealogical poem known as "Catalogue of Women" or "Ehoiai" (because sections began with the Greek words "ē hoiē," "Or like the one who ..."). It was a mythological catalogue of the mortal women who had mated with gods, and of the offspring and descendants of these unions.
Several additional hexameter poems were ascribed to Hesiod:
In addition to these works, the "Suda" lists an otherwise unknown "dirge for Batrachus, [Hesiod's] beloved".
The Roman bronze bust of the late first century BC found at Herculaneum, the so-called "Pseudo-Seneca", is now thought not to be of Seneca the Younger; it has in fact been recognized since 1813, when an inscribed herma portrait of Seneca with quite different features was discovered, that the bust depicts someone else. Gisela Richter identified it as an imagined portrait of Hesiod, and most scholars now follow her identification.
Hesiod employed the conventional dialect of epic verse, which was Ionian. Comparisons with Homer, a native Ionian, can be unflattering. Hesiod's handling of the dactylic hexameter was not as masterful or fluent as Homer's and one modern scholar refers to his "hobnailed hexameters". His use of language and meter in "Works and Days" and "Theogony" distinguishes him also from the author of the "Shield of Heracles". All three poets, for example, employed digamma inconsistently, sometimes allowing it to affect syllable length and meter, sometimes not. The ratio of observance/neglect of digamma varies between them. The extent of variation depends on how the evidence is collected and interpreted but there is a clear trend, revealed for example in the following set of statistics.
Hesiod does not observe digamma as often as the others do. That result is a bit counter-intuitive since digamma was still a feature of the Boeotian dialect that Hesiod probably spoke, whereas it had already vanished from the Ionic vernacular of Homer. This anomaly can be explained by the fact that Hesiod made a conscious effort to compose like an Ionian epic poet at a time when digamma was not heard in Ionian speech, while Homer tried to compose like an older generation of Ionian bards, when it was heard in Ionian speech. There is also a significant difference in the results for "Theogony" and "Works and Days", but that is merely due to the fact that the former includes a catalog of divinities and therefore it makes frequent use of the definite article associated with digamma, οἱ.
Though typical of epic, his vocabulary features some significant differences from Homer's. One scholar has counted 278 un-Homeric words in "Works and Days", 151 in "Theogony" and 95 in "Shield of Heracles". The disproportionate number of un-Homeric words in "W & D" is due to its un-Homeric subject matter. Hesiod's vocabulary also includes quite a lot of formulaic phrases that are not found in Homer, which indicates that he may have been writing within a different tradition.
Hebrew numerals
The system of Hebrew numerals is a quasi-decimal alphabetic numeral system using the letters of the Hebrew alphabet.
The system was adapted from that of the Greek numerals in the late 2nd century BCE.
The current numeral system is also known as the "Hebrew alphabetic numerals" to contrast with earlier systems of writing numerals used in classical antiquity. These systems were inherited from usage in the Aramaic and Phoenician scripts, attested from c. 800 BC in the so-called Samaria ostraca and sometimes known as "Hebrew-Aramaic numerals", ultimately derived from the Egyptian Hieratic numerals.
The Greek system was adopted in Hellenistic Judaism and had been in use in Greece since about the 5th century BC.
In this system, there is no notation for zero, and the numeric values for individual letters are added together. Each unit (1, 2, ..., 9) is assigned a separate letter, each tens (10, 20, ..., 90) a separate letter, and the first four hundreds (100, 200, 300, 400) a separate letter. The later hundreds (500, 600, 700, 800 and 900) are represented by the sum of two or three letters representing the first four hundreds. To represent numbers from 1,000 to 999,999, the same letters are reused to serve as thousands, tens of thousands, and hundreds of thousands. Gematria (Jewish numerology) uses these transformations extensively.
In Israel today, the decimal system of Arabic numerals (0, 1, 2, 3, etc.) is used in almost all cases (money, age, dates on the civil calendar). Hebrew numerals are used only in special cases, such as when using the Hebrew calendar or numbering a list (similar to a, b, c, d, etc.), much as Roman numerals are used in the West.
The Hebrew language has names for common numbers that range from zero to one million. Letters of the Hebrew alphabet are used to represent numbers in a few traditional contexts, for example in calendars. In other situations Arabic numerals are used. Cardinal and ordinal numbers must agree in gender with the noun they are describing. If there is no such noun (e.g. telephone numbers), the feminine form is used. For ordinal numbers greater than ten the cardinal is used and numbers above the value 20 have no gender.
Note: For ordinal numbers greater than 10, cardinal numbers are used instead.
Note: For numbers greater than 20, gender does not apply. Officially, numbers greater than a million were represented by the long scale; however, on January 21, 2013, the modified short scale (under which the long-scale milliard is substituted for the strict short-scale billion), which was already the colloquial standard, became official.
Cardinal and ordinal numbers must agree in gender (masculine or feminine; mixed groups are treated as masculine) with the noun they are describing. If there is no such noun (e.g. a telephone number or a house number in a street address), the feminine form is used. Ordinal numbers must also agree in number and definite status like other adjectives. The cardinal number precedes the noun (e.g., "shlosha yeladim"), except for the number one which succeeds it (e.g., "yeled echad"). The number two is special: "shnayim" (m.) and "shtayim" (f.) become "shney" (m.) and "shtey" (f.) when followed by the noun they count. For ordinal numbers (numbers indicating position) greater than ten the cardinal is used.
The Hebrew numeric system operates on the additive principle, in which the numeric values of the letters are added together to form the total. For example, 177 is represented as קעז, which (from right to left) corresponds to 100 + 70 + 7 = 177.
Mathematically, this type of system requires 27 letters (1-9, 10-90, 100-900). In practice the last letter, "tav" (which has the value 400) is used in combination with itself and/or other letters from "qof" (100) onwards, to generate numbers from 500 and above. Alternatively, the 22-letter Hebrew numeral set is sometimes extended to 27 by using 5 "sofit" (final) forms of the Hebrew letters.
By convention, the numbers 15 and 16 are represented as טו (9 + 6) and טז (9 + 7), respectively, in order to refrain from using the two-letter combinations יה (10 + 5) and יו (10 + 6), which are alternate written forms for the Name of God in everyday writing. In the calendar, this rule manifests every full moon, since all Hebrew months start on a new moon (see for example: Tu BiShvat).
Combinations which would spell out words with negative connotations are sometimes avoided by switching the order of the letters. For instance, 744, which should be written as תשמ״ד (meaning "you/it will be destroyed"), might instead be written as תשד״מ or תדש״מ (meaning "end to demon").
The Hebrew numeral system has sometimes been extended to include the five final letter forms (ך for 500, ם for 600, ן for 700, ף for 800, ץ for 900), which are then used to indicate the numbers from 500 to 900.
The ordinary additive forms for 500 to 900 are תק, תר, תש, תת and תתק.
Gershayim (U+05F4 in Unicode, and resembling a double quote mark) (sometimes erroneously referred to as "merkha'ot", which is Hebrew for double quote) are inserted before (to the right of) the last (leftmost) letter to indicate that the sequence of letters represents a number rather than a word. This is used in the case where a number is represented by two or more Hebrew numerals ("e.g.," 28 → כ״ח).
Similarly, a single geresh (U+05F3 in Unicode, and resembling a single quote mark) is appended after (to the left of) a single letter to indicate that the letter represents a number rather than a (one-letter) word. This is used in the case where a number is represented by a single Hebrew numeral ("e.g." 100 → ק׳).
Note that geresh and gershayim merely indicate "not a (normal) word". Context usually determines whether they indicate a number or something else (such as an abbreviation).
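The conventions described above (the additive letter values, the 15/16 substitutions, and the geresh/gershayim marks) can be sketched as a short routine. This is an illustrative sketch rather than a standard library: the function name `to_hebrew` and the restriction to 1–999 are assumptions of this example.

```python
# Letter values for the additive system, largest first, so a greedy
# scan picks e.g. 400+400+100 (תתק) for 900.
VALUES = [
    (400, "ת"), (300, "ש"), (200, "ר"), (100, "ק"),
    (90, "צ"), (80, "פ"), (70, "ע"), (60, "ס"), (50, "נ"),
    (40, "מ"), (30, "ל"), (20, "כ"), (10, "י"),
    (9, "ט"), (8, "ח"), (7, "ז"), (6, "ו"), (5, "ה"),
    (4, "ד"), (3, "ג"), (2, "ב"), (1, "א"),
]
GERESH, GERSHAYIM = "\u05F3", "\u05F4"

def to_hebrew(n: int, marks: bool = True) -> str:
    """Render 1-999 as Hebrew numerals (a hypothetical helper)."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch covers 1-999 only")
    tail = ""
    # 15 and 16 become 9+6 and 9+7 to avoid spelling forms of the
    # divine name with yod-he / yod-vav.
    if n % 100 == 15:
        tail, n = "טו", n - 15
    elif n % 100 == 16:
        tail, n = "טז", n - 16
    head = ""
    for value, letter in VALUES:        # greedy additive decomposition
        while n >= value:
            head += letter
            n -= value
    letters = head + tail
    if not marks:
        return letters
    if len(letters) == 1:               # single letter: geresh after it
        return letters + GERESH
    return letters[:-1] + GERSHAYIM + letters[-1]  # gershayim before last
```

For example, `to_hebrew(177, marks=False)` yields קעז and `to_hebrew(28)` yields כ״ח, matching the examples given in the text.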
An alternative method found in old manuscripts and still found on modern-day tombstones is to put a dot above each letter of the number.
In print, Arabic numerals are employed in Modern Hebrew for most purposes. Hebrew numerals are used nowadays primarily for writing the days and years of the Hebrew calendar; for references to traditional Jewish texts (particularly for Biblical chapter and verse and for Talmudic folios); for bulleted or numbered lists (similar to "A", "B", "C", "etc.", in English); and in numerology (gematria).
Thousands are counted separately, and the thousands count precedes the rest of the number (to the "right", since Hebrew is read from right to left). There are no special marks to signify that the "count" is starting over with thousands, which can theoretically lead to ambiguity, although a single quote mark is sometimes used after the letter. When specifying years of the Hebrew calendar in the present millennium, writers usually omit the thousands (presently 5, ה), but when it is included it is accepted to mean 5,000, with no ambiguity. The current Israeli coinage includes the thousands.
“Monday, 15 Adar 5764” (where 5764 = 5(×1000) + 400 + 300 + 60 + 4, and 15 = 9 + 6): יום שני, ט״ו באדר ה׳תשס״ד
“Thursday, 3 Nisan 5767” (where 5767 = 5(×1000) + 400 + 300 + 60 + 7): יום חמישי, ג׳ בניסן ה׳תשס״ז
To see how "today's" date in the Hebrew calendar is written, see, for example, Hebcal date converter.
5780 (2019–20) = תש״פ
5779 (2018–19) = תשע״ט
5772 (2011–12) = תשע״ב
5771 (2010–11) = תשע״א
5770 (2009–10) = תש״ע
5769 (2008–09) = תשס״ט
5761 (2000–01) = תשס״א
5760 (1999–2000) = תש״ס
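The year forms listed above follow mechanically from the same additive rules: spell the part below 1000, insert gershayim before the last letter, and optionally prefix the thousands letter set off with a geresh. A self-contained sketch, with `hebrew_year` and its parameter names being this example's own:

```python
VALUES = [(400, "ת"), (300, "ש"), (200, "ר"), (100, "ק"),
          (90, "צ"), (80, "פ"), (70, "ע"), (60, "ס"), (50, "נ"),
          (40, "מ"), (30, "ל"), (20, "כ"), (10, "י"),
          (9, "ט"), (8, "ח"), (7, "ז"), (6, "ו"), (5, "ה"),
          (4, "ד"), (3, "ג"), (2, "ב"), (1, "א")]
GERESH, GERSHAYIM = "\u05F3", "\u05F4"

def _additive(n: int) -> str:
    """Greedy additive spelling, with the 15/16 substitutions."""
    tail = ""
    if n % 100 == 15:
        tail, n = "טו", n - 15
    elif n % 100 == 16:
        tail, n = "טז", n - 16
    out = ""
    for value, letter in VALUES:
        while n >= value:
            out += letter
            n -= value
    return out + tail

def hebrew_year(year: int, with_thousands: bool = False) -> str:
    """Format a Hebrew calendar year; thousands are usually omitted."""
    thousands, rest = divmod(year, 1000)
    body = _additive(rest)
    if len(body) > 1:                       # gershayim before the last letter
        body = body[:-1] + GERSHAYIM + body[-1]
    else:                                   # single letter takes a geresh
        body = body + GERESH
    if not with_thousands:
        return body
    # the thousands count precedes the rest, set off with a geresh
    return _additive(thousands) + GERESH + body
```

For instance, `hebrew_year(5764, with_thousands=True)` yields ה׳תשס״ד, the year of the 5764 date example above.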
The Abjad numerals are equivalent to the Hebrew numerals up to 400. The Greek numerals differ from the Hebrew ones from 90 upwards because the Greek alphabet has no equivalent for "tsade" (צ).
Hero
A hero (heroine in its feminine form) is a real person or a main fictional character who, in the face of danger, combats adversity through feats of ingenuity, courage or strength. Like other formerly gender-specific terms (such as "actor"), "hero" is often used to refer to any gender, though "heroine" refers only to women. The original hero type of classical epics did such things for the sake of glory and honor. Post-classical and modern heroes, by contrast, perform great deeds or selfless acts for the common good instead of the classical goals of wealth, pride, and fame. The antonym of a hero is a villain. Other terms associated with the concept of a hero include "good guy" and "white hat".
In classical literature, the hero is the main or revered character in heroic epic poetry celebrated through ancient legends of a people, often striving for military conquest and living by a continually flawed personal honor code. The definition of a hero has changed throughout time. The Merriam-Webster dictionary defines a hero as "a person who is admired for great or brave acts or fine qualities." Examples of heroes range from mythological figures, such as Gilgamesh, Achilles and Iphigenia, to historical and modern figures, such as Joan of Arc, Giuseppe Garibaldi, Sophie Scholl, Alvin York, Audie Murphy, and Chuck Yeager, and fictional superheroes, including Superman, Spider-Man, Batman, and Captain America.
The word "hero" comes from the Greek ἥρως ("hērōs"), "hero" (literally "protector" or "defender"), particularly one such as Heracles with divine ancestry or later given divine honors. Before the decipherment of Linear B the original form of the word was assumed to be *, "hērōw-", but the Mycenaean compound "ti-ri-se-ro-e" demonstrates the absence of -w-. Hero as a name appears in pre-Homeric Greek mythology, wherein Hero was a priestess of the goddess, Aphrodite, in a myth that has been referred to often in literature.
According to "The American Heritage Dictionary of the English Language", the Proto-Indo-European root is "*ser" meaning "to protect". According to Eric Partridge in "Origins", the Greek word "hērōs" "is akin to" the Latin "seruāre", meaning "to safeguard". Partridge concludes, "The basic sense of both Hera and hero would therefore be 'protector'." R. S. P. Beekes rejects an Indo-European derivation and asserts that the word has a Pre-Greek origin. Hera was a Greek goddess with many attributes, including protection and her worship appears to have similar proto-Indo-European origins.
A classical hero is considered to be a "warrior who lives and dies in the pursuit of honor" and asserts their greatness by "the brilliancy and efficiency with which they kill". Each classical hero's life focuses on fighting, which occurs in war or during an epic quest. Classical heroes are commonly semi-divine and extraordinarily gifted, such as Achilles, evolving into heroic characters through their perilous circumstances. While these heroes are incredibly resourceful and skilled, they are often foolhardy, court disaster, risk their followers' lives for trivial matters, and behave arrogantly in a childlike manner. During classical times, people regarded heroes with the highest esteem and utmost importance, explaining their prominence within epic literature. The appearance of these mortal figures marks a revolution of audiences and writers turning away from immortal gods to mortal mankind, whose heroic moments of glory survive in the memory of their descendants, extending their legacy.
Hector was a Trojan prince and the greatest fighter for Troy in the Trojan War, which is known primarily through Homer's "Iliad". Hector acted as leader of the Trojans and their allies in the defense of Troy, "killing 31,000 Greek fighters," offers Hyginus. Hector was known not only for his courage, but also for his noble and courtly nature. Indeed, Homer places Hector as peace-loving, thoughtful, as well as bold, a good son, husband and father, and without darker motives. However, his familial values conflict greatly with his heroic aspirations in the "Iliad," as he cannot be both the protector of Troy and a father to his child. Hector is ultimately betrayed by the deities when Athena appears disguised as his ally Deiphobus and convinces him to challenge Achilles, leading to his death at the hands of a superior warrior.
Achilles was a Greek hero who was considered the most formidable military fighter in the entire Trojan War and the central character of the "Iliad". He was the child of Thetis and Peleus, making him a demi-god. He wielded superhuman strength on the battlefield and was blessed with a close relationship to the deities. Achilles famously refused to fight after his dishonoring at the hands of Agamemnon, and only returned to the war due to unadulterated rage after Hector killed his close friend Patroclus. Achilles was known for uncontrollable rage that defined many of his bloodthirsty actions, such as defiling Hector's corpse by dragging it around the city of Troy. Achilles plays a tragic role in the "Iliad" brought about by constant de-humanization throughout the epic, having his "menis" (wrath) overpower his "philos" (love).
Heroes in myth often had close, but conflicted relationships with the deities. Thus Heracles's name means "the glory of Hera", even though he was tormented all his life by Hera, the Queen of the Greek deities. Perhaps the most striking example is the Athenian king Erechtheus, whom Poseidon killed for choosing Athena rather than him as the city's patron deity. When the Athenians worshiped Erechtheus on the Acropolis, they invoked him as "Poseidon Erechtheus".
Fate, or destiny, plays a massive role in the stories of classical heroes. The classical hero's heroic significance stems from battlefield conquests, an inherently dangerous action. The deities in Greek mythology, when interacting with the heroes, often foreshadow the hero's eventual death on the battlefield. Countless heroes and deities go to great lengths to alter their pre-destined fates, but with no success, as neither human nor immortal can change the outcomes prescribed by the three powerful Fates. The most characteristic example of this is found in "Oedipus Rex". After learning that his son, Oedipus, will end up killing him, Laius, the King of Thebes, takes drastic steps to ensure his son's death by removing him from the kingdom. But Oedipus kills his father without a second thought when, many years later, he encounters him in a dispute on the road, neither recognizing the other. This lack of recognition enabled Oedipus to slay his father, ironically further binding his father to his fate.
Stories of heroism may serve as moral examples. However, classical heroes often did not embody the Christian notion of an upstanding, perfectly moral hero. For example, Achilles's hateful rage leads to merciless slaughter, and his overwhelming pride leads him to join the Trojan War only because he does not want his soldiers to win all of the glory. Classical heroes, regardless of their morality, occupied a place in religion. In classical antiquity, cults that venerated deified heroes such as Heracles, Perseus, and Achilles played an important role in Ancient Greek religion. These ancient Greek hero cults worshipped heroes from oral epic tradition, with these heroes often bestowing blessings, especially healing ones, on individuals.
The concept of the "Mythic Hero Archetype" was first developed by Lord Raglan in his 1936 book, "The Hero, A Study in Tradition, Myth and Drama". It is a set of 22 common traits that he said were shared by many heroes in various cultures, myths, and religions throughout history and around the world. Raglan argued that the higher the score, the more likely the figure is mythical.
The concept of a story archetype of the standard monomythical "hero's quest" that was reputed to be pervasive across all cultures, is somewhat controversial. Expounded mainly by Joseph Campbell in his 1949 work "The Hero with a Thousand Faces", it illustrates several uniting themes of hero stories that hold similar ideas of what a hero represents, despite vastly different cultures and beliefs. The monomyth or Hero's Journey consists of three separate stages including the Departure, Initiation, and Return. Within these stages there are several archetypes that the hero of either gender may follow, including the call to adventure (which they may initially refuse), supernatural aid, proceeding down a road of trials, achieving a realization about themselves (or an apotheosis), and attaining the freedom to live through their quest or journey. Campbell offered examples of stories with similar themes such as Krishna, Buddha, Apollonius of Tyana, and Jesus. One of the themes he explores is the androgynous hero, who combines male and female traits, such as Bodhisattva: "The first wonder to be noted here is the androgynous character of the Bodhisattva: masculine Avalokiteshvara, feminine Kwan Yin." In his 1968 book, "The Masks of God: Occidental Mythology", Campbell writes, "It is clear that, whether accurate or not as to biographical detail, the moving legend of the Crucified and Risen Christ was fit to bring a new warmth, immediacy, and humanity, to the old motifs of the beloved Tammuz, Adonis, and Osiris cycles."
Vladimir Propp, in his analysis of Russian fairy tales, concluded that a fairy tale had only eight "dramatis personæ", of which one was the hero, and his analysis has been widely applied to non-Russian folklore. The actions that fall into such a hero's sphere include departure on a search, reaction to the demands of a donor, and marriage.
Propp distinguished between "seekers" and "victim-heroes". A villain could initiate the issue by kidnapping the hero or driving him out; these were victim-heroes. On the other hand, an antagonist could rob the hero, or kidnap someone close to him, or, without the villain's intervention, the hero could realize that he lacked something and set out to find it; these heroes are seekers. Victims may appear in tales with seeker heroes, but the tale does not follow them both.
No history can be written without consideration of the lengthy list of recipients of national medals for bravery, populated by firefighters, policemen and policewomen, ambulance medics, and ordinary have-a-go heroes. These persons risked their lives to try to save or protect the lives of others: for example, the Canadian Cross of Valour (C.V.) "recognizes acts of the most conspicuous courage in circumstances of extreme peril"; examples of recipients are Mary Dohey and David Gordon Cheverie.
The philosopher Hegel gave a central role to the "hero", personalized by Napoleon, as the incarnation of a particular culture's "Volksgeist", and thus of the general "Zeitgeist". Thomas Carlyle's 1841 work, "On Heroes, Hero Worship and the Heroic in History", also accorded a key function to heroes and great men in history. Carlyle centered history on the biography of a few central individuals such as Oliver Cromwell or Frederick the Great. His heroes were political and military figures, the founders or topplers of states. His history of great men included geniuses good and, perhaps for the first time in historical study, evil.
Explicit defenses of Carlyle's position were rare in the second part of the 20th century. Most in the philosophy of history school contend that the motive forces in history may best be described only with a wider lens than the one that Carlyle used for his portraits. For example, Karl Marx argued that history was determined by the massive social forces at play in "class struggles", not by the individuals by whom these forces are played out. After Marx, Herbert Spencer wrote at the end of the 19th century: "You must admit that the genesis of the great man depends on the long series of complex influences which has produced the race in which he appears, and the social state into which that race has slowly grown...[b]efore he can remake his society, his society must make him." Michel Foucault argued in his analysis of societal communication and debate that history was mainly the "science of the sovereign", until its inversion by the "historical and political popular discourse".
Modern examples of the typical hero are Minnie Vautrin, Norman Bethune, Alan Turing, Raoul Wallenberg, Chiune Sugihara, Martin Luther King, Jr., Mother Teresa, Nelson Mandela, Oswaldo Payá, Óscar Elías Biscet, and Aung San Suu Kyi.
The Annales school, led by Lucien Febvre, Marc Bloch, and Fernand Braudel, would contest the exaggeration of the role of individual subjects in history. Indeed, Braudel distinguished various time scales, one accorded to the life of an individual, another accorded to the life of a few human generations, and the last one to civilizations, in which geography, economics, and demography play a role considerably more decisive than that of individual subjects.
Among noticeable events in the studies of the role of the hero and great man in history one should mention Sidney Hook's 1943 book "The Hero in History". In the second half of the twentieth century, such male-focused theory has been contested, among others by feminist writers such as Judith Fetterley in "The Resisting Reader" (1977) and literary theorist Nancy K. Miller in "The Heroine's Text: Readings in the French and English Novel, 1722–1782".
In the epoch of globalization an individual may change the development of the country and of the whole world, so this gives reasons to some scholars to suggest returning to the problem of the role of the hero in history from the viewpoint of modern historical knowledge and using up-to-date methods of historical analysis.
Within the frameworks of developing counterfactual history, attempts are made to examine some hypothetical scenarios of historical development. The hero attracts much attention because most of those scenarios are based on the suppositions: what would have happened if this or that historical individual had or had not been alive.
The word "hero" (or "heroine" in modern times), is sometimes used to describe the protagonist or the romantic interest of a story, a usage which may conflict with the superhuman expectations of heroism. A good example is Anna Karenina, the lead character in the novel of the same title by Leo Tolstoy. In modern literature the hero is more and more a problematic concept. In 1848, for example, William Makepeace Thackeray gave "Vanity Fair" the subtitle, "A Novel without a Hero", and imagined a world in which no sympathetic character was to be found. "Vanity Fair" is a satirical representation of the absence of truly moral heroes in the modern world. The story focuses on the characters, Emmy Sedley and Becky Sharpe (the latter as the clearly defined anti-hero), with the plot focused on the eventual marriage of these two characters to rich men, revealing character flaws as the story progresses. Even the most sympathetic characters, such as Captain Dobbin, are susceptible to weakness, as he is often narcissistic and melancholy.
The larger-than-life hero is a more common feature of fantasy (particularly in comic books and epic fantasy) than more realist works. However, these larger-than-life figures remain prevalent in society. The superhero genre is a multibillion-dollar industry that includes comic books, movies, toys, and video games. Superheroes usually possess extraordinary talents and powers that no living human could ever possess. The superhero stories often pit a super villain against the hero, with the hero fighting the crime caused by the super villain. Examples of long-running superheroes include Superman, Wonder Woman, Batman, and Spider-Man.
Research indicates that male writers are more likely to make heroines superhuman, whereas female writers tend to make heroines ordinary humans, as well as making their male heroes more powerful than their heroines, possibly due to sex differences in valued traits.
Social psychology has begun paying attention to heroes and heroism. Zeno Franco and Philip Zimbardo point out differences between heroism and altruism, and they offer evidence that observer perceptions of unjustified risk play a role above and beyond risk type in determining the ascription of heroic status.
Psychologists have also identified the traits of heroes. Elaine Kinsella and her colleagues have identified 12 central traits of heroism: bravery, moral integrity, conviction, courage, self-sacrifice, protectiveness, honesty, selflessness, determination, saving others, inspiration, and helpfulness. Scott Allison and George Goethals uncovered evidence for "the great eight traits" of heroes: wise, strong, resilient, reliable, charismatic, caring, selfless, and inspiring. These researchers have also identified four primary functions of heroism: heroes give us wisdom; they enhance us; they provide moral modeling; and they offer protection.
An evolutionary psychology explanation for heroic risk-taking is that it is a costly signal demonstrating the ability of the hero. It may be seen as one form of altruism for which there are several other evolutionary explanations as well.
Roma Chatterji has suggested that the hero or more generally protagonist is first and foremost a symbolic representation of the person who is experiencing the story while reading, listening, or watching; thus the relevance of the hero to the individual relies a great deal on how much similarity there is between them and the character. Chatterji suggested that one reason for the hero-as-self interpretation of stories and myths is the human inability to view the world from any perspective but a personal one.
In the Pulitzer Prize-winning book, "The Denial of Death", Ernest Becker argues that human civilization is ultimately an elaborate, symbolic defense mechanism against the knowledge of our mortality, which in turn acts as the emotional and intellectual response to our basic survival mechanism. Becker explains that a basic duality in human life exists between the physical world of objects and a symbolic world of human meaning. Thus, since humanity has a dualistic nature consisting of a physical self and a symbolic self, he asserts that humans are able to transcend the dilemma of mortality through heroism, by focusing attention mainly on the symbolic self. This symbolic self-focus takes the form of an individual's "immortality project" (or ""causa sui" project"), which is essentially a symbolic belief-system that ensures that one is believed superior to physical reality. By successfully living under the terms of the immortality project, people feel they can become heroic and, henceforth, part of something eternal; something that will never die as compared to their physical body. This, he asserts, in turn gives people the feeling that their lives have meaning, a purpose, and are significant in the grand scheme of things. Another theme running throughout the book is that humanity's traditional "hero-systems", such as religion, are no longer convincing in the age of reason. Science attempts to serve as an immortality project, something that Becker believes it can never do, because it is unable to provide agreeable, absolute meanings to human life. The book states that we need new convincing "illusions" that enable people to feel heroic in ways that are agreeable. Becker, however, does not provide any definitive answer, mainly because he believes that there is no perfect solution. Instead, he hopes that gradual realization of humanity's innate motivations, namely death, may help to bring about a better world.
Terror Management Theory (TMT) has generated evidence supporting this perspective.
Hydroxide
Hydroxide is a diatomic anion with chemical formula OH−. It consists of an oxygen and hydrogen atom held together by a covalent bond, and carries a negative electric charge. It is an important but usually minor constituent of water. It functions as a base, a ligand, a nucleophile, and a catalyst. The hydroxide ion forms salts, some of which dissociate in aqueous solution, liberating solvated hydroxide ions. Sodium hydroxide is a multi-million-ton per annum commodity chemical. A hydroxide attached to a strongly electropositive center may itself ionize, liberating a hydrogen cation (H+), making the parent compound an acid.
The corresponding electrically neutral compound HO• is the hydroxyl radical. The corresponding covalently bound group –OH of atoms is the hydroxy group.
Hydroxide ion and hydroxy group are nucleophiles and can act as catalysts in organic chemistry.
Many inorganic substances which bear the word "hydroxide" in their names are not ionic compounds of the hydroxide ion, but covalent compounds which contain hydroxy groups.
The hydroxide ion is a natural part of water because of the self-ionization reaction, in which a hydrogen is passed to its complement, hydronium:
H2O + H2O ⇌ H3O+ + OH−
The equilibrium constant for this reaction, defined as
Kw = [H3O+][OH−],
has a value close to 10−14 at 25 °C, so the concentration of hydroxide ions in pure water is close to 10−7 mol∙dm−3, in order to satisfy the charge-balance constraint. The pH of a solution is equal to the decimal cologarithm of the hydrogen cation concentration; the pH of pure water is close to 7 at ambient temperatures. The concentration of hydroxide ions can be expressed in terms of pOH, which is close to (14 − pH), so the pOH of pure water is also close to 7. Addition of a base to water will reduce the hydrogen cation concentration and therefore increase the hydroxide ion concentration (increase pH, decrease pOH) even if the base does not itself contain hydroxide. For example, ammonia solutions have a pH greater than 7 due to the reaction NH3 + H+ ⇌ NH4+, which decreases the hydrogen cation concentration and thereby increases the hydroxide ion concentration. pOH can be kept at a nearly constant value with various buffer solutions.
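The pH/pOH arithmetic above can be illustrated with a few lines of code. This is a hedged sketch assuming 25 °C, where Kw ≈ 10−14 and hence pH + pOH ≈ 14:

```python
import math

KW = 1e-14  # ion-product (self-ionization) constant of water at 25 °C


def poh_from_ph(ph: float) -> float:
    """pOH from pH via pH + pOH = pKw (about 14 at 25 °C)."""
    return -math.log10(KW) - ph


def hydroxide_concentration(ph: float) -> float:
    """[OH-] in mol/dm3 from pH, using [OH-] = Kw / [H+] with [H+] = 10**-pH."""
    return KW / 10.0 ** -ph
```

For pure water at 25 °C (pH ≈ 7) this gives pOH ≈ 7 and [OH−] ≈ 10−7 mol∙dm−3, matching the values quoted above; at other temperatures Kw, and therefore pKw, differs.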
In aqueous solution the hydroxide ion is a base in the Brønsted–Lowry sense as it can accept a proton from a Brønsted–Lowry acid to form a water molecule. It can also act as a Lewis base by donating a pair of electrons to a Lewis acid. In aqueous solution both hydrogen and hydroxide ions are strongly solvated, with hydrogen bonds between oxygen and hydrogen atoms. Indeed, the bihydroxide ion has been characterized in the solid state. This compound is centrosymmetric and has a very short hydrogen bond (114.5 pm) that is similar to the length in the bifluoride ion (114 pm). In aqueous solution the hydroxide ion forms strong hydrogen bonds with water molecules. A consequence of this is that concentrated solutions of sodium hydroxide have high viscosity due to the formation of an extended network of hydrogen bonds as in hydrogen fluoride solutions.
In solution, exposed to air, the hydroxide ion reacts rapidly with atmospheric carbon dioxide, acting as an acid, to form, initially, the bicarbonate ion:
OH− + CO2 → HCO3−
The equilibrium constant for this reaction can be specified either as a reaction with dissolved carbon dioxide or as a reaction with carbon dioxide gas (see Carbonic acid for values and details). At neutral or acid pH, the reaction is slow, but is catalyzed by the enzyme carbonic anhydrase, which effectively creates hydroxide ions at the active site.
Solutions containing the hydroxide ion attack glass. In this case, the silicates in glass are acting as acids. Basic hydroxides, whether solids or in solution, are stored in airtight plastic containers.
The hydroxide ion can function as a typical electron-pair donor ligand, forming such complexes as tetrahydroxoaluminate/tetrahydroxidoaluminate [Al(OH)4]−. It is also often found in mixed-ligand complexes of the type [ML"x"(OH)"y"]"z"+, where L is a ligand. The hydroxide ion often serves as a bridging ligand, donating one pair of electrons to each of the atoms being bridged. As illustrated by [Pb2(OH)]3+, metal hydroxides are often written in a simplified format. It can even act as a 3-electron-pair donor, as in the tetramer [PtMe3(OH)]4.
When bound to a strongly electron-withdrawing metal centre, hydroxide ligands tend to ionise into oxide ligands. For example, the bichromate ion [HCrO4]− dissociates according to
[HCrO4]− ⇌ [CrO4]2− + H+
with a p"K"a of about 5.9.
The infrared spectra of compounds containing the OH functional group have strong absorption bands in the region centered around 3500 cm−1. The high frequency of molecular vibration is a consequence of the small mass of the hydrogen atom as compared to the mass of the oxygen atom, and this makes detection of hydroxyl groups by infrared spectroscopy relatively easy. A band due to an OH group tends to be sharp. However, the band width increases when the OH group is involved in hydrogen bonding. A water molecule has an HOH bending mode at about 1600 cm−1, so the absence of this band can be used to distinguish an OH group from a water molecule.
When the OH group is bound to a metal ion in a coordination complex, an M−OH bending mode can be observed. For example, in [Sn(OH)6]2− it occurs at 1065 cm−1. The bending mode for a bridging hydroxide tends to be at a lower frequency as in [(bipyridine)Cu(OH)2Cu(bipyridine)]2+ (955 cm−1). M−OH stretching vibrations occur below about 600 cm−1. For example, the tetrahedral ion [Zn(OH)4]2− has bands at 470 cm−1 (Raman-active, polarized) and 420 cm−1 (infrared). The same ion has a (HO)–Zn–(OH) bending vibration at 300 cm−1.
Sodium hydroxide solutions, also known as lye and caustic soda, are used in the manufacture of pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2004 was approximately 60 million tonnes. The principal method of manufacture is the chloralkali process.
Solutions containing the hydroxide ion are generated when a salt of a weak acid is dissolved in water. Sodium carbonate is used as an alkali, for example, by virtue of the hydrolysis reaction
CO32− + H2O ⇌ HCO3− + OH−
Although the base strength of sodium carbonate solutions is lower than a concentrated sodium hydroxide solution, it has the advantage of being a solid. It is also manufactured on a vast scale (42 million tonnes in 2005) by the Solvay process. An example of the use of sodium carbonate as an alkali is when washing soda (another name for sodium carbonate) acts on insoluble esters, such as triglycerides, commonly known as fats, to hydrolyze them and make them soluble.
Bauxite, a basic hydroxide of aluminium, is the principal ore from which the metal is manufactured. Similarly, goethite (α-FeO(OH)) and lepidocrocite (γ-FeO(OH)), basic hydroxides of iron, are among the principal ores used for the manufacture of metallic iron. Numerous other uses can be found in the articles on individual hydroxides.
Aside from NaOH and KOH, which enjoy very large scale applications, the hydroxides of the other alkali metals also are useful. Lithium hydroxide is a strong base, with a p"K"b of −0.36. Lithium hydroxide is used in breathing gas purification systems for spacecraft, submarines, and rebreathers to remove carbon dioxide from exhaled gas.
The hydroxide of lithium is preferred to that of sodium because of its lower mass. Sodium hydroxide, potassium hydroxide, and the hydroxides of the other alkali metals are also strong bases.
Beryllium hydroxide Be(OH)2 is amphoteric. The hydroxide itself is insoluble in water, with a solubility product log "K"*sp of −11.7. Addition of acid gives soluble hydrolysis products, including the trimeric ion [Be3(OH)3(H2O)6]3+, which has OH groups bridging between pairs of beryllium ions making a 6-membered ring. At very low pH the aqua ion [Be(H2O)4]2+ is formed. Addition of hydroxide to Be(OH)2 gives the soluble tetrahydroxoberyllate/tetrahydroxidoberyllate anion, [Be(OH)4]2−.
The solubility in water of the other hydroxides in this group increases with increasing atomic number. Magnesium hydroxide Mg(OH)2 is a strong base (up to the limit of its solubility, which is very low in pure water), as are the hydroxides of the heavier alkaline earths: calcium hydroxide, strontium hydroxide, and barium hydroxide. A solution or suspension of calcium hydroxide is known as limewater and can be used to test for the weak acid carbon dioxide. The reaction Ca(OH)2 + CO2 ⇌ Ca2+ + HCO3− + OH− illustrates the basicity of calcium hydroxide. Soda lime, which is a mixture of the strong bases NaOH and KOH with Ca(OH)2, is used as a CO2 absorbent.
The simplest hydroxide of boron B(OH)3, known as boric acid, is an acid. Unlike the hydroxides of the alkali and alkaline earth hydroxides, it does not dissociate in aqueous solution. Instead, it reacts with water molecules acting as a Lewis acid, releasing protons:
B(OH)3 + 2 H2O ⇌ [B(OH)4]− + H3O+
A variety of oxyanions of boron are known, which, in the protonated form, contain hydroxide groups.
Aluminium hydroxide Al(OH)3 is amphoteric and dissolves in alkaline solution:
Al(OH)3 + OH− ⇌ [Al(OH)4]−
In the Bayer process for the production of pure aluminium oxide from bauxite minerals this equilibrium is manipulated by careful control of temperature and alkali concentration. In the first phase, aluminium dissolves in hot alkaline solution as [Al(OH)4]−, but other hydroxides usually present in the mineral, such as iron hydroxides, do not dissolve because they are not amphoteric. After removal of the insolubles, the so-called red mud, pure aluminium hydroxide is made to precipitate by reducing the temperature and adding water to the extract, which, by diluting the alkali, lowers the pH of the solution. Basic aluminium hydroxide AlO(OH), which may be present in bauxite, is also amphoteric.
In mildly acidic solutions, the hydroxo/hydroxido complexes formed by aluminium are somewhat different from those of boron, reflecting the greater size of Al(III) vs. B(III). The concentration of the species [Al13(OH)32]7+ is very dependent on the total aluminium concentration. Various other hydroxo complexes are found in crystalline compounds. Perhaps the most important is the basic hydroxide AlO(OH), a polymeric material known by the names of the mineral forms boehmite or diaspore, depending on crystal structure. Gallium hydroxide, indium hydroxide, and thallium(III) hydroxide are also amphoteric. Thallium(I) hydroxide is a strong base.
Carbon forms no simple hydroxides. The hypothetical compound C(OH)4 (orthocarbonic acid or methanetetrol) is unstable in aqueous solution:
C(OH)4 → CO2 + 2 H2O
Carbon dioxide is also known as carbonic anhydride, meaning that it forms by dehydration of carbonic acid H2CO3 (OC(OH)2).
Silicic acid is the name given to a variety of compounds with a generic formula [SiO"x"(OH)4−2"x"]"n". "Orthosilicic acid" has been identified in very dilute aqueous solution. It is a weak acid with p"K"a1 = 9.84, p"K"a2 = 13.2 at 25 °C. It is usually written as H4SiO4, but the formula Si(OH)4 is generally accepted. Other silicic acids such as "metasilicic acid" (H2SiO3), "disilicic acid" (H2Si2O5), and "pyrosilicic acid" (H6Si2O7) have been characterized. These acids also have hydroxide groups attached to the silicon; the formulas suggest that these acids are protonated forms of polyoxyanions.
Few hydroxo complexes of germanium have been characterized. Tin(II) hydroxide Sn(OH)2 was prepared in anhydrous media. When tin(II) oxide is treated with alkali the pyramidal hydroxo complex [Sn(OH)3]− is formed. When solutions containing this ion are acidified, the ion [Sn3(OH)4]2+ is formed together with some basic hydroxo complexes. The structure of [Sn3(OH)4]2+ has a triangle of tin atoms connected by bridging hydroxide groups. Tin(IV) hydroxide is unknown but can be regarded as the hypothetical acid from which stannates, with a formula [Sn(OH)6]2−, are derived by reaction with the (Lewis) basic hydroxide ion.
Hydrolysis of Pb2+ in aqueous solution is accompanied by the formation of various hydroxo-containing complexes, some of which are insoluble. The basic hydroxo complex [Pb6O(OH)6]4+ is a cluster of six lead centres with metal–metal bonds surrounding a central oxide ion. The six hydroxide groups lie on the faces of the two external Pb4 tetrahedra. In strongly alkaline solutions soluble plumbate ions are formed, including [Pb(OH)6]2−.
In the higher oxidation states of the pnictogens, chalcogens, halogens, and noble gases there are oxoacids in which the central atom is attached to oxide ions and hydroxide ions. Examples include phosphoric acid H3PO4, and sulfuric acid H2SO4. In these compounds one or more hydroxide groups can dissociate with the liberation of hydrogen cations as in a standard Brønsted–Lowry acid. Many oxoacids of sulfur are known and all feature OH groups that can dissociate.
Telluric acid is often written with the formula H2TeO4·2H2O but is better described structurally as Te(OH)6.
"Ortho"-periodic acid can lose all its protons, eventually forming the periodate ion [IO4]−. It can also be protonated in strongly acidic conditions to give the octahedral ion [I(OH)6]+, completing the isoelectronic series, [E(OH)6]"z", E = Sn, Sb, Te, I; "z" = −2, −1, 0, +1. Other acids of iodine(VII) that contain hydroxide groups are known, in particular in salts such as the "meso"periodate ion that occurs in K4[I2O8(OH)2]·8H2O.
As is common outside of the alkali metals, hydroxides of the elements in lower oxidation states are complicated. For example, phosphorous acid H3PO3 predominantly has the structure OP(H)(OH)2, in equilibrium with a small amount of P(OH)3.
The oxoacids of chlorine, bromine, and iodine have the formula O("n"−1)/2A(OH), where "n" is the oxidation number (+1, +3, +5, or +7) and A = Cl, Br, or I. The only oxoacid of fluorine is F(OH), hypofluorous acid. When these acids are neutralized, the hydrogen atom is removed from the hydroxide group.
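The relation between oxidation number and formula can be made explicit: a halogen in oxidation state "n" carries ("n" − 1)/2 terminal oxide groups plus one hydroxide group. A minimal sketch (the helper function name is ours, for illustration only):

```python
def halogen_oxoacid(symbol: str, oxidation_number: int) -> str:
    """Formula of the oxoacid O_((n-1)/2) A(OH) for a halogen A in
    oxidation state n (+1, +3, +5 or +7), written flat as in the text."""
    if oxidation_number not in (1, 3, 5, 7):
        raise ValueError("oxidation number must be +1, +3, +5 or +7")
    n_oxo = (oxidation_number - 1) // 2          # terminal oxide groups
    prefix = "" if n_oxo == 0 else ("O" if n_oxo == 1 else f"O{n_oxo}")
    return f"{prefix}{symbol}(OH)"

for n in (1, 3, 5, 7):
    print(f"+{n}: {halogen_oxoacid('Cl', n)}")
# +1: Cl(OH)  (hypochlorous),  +3: OCl(OH)  (chlorous),
# +5: O2Cl(OH) (chloric),      +7: O3Cl(OH) (perchloric)
```

Neutralization removes the hydrogen of the single OH group in each case, giving the familiar series of hypohalite, halite, halate, and perhalate anions.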
The hydroxides of the transition metals and post-transition metals usually have the metal in the +2 (M = Mn, Fe, Co, Ni, Cu, Zn) or +3 (M = Fe, Ru, Rh, Ir) oxidation state. None are soluble in water, and many are poorly defined. One complicating feature of the hydroxides is their tendency to undergo further condensation to the oxides, a process called olation. Hydroxides of metals in the +1 oxidation state are also poorly defined or unstable. For example, silver hydroxide Ag(OH) decomposes spontaneously to the oxide (Ag2O). Copper(I) and gold(I) hydroxides are also unstable, although stable adducts of CuOH and AuOH are known. The polymeric compounds M(OH)2 and M(OH)3 are in general prepared by increasing the pH of an aqueous solution of the corresponding metal cation until the hydroxide precipitates out of solution. Conversely, the hydroxides dissolve in acidic solution. Zinc hydroxide Zn(OH)2 is amphoteric, forming the tetrahydroxidozincate ion in strongly alkaline solution.
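The pH-driven precipitation described above can be sketched numerically from the solubility product, Ksp = [M"z"+][OH−]"n". The code below is an illustrative estimate only; the Ksp figures are rough literature values (they vary with source and temperature) and are not taken from this article.

```python
import math

def precipitation_ph(ksp: float, n_oh: int, metal_conc: float) -> float:
    """pH at which M(OH)n just starts to precipitate from a solution
    containing metal_conc mol/L of the free metal cation.
    From Ksp = [M][OH]^n:  [OH] = (Ksp / [M]) ** (1/n)."""
    oh = (ksp / metal_conc) ** (1.0 / n_oh)
    poh = -math.log10(oh)
    return 14.0 - poh          # assumes pKw = 14 (25 degrees C)

# Rough, illustrative Ksp values (order of magnitude only):
for name, ksp, n in [("Zn(OH)2", 3e-17, 2), ("Fe(OH)3", 2.8e-39, 3)]:
    ph = precipitation_ph(ksp, n, 0.01)   # 0.01 M metal solution
    print(f"{name}: precipitation begins near pH {ph:.1f}")
```

With these inputs, Fe(OH)3 is predicted to precipitate at a much lower pH than Zn(OH)2, reflecting the far smaller solubility product of the more highly charged cation's hydroxide.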
Numerous mixed ligand complexes of these metals with the hydroxide ion exist. In fact these are in general better defined than the simpler derivatives. Many can be made by deprotonation of the corresponding metal aquo complex.
Vanadic acid H3VO4 shows similarities with phosphoric acid H3PO4 though it has a much more complex vanadate oxoanion chemistry. Chromic acid H2CrO4, has similarities with sulfuric acid H2SO4; for example, both form acid salts A+[HMO4]−. Some metals, e.g. V, Cr, Nb, Ta, Mo, W, tend to exist in high oxidation states. Rather than forming hydroxides in aqueous solution, they convert to oxo clusters by the process of olation, forming polyoxometalates.
In some cases the products of partial hydrolysis of a metal ion, described above, can be found in crystalline compounds. A striking example is found with zirconium(IV). Because of the high oxidation state, salts of Zr4+ are extensively hydrolyzed in water even at low pH. The compound originally formulated as ZrOCl2·8H2O was found to be the chloride salt of a tetrameric cation [Zr4(OH)8(H2O)16]8+ in which there is a square of Zr4+ ions with two hydroxide groups bridging between Zr atoms on each side of the square and with four water molecules attached to each Zr atom.
The mineral malachite is a typical example of a basic carbonate. The formula Cu2CO3(OH)2 shows that it is halfway between copper carbonate and copper hydroxide. Indeed, in the past the formula was written as CuCO3·Cu(OH)2. The crystal structure is made up of copper, carbonate and hydroxide ions. The mineral atacamite is an example of a basic chloride. It has the formula Cu2Cl(OH)3. In this case the composition, equivalent to CuCl2·3Cu(OH)2, is nearer to that of the hydroxide than to that of the chloride. Copper forms hydroxyphosphate (libethenite), arsenate (olivenite), sulfate (brochantite), and nitrate compounds. White lead is a basic lead carbonate, (PbCO3)2·Pb(OH)2, which has been used as a white pigment because of its opaque quality, though its use is now restricted because it can be a source of lead poisoning.
The hydroxide ion appears to rotate freely in crystals of the heavier alkali metal hydroxides at higher temperatures so as to present itself as a spherical ion, with an effective ionic radius of about 153 pm. Thus, the high-temperature forms of KOH and NaOH have the sodium chloride structure, which gradually freezes in a monoclinically distorted sodium chloride structure at temperatures below about 300 °C. The OH groups still rotate even at room temperature around their symmetry axes and, therefore, cannot be detected by X-ray diffraction. The room-temperature form of NaOH has the thallium iodide structure. LiOH, however, has a layered structure, made up of tetrahedral Li(OH)4 and (OH)Li4 units. This is consistent with the weakly basic character of LiOH in solution, indicating that the Li–OH bond has much covalent character.
The hydroxide ion displays cylindrical symmetry in hydroxides of divalent metals Ca, Cd, Mn, Fe, and Co. For example, magnesium hydroxide Mg(OH)2 (brucite) crystallizes with the cadmium iodide layer structure, with a kind of close-packing of magnesium and hydroxide ions.
The amphoteric hydroxide Al(OH)3 has four major crystalline forms: gibbsite (most stable), bayerite, nordstrandite, and doyleite.
All these polymorphs are built up of double layers of hydroxide ions, with aluminium atoms occupying two-thirds of the octahedral holes between the two layers; they differ only in the stacking sequence of the layers. The structures are similar to the brucite structure. However, whereas the brucite structure can be described as close-packed, in gibbsite the OH groups on the underside of one layer rest on the groups of the layer below. This arrangement led to the suggestion that there are directional bonds between OH groups in adjacent layers. This is an unusual form of hydrogen bonding, since the two hydroxide ions involved would be expected to point away from each other. The hydrogen atoms have been located by neutron diffraction experiments on α-AlO(OH) (diaspore). The O–H–O distance is very short, at 265 pm; the hydrogen is not equidistant between the oxygen atoms, and the short O–H bond makes an angle of 12° with the O–O line. A similar type of hydrogen bond has been proposed for other amphoteric hydroxides, including Be(OH)2, Zn(OH)2, and Fe(OH)3.
A number of mixed hydroxides are known with stoichiometry A3MIII(OH)6, A2MIV(OH)6, and AMV(OH)6. As the formulas suggest, these substances contain M(OH)6 octahedral structural units. Layered double hydroxides may be represented by a general formula in which, most commonly, "z" = 2 and M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+; hence "q" = "x".
Potassium hydroxide and sodium hydroxide are two well-known reagents in organic chemistry.
The hydroxide ion may act as a base catalyst. The base abstracts a proton from a weak acid to give an intermediate that goes on to react with another reagent. Common substrates for proton abstraction are alcohols, phenols, amines, and carbon acids. The p"K"a value for dissociation of a C–H bond is extremely high, but the p"K"a values of the alpha hydrogens of a carbonyl compound are about 30 log units lower. Typical p"K"a values are 16.7 for acetaldehyde and 19 for acetone. Dissociation can occur in the presence of a suitable base.
The base should have a p"K"a value no more than about 4 log units smaller, or the equilibrium will lie almost completely to the left.
The hydroxide ion by itself is not a strong enough base, but it can be converted into one by adding sodium hydroxide to ethanol
to produce the ethoxide ion. The p"K"a for self-dissociation of ethanol is about 16, so the alkoxide ion is a strong enough base. The addition of an alcohol to an aldehyde to form a hemiacetal is an example of a reaction that can be catalyzed by the presence of hydroxide. Hydroxide can also act as a Lewis-base catalyst.
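The "4 log units" rule of thumb above follows from one line of arithmetic: for the proton transfer HA + B− ⇌ A− + HB, the equilibrium constant is K = 10^(p"K"a(HB) − p"K"a(HA)). A minimal sketch using the values quoted in the text (the function name is ours, for illustration only):

```python
def proton_transfer_K(pka_substrate: float, pka_conjugate_acid: float) -> float:
    """Equilibrium constant for  HA + B-  <=>  A- + HB.
    pka_substrate      = pKa of the substrate HA (e.g. acetone, ~19)
    pka_conjugate_acid = pKa of the base's conjugate acid HB (e.g. ethanol, ~16)
    K = Ka(HA) / Ka(HB) = 10 ** (pKa(HB) - pKa(HA))."""
    return 10.0 ** (pka_conjugate_acid - pka_substrate)

# Ethoxide deprotonating acetone: only ~3 log units unfavourable,
# so a small but catalytically useful amount of enolate forms.
print(proton_transfer_K(19.0, 16.0))    # 0.001

# A base whose conjugate acid is 5 log units weaker leaves essentially
# no deprotonated substrate at equilibrium.
print(proton_transfer_K(19.0, 14.0))    # 1e-05
```

The catalytic cycle tolerates an unfavourable equilibrium because the small steady-state concentration of the deprotonated intermediate is continuously consumed by the subsequent reaction step.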
The hydroxide ion is intermediate in nucleophilicity between the fluoride ion F− and the amide ion. The hydrolysis of an ester,
also known as saponification, is an example of a nucleophilic acyl substitution with the hydroxide ion acting as a nucleophile. In this case the leaving group is an alkoxide ion, which immediately removes a proton from a water molecule to form an alcohol. In the manufacture of soap, sodium chloride is added to salt out the sodium salt of the carboxylic acid; this is an example of the application of the common ion effect.
Other cases where hydroxide can act as a nucleophilic reagent are amide hydrolysis, the Cannizzaro reaction, nucleophilic aliphatic substitution, nucleophilic aromatic substitution, and in elimination reactions. The reaction medium for KOH and NaOH is usually water but with a phase-transfer catalyst the hydroxide anion can be shuttled into an organic solvent as well, for example in the generation of the reactive intermediate dichlorocarbene.
H. R. Giger
Hans Ruedi Giger (5 February 1940 – 12 May 2014) was a Swiss artist best known for his airbrush images of humans and machines connected in cold biomechanical relationships. Giger later abandoned airbrush for pastels, markers and ink. He was part of the special effects team that won an Academy Award for the visual design of Ridley Scott's 1979 sci-fi horror film "Alien". His work is on permanent display at the H.R. Giger Museum in Gruyères. His style has been adapted to many forms of media, including record album covers, furniture and tattoos.
Giger was born in 1940 in Chur, the capital city of Graubünden, the largest and easternmost Swiss canton. His father, a pharmacist, viewed art as a "breadless profession" and strongly encouraged him to enter pharmacy. He moved to Zürich in 1962, where he studied architecture and industrial design at the School of Applied Arts until 1970.
Giger's first success was when H. H. Kunz, co-owner of Switzerland's first poster publishing company, printed and distributed Giger's first posters, beginning in 1969.
Giger's style and thematic execution were influential. He was part of the special effects team that won an Academy Award for Best Achievement in Visual Effects for their design work on the film "Alien". His design for the Alien was inspired by his painting "Necronom IV" and earned him an Oscar in 1980. His books of paintings, particularly "Necronomicon" and "Necronomicon II" (1985), and the frequent appearance of his art in "Omni" magazine, contributed to his rise to international prominence. Giger was admitted to the Science Fiction and Fantasy Hall of Fame in 2013. He is also well known for artwork on several music recording albums, including "" by Danzig, "Brain Salad Surgery" by Emerson, Lake & Palmer, "Attahk" by Magma, "Heartwork" by Carcass, "To Mega Therion" by Celtic Frost, "Eparistera Daimones" and "Melana Chasmata" by Triptykon, Deborah Harry's "KooKoo", and "Frankenchrist" by the Dead Kennedys.
In 1998, Giger acquired the Saint-Germain Castle in Gruyères, Switzerland, which now houses the H.R. Giger Museum, a permanent repository of his work.
Giger had a relationship with Swiss actress Li Tobler until she committed suicide in 1975. Tobler's image appears in many of his paintings. He married Mia Bonzanigo in 1979; they divorced a year and a half later.
Giger lived and worked in Zürich with his second wife, Carmen Maria Scheifele Giger, who is the director of the H.R. Giger Museum.
On 12 May 2014, Giger died in a Zürich hospital after suffering injuries from a fall.
In addition to his awards, Giger was recognized by a variety of festivals and institutions. On the first anniversary of his death, in May 2015, the Museum of Arts and Design in New York City staged the series "The Unseen Cinema of HR Giger".
"", a biographical documentary by Belinda Sallin, debuted 27 September 2014 in Zurich, Switzerland.
On 11 July 2018, the asteroid 109712 Giger was named in his memory.
Giger started with small ink drawings before progressing to oil paintings. For most of his career, he worked predominantly in airbrush, creating monochromatic canvasses depicting surreal, nightmarish dreamscapes. He also worked with pastels, markers and ink.
Giger's most distinctive stylistic innovation was his representation of human bodies and machines in cold, interconnected relationships, which he described as "biomechanical". His main influences were the painters Dado, Ernst Fuchs and Salvador Dalí. He was introduced to Dalí by the painter Robert Venosa. Giger was also influenced by the Polish sculptor Stanisław Szukalski and by the painters Austin Osman Spare and Mati Klarwein, and was a personal friend of Timothy Leary. He studied interior and industrial design at the School of Commercial Art in Zurich from 1962 to 1965, and made his first paintings as art therapy.
Giger directed a number of films, including "Swiss Made" (1968), "Tagtraum" (1973), "Giger's Necronomicon" (1975) and "Giger's Alien" (1979).
Giger created furniture designs, particularly the Harkonnen Capo Chair for a film of the novel "Dune" that was to be directed by Alejandro Jodorowsky. Many years later, David Lynch directed the film, using only rough concepts by Giger. Giger had wished to work with Lynch, as he stated in one of his books that Lynch's film "Eraserhead" was closer than even Giger's own films to realizing his vision.
Giger applied his biomechanical style to interior design. One "Giger Bar" appeared in Tokyo, but the realization of his designs was a great disappointment to him, since the Japanese organization behind the venture did not wait for his final designs, and instead used Giger's rough preliminary sketches. For that reason Giger disowned the Tokyo bar. The two Giger Bars in his native Switzerland, in Gruyères and Chur, were built under Giger's close supervision and they accurately reflect his original concepts. At The Limelight in Manhattan, Giger's artwork was licensed to decorate the VIP room, the uppermost chapel of the landmarked church, but it was never intended to be a permanent installation and bore no similarity to the bars in Switzerland. The arrangement was terminated after two years when the Limelight closed.
Giger's art has greatly influenced tattooists and fetishists worldwide. Under a licensing deal Ibanez guitars released an H. R. Giger signature series: the Ibanez ICHRG2, an Ibanez Iceman, features "NY City VI", the Ibanez RGTHRG1 has "NY City XI" printed on it, the S Series SHRG1Z has a metal-coated engraving of "Biomechanical Matrix" on it, and a 4-string SRX bass, SRXHRG1, has "N.Y. City X" on it.
Giger is often referred to in popular culture, especially in science fiction and cyberpunk. William Gibson (who wrote an early script for "Alien 3") seems particularly fascinated: A minor character in "Virtual Light", Lowell, is described as having "New York XXIV" tattooed across his back, and in "Idoru" a secondary character, Yamazaki, describes the buildings of nanotech Japan as Giger-esque.
Halle Berry
Halle Maria Berry (born Maria Halle Berry; August 14, 1966) is an American actress. Berry won the Academy Award for Best Actress for her performance in the romantic drama film "Monster's Ball" (2001), becoming the only woman of African American descent and the only woman of color to have won the award.
Before becoming an actress, Berry was a model and entered several beauty contests, finishing as the first runner-up in the Miss USA pageant and coming in sixth in the Miss World 1986. Her breakthrough film role was in the romantic comedy "Boomerang" (1992), alongside Eddie Murphy, which led to roles in films, such as the family comedy "The Flintstones" (1994), the political comedy-drama "Bulworth" (1998) and the television film "Introducing Dorothy Dandridge" (1999), for which she won a Primetime Emmy Award and a Golden Globe Award.
In addition to her Academy Award, Berry garnered high-profile roles in the 2000s, such as Storm in "X-Men" (2000), the thrillers "Swordfish" (2001) and "Gothika" (2003), and the spy film "Die Another Day" (2002), in which she played Bond girl Jinx. She then appeared in the "X-Men" sequels, "X2" (2003) and "" (2006). In the 2010s, she featured in the science-fiction film "Cloud Atlas" (2012), the crime thriller "The Call" (2013) and the action films "" (2014), "" (2017) and "" (2019).
Berry was one of the highest-paid actresses in Hollywood during the 2000s, and has been involved in the production of several of the films in which she performed. Berry is also a Revlon spokesmodel. She was formerly married to baseball player David Justice, singer-songwriter Eric Benét, and actor Olivier Martinez. She has a child each with Martinez and model Gabriel Aubry.
Berry was born Maria Halle Berry; her name was legally changed to Halle Maria Berry at age five. Her parents selected her middle name from Halle's Department Store, which was then a local landmark in her birthplace of Cleveland, Ohio. Her mother, Judith Ann (née Hawkins), is white and was born in Liverpool, England. Judith Ann worked as a psychiatric nurse. Her father, Jerome Jesse Berry, was an African-American hospital attendant in the psychiatric ward where her mother worked; he later became a bus driver. Berry's parents divorced when she was four years old; she and her older sister, Heidi Berry-Henderson, were raised exclusively by their mother.
Berry has said in published reports that she has been estranged from her father since her childhood, noting in 1992, "I haven't heard from him since [he left]. Maybe he's not alive." Her father was very abusive to her mother. Berry has recalled witnessing her mother being beaten daily, kicked down stairs and hit in the head with a wine bottle.
Berry grew up in Oakwood, Ohio, and graduated from Bedford High School, where she was a cheerleader, honor student, editor of the school newspaper and prom queen. She worked in the children's department at Higbee's department store. She then studied at Cuyahoga Community College. In the 1980s, she entered several beauty contests, winning Miss Teen All American in 1985 and Miss Ohio USA in 1986. She was the 1986 Miss USA first runner-up to Christy Fichtner of Texas. In the Miss USA 1986 pageant interview competition, she said she hoped to become an entertainer or to have something to do with the media. Her interview was awarded the highest score by the judges. She was the first African-American Miss World entrant in 1986, where she finished sixth; Trinidad and Tobago's Giselle Laronde was crowned Miss World. According to the "Current Biography Yearbook", Berry "...pursued a modeling career in New York... Berry's first weeks in New York were less than auspicious: She slept in a homeless shelter and then in a YMCA".
In 1989, Berry moved to New York City to pursue her acting ambitions. During her early time there, she ran out of money and had to live briefly in a homeless shelter. Her situation improved by the end of that year, and she was cast in the role of model Emily Franklin in the short-lived ABC television series "Living Dolls", which was shot in New York and was a spin-off of the hit series "Who's the Boss?". During the taping of "Living Dolls", she lapsed into a coma and was diagnosed with Type 1 diabetes. After the cancellation of "Living Dolls", she moved to Los Angeles. She went on to have a recurring role on the long-running primetime serial "Knots Landing".
Berry's film debut was in a small role for Spike Lee's "Jungle Fever" (1991), in which she played Vivian, a drug addict. That same year, Berry had her first co-starring role in "Strictly Business". In 1992, Berry portrayed a career woman who falls for the lead character played by Eddie Murphy in the romantic comedy "Boomerang". The following year, she caught the public's attention as a headstrong biracial slave in the TV adaptation of "", based on the book by Alex Haley. Berry was in the live-action "Flintstones" movie playing the part of "Sharon Stone", a sultry secretary who seduced Fred Flintstone.
Berry tackled a more serious role, playing a former drug addict struggling to regain custody of her son in "Losing Isaiah" (1995), starring opposite Jessica Lange. She portrayed Sandra Beecher in "Race the Sun" (1996), based on a true story and shot in Australia, and co-starred alongside Kurt Russell in "Executive Decision" (1996). Beginning in 1996, she was a Revlon spokeswoman for seven years and renewed her contract in 2004.
She starred alongside Natalie Deselle Reid in the 1997 comedy film "B*A*P*S". In 1998, Berry received praise for her role in "Bulworth" as an intelligent woman raised by activists who gives a politician (Warren Beatty) a new lease on life. The same year, she played the singer Zola Taylor, one of the three wives of pop singer Frankie Lymon, in the biopic "Why Do Fools Fall in Love". In the 1999 HBO biopic "Introducing Dorothy Dandridge", she portrayed the first black woman to be nominated for the Academy Award for Best Actress; for Berry it was a heartfelt project, which she introduced, co-produced and fought intensely to bring to fruition. Berry's performance was recognized with several awards, including a Primetime Emmy Award and a Golden Globe Award.
Berry portrayed the mutant superhero Storm in the film adaptation of the comic book series "X-Men" (2000) and its sequels, "X2" (2003), "" (2006) and "" (2014). In 2001, Berry appeared in the film "Swordfish", which featured her first topless scene. At first, she was opposed to a sunbathing scene in the film in which she would appear topless, but Berry eventually agreed. Some people attributed her change of heart to a substantial increase in the amount Warner Bros. offered her; she was reportedly paid an additional $500,000 for the short scene. Berry denied these stories, telling one interviewer that they amused her and "made for great publicity for the movie". After turning down numerous roles that required nudity, she said she decided to make "Swordfish" because her then-husband, Eric Benét, supported her and encouraged her to take risks.
Berry appeared as Leticia Musgrove, the troubled wife of an executed murderer (Sean Combs), in the 2001 feature film "Monster's Ball". Her performance earned her the National Board of Review and the Screen Actors Guild awards for Best Actress, and she became the first woman of color to win the Academy Award for Best Actress (in a notable coincidence, she had earlier portrayed Dorothy Dandridge, the first African American nominated for Best Actress, who was born at the same hospital as Berry in Cleveland, Ohio). The NAACP issued the statement: "Congratulations to Halle Berry and Denzel Washington for giving us hope and making us proud. If this is a sign that Hollywood is finally ready to give opportunity and judge performance based on skill and not on skin color then it is a good thing." The role also generated controversy. Her graphic nude love scene with a racist character played by co-star Billy Bob Thornton was the subject of much media chatter and discussion among African Americans. Many in the African-American community were critical of Berry for taking the part. Berry responded: "I don't really see a reason to ever go that far again. That was a unique movie. That scene was special and pivotal and needed to be there, and it would be a really special script that would require something like that again."
Berry asked for a higher fee for Revlon advertisements after winning the Oscar. Ron Perelman, the cosmetics firm's chief, congratulated her, saying how happy he was that she modeled for his company. She replied, "Of course, you'll have to pay me more." Perelman stalked off in a rage. In accepting her award, she gave an acceptance speech honoring previous black actresses who had never had the opportunity. She said, "This moment is so much bigger than me. This is for every nameless, faceless woman of color who now has a chance tonight because this door has been opened."
As Bond girl Giacinta 'Jinx' Johnson in the 2002 blockbuster "Die Another Day", Berry recreated a scene from "Dr. No", emerging from the surf to be greeted by James Bond as Ursula Andress had 40 years earlier. Lindy Hemming, costume designer on "Die Another Day", had insisted that Berry wear a bikini and knife as a homage. Berry has said of the scene: "It's splashy", "exciting", "sexy", "provocative" and "it will keep me still out there after winning an Oscar". The bikini scene was shot in Cadiz; the location was reportedly cold and windy, and footage has been released of Berry wrapped in thick towels in between takes to try to stay warm. According to an ITV news poll, Jinx was voted the fourth toughest girl on screen of all time. Berry was hurt during filming when debris from a smoke grenade flew into her eye. It was removed in a 30-minute operation. After Berry won the Academy Award, rewrites were commissioned to give her more screentime for "X2".
She starred in the psychological thriller "Gothika" opposite Robert Downey Jr. in November 2003, during which she broke her arm in a scene with Downey, who twisted her arm too hard. Production was halted for eight weeks. The film was a moderate hit at the United States box office, taking in $60 million, and earned another $80 million abroad. Berry appeared in the nu metal band Limp Bizkit's music video for "Behind Blue Eyes", which featured on the film's soundtrack. The same year, she was named #1 in "FHM"s 100 Sexiest Women in the World poll.
Berry starred in the title role of the film "Catwoman", for which she received US$12.5 million. Costing over US$100 million, the film grossed only US$17 million in its opening weekend and is widely regarded by critics as one of the worst films ever made. She was awarded the Worst Actress Razzie Award for her performance, and appeared at the ceremony to accept the award in person (while holding her Oscar from "Monster's Ball") with a sense of humor, treating it as a "rock bottom" experience on the way back to the top. Holding the Academy Award in one hand and the Razzie in the other, she said, "I never in my life thought that I would be up here, winning a Razzie! It's not like I ever aspired to be here, but thank you. When I was a kid, my mother told me that if you could not be a good loser, then there's no way you could be a good winner."
Her next film appearance was in the Oprah Winfrey-produced ABC television movie "Their Eyes Were Watching God" (2005), an adaptation of Zora Neale Hurston's novel, with Berry portraying a free-spirited woman whose unconventional sexual mores upset her 1920s contemporaries in a small community. She received her second Primetime Emmy Award nomination for her role. Also in 2005, she served as an executive producer on "Lackawanna Blues", and lent her voice to the character of Cappy, one of the many mechanical beings in the animated feature "Robots".
In the thriller "Perfect Stranger" (2007), Berry starred with Bruce Willis, playing a reporter who goes undercover to unmask the killer of her childhood friend. The film grossed a modest US$73 million worldwide, and received lukewarm reviews from critics, who felt that despite the presence of Berry and Willis, it is "too convoluted to work, and features a twist ending that's irritating and superfluous". Her next 2007 film release was the drama "Things We Lost in the Fire", co-starring Benicio del Toro, where she took on the role of a recent widow befriending the troubled friend of her late husband. The film was the first time she worked with a female director, Denmark's Susanne Bier, which gave her a new feeling of "thinking the same way", something she appreciated. While the film made US$8.6 million in its global theatrical run, it garnered positive reviews from writers; "The Austin Chronicle" found the film to be "an impeccably constructed and perfectly paced drama of domestic and internal volatility" and felt that "Berry is brilliant here, as good as she's ever been".
In April 2007, Berry was awarded a star on the Hollywood Walk of Fame in front of the Kodak Theatre at 6801 Hollywood Boulevard for her contributions to the film industry, and by the end of the decade, she established herself as one of the highest-paid actresses in Hollywood, earning an estimated $10 million per film.
In the independent drama "Frankie and Alice" (2010), Berry played the leading role of a young multiracial American woman with dissociative identity disorder struggling against her alter personality to retain her true self. The film received a limited theatrical release, to a mixed critical response. "The Hollywood Reporter" nevertheless described the film as "a well-wrought psychological drama that delves into the dark side of one woman's psyche" and found Berry to be "spellbinding" in it. She earned the African-American Film Critics Association Award for Best Actress and a Golden Globe Award nomination for Best Actress – Motion Picture Drama. She next formed part of a large ensemble cast in Garry Marshall's romantic comedy "New Year's Eve" (2011), with Michelle Pfeiffer, Jessica Biel, Robert De Niro, Josh Duhamel, Zac Efron, Sarah Jessica Parker, and Sofía Vergara, among many others. In the film, she took on the supporting role of a nurse befriending a dying man (De Niro). While the film was panned by critics, it made US$142 million worldwide.
In 2012, Berry starred as a diving instructor alongside her then-boyfriend Olivier Martinez in the little-seen thriller "Dark Tide", and led an ensemble cast opposite Tom Hanks and Jim Broadbent in The Wachowskis' epic science fiction film "Cloud Atlas" (2012), with each of the actors playing six different characters across a period of five centuries. Budgeted at US$128.8 million, "Cloud Atlas" made US$130.4 million worldwide, and garnered polarized reactions from both critics and audiences.
Berry appeared in a segment of the independent anthology comedy "Movie 43" (2013), which the "Chicago Sun-Times" called "the "Citizen Kane" of awful". Berry found greater success with her next performance, as a 9-1-1 operator receiving a call from a girl kidnapped by a serial killer, in the crime thriller "The Call" (2013). Berry was drawn to "the idea of being a part of a movie that was so empowering for women. We don't often get to play roles like this, where ordinary people become heroic and do something extraordinary." Manohla Dargis of "The New York Times" found the film to be "an effectively creepy thriller", while reviewer Dwight Brown felt that "the script gives Berry a blue-collar character she can make accessible, vulnerable and gutsy[...]". "The Call" was a sleeper hit, grossing US$68.6 million around the globe.
In 2014, Berry signed on to star and serve as a co-executive producer in CBS drama series "Extant", where she took on the role of Molly Woods, an astronaut who struggles to reconnect with her husband and android son after spending 13 months in space. The show ran for two seasons until 2015, receiving largely positive reviews from critics. "USA Today" remarked: "She [Halle Berry] brings a dignity and gravity to Molly, a projected intelligence that allows you to buy her as an astronaut and to see what has happened to her as frightening rather than ridiculous. Berry's all in, and you float along". Also in 2014, Berry launched a new production company, 606 Films, with producing partner Elaine Goldsmith-Thomas. It is named after the Anti-Paparazzi Bill, SB 606, that the actress pushed for and which was signed into law by California Governor Jerry Brown in the fall of 2013. The new company emerged as part of a deal for Berry to work in "Extant".
In the stand-up comedy concert film "" (2016), Berry appeared as herself, opposite Kevin Hart, attending a poker game event that goes horribly wrong. "Kidnap", an abduction thriller Berry filmed in 2014, was released in 2017. In the film, she starred as a diner waitress who pursues a vehicle after her son is kidnapped by its occupants. "Kidnap" grossed US$34 million and garnered mixed reviews from writers, who felt that it "strays into poorly scripted exploitation too often to take advantage of its pulpy premise — or the still-impressive talents of [Berry]." She next played an agent employed by a secret American spy organisation in the action comedy sequel "" (2017), as part of an ensemble cast consisting of Colin Firth, Taron Egerton, Mark Strong, Julianne Moore, and Elton John. While critical response towards the film was mixed, it made US$414 million worldwide.
Alongside Daniel Craig, Berry starred as a working-class mother during the 1992 Los Angeles riots in Deniz Gamze Ergüven's drama "Kings" (2017). The film found a limited theatrical release following its initial screening at the Toronto International Film Festival, and as part of an overall lukewarm reception, "Variety" noted: "It should be said that Berry has given some of the best and worst performances of the past quarter-century, but this is perhaps the only one that swings to both extremes in the same movie". She played Sofia, an assassin, in the film "", which was released on May 17, 2019 by Lionsgate.
In 2017, she provided uncredited vocals to the song "Calling All My Lovelies" by Bruno Mars, from his third studio album "24K Magic".
Berry competed against James Corden in the first rap battle on the first episode of TBS's "Drop the Mic", originally aired on October 24, 2017.
She is, as of February 2019, executive producer of the BET television series "Boomerang", based on the film in which she starred. The series premiered February 12, 2019.
Berry will make her directorial debut with "Bruised", in which she plays a disgraced MMA fighter named Jackie Justice, who reconnects with her estranged son. Filming began in 2019, with shooting in Atlantic City and Newark.
Berry dated Chicago dentist John Ronan from March 1989 to October 1991. In November 1993, Ronan sued Berry for $80,000 in what he claimed were unpaid loans to help launch her career. Berry contended that the money was a gift, and a judge dismissed the case because Ronan did not list Berry as a debtor when he filed for bankruptcy in 1992. According to Berry, a beating from a former abusive boyfriend during the filming of "The Last Boy Scout" in 1991 punctured her eardrum and caused her to lose eighty percent of her hearing in her left ear. Berry has never named the abuser but has said that he is someone well known in Hollywood. In 2004, former boyfriend Christopher Williams accused Wesley Snipes of being responsible for the incident, saying "I'm so tired of people thinking I'm the guy [who did it]. Wesley Snipes busted her eardrum, not me."
Berry first saw baseball player David Justice on TV playing in an MTV celebrity baseball game in February 1992. When a reporter from Justice's hometown of Cincinnati told her that Justice was a fan, Berry gave her phone number to the reporter to give to Justice. Berry married Justice shortly after midnight on January 1, 1993. Following their separation in February 1996, Berry stated publicly that she was so depressed that she considered taking her own life. Berry and Justice were officially divorced on June 24, 1997.
Berry married her second husband, singer-songwriter Eric Benét, on January 24, 2001, following a two-year courtship. Benét underwent treatment for sex addiction in 2002, and by early October 2003 they had separated, with the divorce finalized on January 3, 2005.
In November 2005, Berry began dating French Canadian model Gabriel Aubry, whom she met at a Versace photoshoot. Berry gave birth to their daughter in March 2008. On April 30, 2010, Berry and Aubry announced their relationship had ended some months earlier. In January 2011, Berry and Aubry became involved in a highly publicized custody battle, centered primarily on Berry's desire to move with their daughter from Los Angeles, where Berry and Aubry resided, to France, the home of French actor Olivier Martinez, whom Berry had started dating in 2010 after they met while filming "Dark Tide" in South Africa. Aubry objected to the move on the grounds that it would interfere with their joint custody arrangement. In November 2012, a judge denied Berry's request to move the couple's daughter to France in light of Aubry's objections. Less than two weeks later, on November 22, 2012, Aubry and Martinez were both treated at a hospital for injuries after engaging in a physical altercation at Berry's residence. Martinez performed a citizen's arrest on Aubry, and because it was considered a domestic violence incident, was granted a temporary emergency protective order preventing Aubry from coming within 100 yards of Berry, Martinez, and the child with whom he shares custody with Berry, until November 29, 2012. In turn, Aubry obtained a temporary restraining order against Martinez on November 26, 2012, asserting that the fight began when Martinez threatened to kill Aubry if he did not allow the couple to move to France. Leaked court documents included photos showing significant injuries to Aubry's face, which were widely displayed in the media.
On November 29, 2012, Berry's lawyer announced that Berry and Aubry had reached an amicable custody agreement in court. In June 2014, a Superior Court ruling called for Berry to pay Aubry $16,000 a month in child support (around $200,000 per year) as well as a retroactive payment of $115,000 and a sum of $300,000 for Aubry's attorney fees.
Berry and Martinez confirmed their engagement in March 2012, and married in France on July 13, 2013. In October 2013, Berry gave birth to their son. In 2015, after two years of marriage, the couple announced they were divorcing. The divorce became final in December 2016.
In February 2000, Berry was involved in a traffic collision and left the scene. She pleaded no contest to misdemeanor leaving the scene of an accident.
Along with Pierce Brosnan, Cindy Crawford, Jane Seymour, Dick Van Dyke, Téa Leoni, and Daryl Hannah, Berry successfully fought in 2006 against the Cabrillo Port Liquefied Natural Gas facility that was proposed off the coast of Malibu. Berry said, "I care about the air we breathe, I care about the marine life and the ecosystem of the ocean." In May 2007, Governor Arnold Schwarzenegger vetoed the facility. Hasty Pudding Theatricals gave her its 2006 "Woman of The Year" award. Berry took part in a nearly 2,000-house cell-phone bank campaign for Barack Obama in February 2008. In April 2013, she appeared in a video clip for Gucci's "Chime for Change" campaign that aims to raise funds and awareness of women's issues in terms of education, health, and justice. In August 2013, Berry testified alongside Jennifer Garner before the California State Assembly's Judiciary Committee in support of a bill that would protect celebrities' children from harassment by photographers. The bill passed in September.
Berry was ranked No. 1 on "People" "50 Most Beautiful People in the World" list in 2003 after making the top ten seven times and appeared No. 1 on "FHM" "100 Sexiest Women in the World" the same year. She was named "Esquire" magazine's "Sexiest Woman Alive" in October 2008, about which she stated: "I don't know exactly what it means, but being 42 and having just had a baby, I think I'll take it." "Men's Health" ranked her at No. 35 on their "100 Hottest Women of All-Time" list. In 2009, she was voted #23 on "Empire"'s 100 Sexiest Film Stars. The same year, rapper Hurricane Chris released a song entitled "Halle Berry (She's Fine)", extolling Berry's beauty and sex appeal. At the age of 42 (in 2008), she was named the "Sexiest Black Woman" by Access Hollywood's "TV One Access" survey. Born to an African-American father and a white mother, Berry has stated that her biracial background was "painful and confusing" when she was a young woman, and she made the decision early on to identify as a black woman because she knew that was how she would be perceived.
Robert Koch
Heinrich Hermann Robert Koch (; ; 11 December 1843 – 27 May 1910) was a German physician and microbiologist. As one of the main founders of modern bacteriology, he identified the specific causative agents of tuberculosis, cholera, and anthrax and also gave experimental support for the concept of infectious disease, which included experiments on humans and animals. Koch created and improved laboratory technologies and techniques in the field of microbiology, and made key discoveries in public health. His research led to the creation of Koch's postulates, a series of four generalized principles linking specific microorganisms to specific diseases that proved influential on subsequent epidemiological principles such as the Bradford Hill criteria. For his research on tuberculosis, Koch received the Nobel Prize in Physiology or Medicine in 1905. The Robert Koch Institute is named in his honour.
Koch was born in Clausthal, Germany, on 11 December 1843, to Hermann Koch (1814–1877) and Mathilde Julie Henriette (née Biewend; 1818–1871). Koch excelled academically from an early age. Before entering school in 1848, he had taught himself how to read and write. He graduated from high school in 1862, having excelled in science and math. At the age of 19, Koch entered the University of Göttingen, studying natural science. However, after three semesters, Koch decided to change his area of study to medicine, as he aspired to be a physician. During his fifth semester of medical school, Jacob Henle, an anatomist who had published a theory of contagion in 1840, asked him to participate in his research project on uterine nerve structure. In his sixth semester, Koch began to research at the Physiological Institute, where he studied the secretion of succinic acid, which is a signaling molecule that is also involved in the metabolism of the mitochondria. This would eventually form the basis of his dissertation. In January 1866, Koch graduated from medical school, earning honors of the highest distinction.
Several years after his graduation in 1866, he worked as a surgeon in the Franco-Prussian War and, following his service, worked as a physician in Wollstein in Prussian Posen (now Wolsztyn, Poland). From 1880 to 1885, Koch held a position as government advisor with the Imperial Department of Health. Koch began conducting research on microorganisms in a laboratory connected to his patient examination room. Koch's early research in this laboratory yielded one of his major contributions to the field of microbiology, as he developed the technique of growing bacteria. Furthermore, he managed to isolate and grow selected pathogens in pure laboratory culture.
From 1885 to 1890, he served as an administrator and professor at Berlin University.
In 1891, Koch relinquished his professorship and became director of the Institute for Infectious Diseases, which consisted of a clinical division and beds for the division of clinical research. For this he accepted harsh conditions. After the 1890 scandal with tuberculin, which Koch had discovered and intended as a remedy for tuberculosis, the Prussian Ministry of Health insisted that any of Koch's inventions would unconditionally belong to the government and that he would not be compensated. Koch lost the right to apply for patent protection.
In an attempt to grow bacteria, Koch began to use solid nutrients such as potato slices. Through these initial experiments, Koch observed individual colonies of identical, pure cells. He found that potato slices were not suitable media for all organisms, and later began to use nutrient solutions with gelatin. However, he soon realized that gelatin, like potato slices, was not the optimal medium for bacterial growth, as it did not remain solid at 37 °C, the ideal temperature for growth of most human pathogens. As suggested to him by Walther and Fanny Hesse, Koch began to utilize agar to grow and isolate pure cultures, because this polysaccharide remains solid at 37 °C, is not degraded by most bacteria, and results in a transparent medium.
During his time as government advisor, Koch published a report in which he stated the importance of pure cultures in isolating disease-causing organisms and explained the necessary steps to obtain these cultures, methods which are summarized in Koch's four postulates. Koch's discovery of the causative agent of anthrax led to the formation of a generic set of postulates which can be used in the determination of the cause of most infectious diseases. These postulates, which not only outlined a method for linking cause and effect of an infectious disease but also established the significance of laboratory culture of infectious agents, are listed here: (1) the microorganism must be found in abundance in all organisms suffering from the disease, but not in healthy organisms; (2) the microorganism must be isolated from a diseased organism and grown in pure culture; (3) the cultured microorganism should cause disease when introduced into a healthy organism; and (4) the microorganism must be re-isolated from the inoculated, diseased host and identified as being identical to the original causative agent.
Robert Koch is widely known for his work with anthrax, discovering the causative agent of the fatal disease to be "Bacillus anthracis". He discovered the formation of spores in anthrax bacteria, which could remain dormant under specific conditions. However, under optimal conditions, the spores were activated and caused disease. To determine this causative agent, he dry-fixed bacterial cultures onto glass slides, used dyes to stain the cultures, and observed them through a microscope. His work with anthrax is notable in that he was the first to link a specific microorganism with a specific disease, rejecting the idea of spontaneous generation and supporting the germ theory of disease.
During his time as the government advisor with the Imperial Department of Health in Berlin in the 1880s, Robert Koch became interested in tuberculosis research. At the time, it was widely believed that tuberculosis was an inherited disease. However, Koch was convinced that the disease was caused by a bacterium and was infectious, and tested his four postulates using guinea pigs. Through these experiments, he found that his experiments with tuberculosis satisfied all four of his postulates. In 1882, he published his findings on tuberculosis, in which he reported the causative agent of the disease to be the slow-growing "Mycobacterium tuberculosis". Later, Koch's attempt at developing a drug to treat tuberculosis, tuberculin, led to a scandalous failure: he did not divulge the exact composition, and the claimed treatment success did not materialize; the substance is today used for tuberculosis diagnosis.
Koch and his relationship to Paul Ehrlich, who developed a mechanism to diagnose TB, were portrayed in the 1940 movie "Dr. Ehrlich's Magic Bullet".
Koch next turned his attention to cholera, and began to conduct research in Egypt in the hopes of isolating the causative agent of the disease. However, he was not able to complete the task before the epidemic in Egypt ended, and, after a short trip to Persia, traveled to India to continue with the study. In 1884, in Bombay state of India (in the present-day state of Maharashtra, India), Koch resided and researched at Grant Medical College (or, by some accounts, in Kolkata, formerly Calcutta, in undivided British India), where he was able to determine the causative agent of cholera, isolating "Vibrio cholerae". The bacterium had originally been isolated in 1854 by Italian anatomist Filippo Pacini, although his work had gone largely unnoticed at the time.
Hogshead
A hogshead (abbreviated "hhd", plural "hhds") is a large cask of liquid (or, less often, of a food commodity). More specifically, it refers to a specified volume, measured in either imperial or US customary measures, primarily applied to alcoholic beverages, such as wine, ale, or cider.
A tobacco hogshead was used in British and American colonial times to transport and store tobacco. It was a very large wooden barrel. A standardized hogshead measured 48 inches (122 cm) long and 30 inches (76 cm) in diameter at the head, with capacity varying depending on the width in the middle. Fully packed with tobacco, it weighed about 1,000 pounds (450 kg).
A hogshead in Britain contains about .
The "Oxford English Dictionary" (OED) notes that the hogshead was first standardized by an act of Parliament in 1423, though the standards continued to vary by locality and content. For example, the OED cites an 1897 edition of "Whitaker's Almanack", which specified the gallons of wine in a hogshead varying most particularly across fortified wines: claret/Madeira , port , sherry . The "American Heritage Dictionary" claims that a hogshead can consist of anything from (presumably) .
Eventually, a hogshead of wine came to be , while a hogshead of beer or ale is 54 gallons (250 L if old beer/ale gallons, 245 L if imperial).
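The beer/ale figures above follow directly from the two gallon definitions involved. As a rough sketch (the constants are assumptions: the imperial gallon is defined as exactly 4.54609 L, and the old English ale gallon was 282 cubic inches), the conversion can be checked in a few lines:

```python
# Litre equivalents of a 54-gallon beer/ale hogshead under two
# historical gallon definitions (values assumed, not from the text).
IMPERIAL_GALLON_L = 4.54609          # litres per imperial gallon (exact)
ALE_GALLON_L = 282 * 0.016387064     # old ale gallon: 282 cubic inches

def hogshead_litres(gallons: float, gallon_size_l: float) -> float:
    """Convert a hogshead's gallon count to litres for a given gallon size."""
    return gallons * gallon_size_l

imperial = hogshead_litres(54, IMPERIAL_GALLON_L)  # ≈ 245.5 L ("245 L" above)
old_ale = hogshead_litres(54, ALE_GALLON_L)        # ≈ 249.5 L ("250 L" above)
```

Both results round to the 245 L and 250 L figures quoted in the text, which suggests those figures were produced the same way.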
A hogshead was also used as unit of measurement for sugar in Louisiana for most of the 19th century. Plantations were listed in sugar schedules as having produced "x" number of hogsheads of sugar or molasses. A hogshead was also used for the measurement of herring fished for sardines in Blacks Harbour, New Brunswick and Cornwall.
English philologist Walter William Skeat (1835–1912) noted the origin is to be found in the name for a cask or liquid measure appearing in various forms in Germanic languages, in Dutch "oxhooft" (modern "okshoofd"), Danish "oxehoved", Old Swedish "oxhuvud", etc. The word should therefore be "oxheaved", "hogshead" being a mere corruption.
A hogshead of Madeira wine was approximately equal to 45–48 gallons (0.205–0.218 m3). A hogshead of brandy was approximately equal to 56–61 gallons (0.255–0.277 m3).
Honda
Honda has been the world's largest motorcycle manufacturer since 1959, reaching a cumulative production of 400 million units by the end of 2019, as well as the world's largest manufacturer of internal combustion engines measured by volume, producing more than 14 million internal combustion engines each year. Honda became the second-largest Japanese automobile manufacturer in 2001. Honda was the eighth largest automobile manufacturer in the world in 2015.
Honda was the first Japanese automobile manufacturer to release a dedicated luxury brand, Acura, in 1986. Aside from their core automobile and motorcycle businesses, Honda also manufactures garden equipment, marine engines, personal watercraft and power generators, and other products. Since 1986, Honda has been involved with artificial intelligence/robotics research and released their ASIMO robot in 2000. They have also ventured into aerospace with the establishment of GE Honda Aero Engines in 2004 and the Honda HA-420 HondaJet, which began production in 2012. Honda has three joint-ventures in China: Honda China, Dongfeng Honda, and Guangqi Honda.
In 2013, Honda invested about 5.7% (US$6.8 billion) of its revenues in research and development. Also in 2013, Honda became the first Japanese automaker to be a net exporter from the United States, exporting 108,705 Honda and Acura models, while importing only 88,357.
Throughout his life, Honda's founder, Soichiro Honda, had an interest in automobiles. He worked as a mechanic at the Art Shokai garage, where he tuned cars and entered them in races. In 1937, with financing from his acquaintance Kato Shichirō, Honda founded Tōkai Seiki (Eastern Sea Precision Machine Company) to make piston rings working out of the Art Shokai garage. After initial failures, Tōkai Seiki won a contract to supply piston rings to Toyota, but lost the contract due to the poor quality of their products. After attending engineering school without graduating, and visiting factories around Japan to better understand Toyota's quality control processes, by 1941 Honda was able to mass-produce piston rings acceptable to Toyota, using an automated process that could employ even unskilled wartime laborers.
Tōkai Seiki was placed under the control of the Ministry of Commerce and Industry (called the Ministry of Munitions after 1943) at the start of World War II, and Soichiro Honda was demoted from president to senior managing director after Toyota took a 40% stake in the company. Honda also aided the war effort by assisting other companies in automating the production of military aircraft propellers. The relationships Honda cultivated with personnel at Toyota, Nakajima Aircraft Company and the Imperial Japanese Navy would be instrumental in the postwar period. A US B-29 bomber attack destroyed Tōkai Seiki's Yamashita plant in 1944, and the Itawa plant collapsed in the 13 January 1945 Mikawa earthquake. Soichiro Honda sold the salvageable remains of the company to Toyota after the war for ¥450,000 and used the proceeds to found the Honda Technical Research Institute in October 1946.
With a staff of 12 men working in a shack, they built and sold improvised motorized bicycles, using a supply of 500 war-surplus two-stroke 50 cc Tohatsu radio generator engines. When the engines ran out, Honda began building its own copy of the Tohatsu engine, and supplying these to customers to attach to their bicycles. This was the Honda A-Type, nicknamed the Bata Bata for the sound the engine made. In 1949, the Honda Technical Research Institute was liquidated for ¥1,000,000, or about US$5,000 today; these funds were used to incorporate Honda Motor Co., Ltd. At about the same time Honda hired engineer Kihachiro Kawashima and Takeo Fujisawa, who provided indispensable business and marketing expertise to complement Soichiro Honda's technical bent. The close partnership between Soichiro Honda and Fujisawa lasted until they stepped down together in October 1973.
The first complete motorcycle, with both the frame and engine made by Honda, was the 1949 D-Type, the first Honda to go by the name Dream. Honda Motor Company grew in a short time to become the world's largest manufacturer of motorcycles by 1964.
The first production automobile from Honda was the T360 mini pick-up truck, which went on sale in August 1963. Powered by a small 356-cc straight-4 gasoline engine, it was classified under the cheaper Kei car tax bracket. The first production car from Honda was the S500 sports car, which followed the T360 into production in October 1963. Its chain-driven rear wheels pointed to Honda's motorcycle origins.
Over the next few decades, Honda worked to expand its product line and expanded operations and exports to numerous countries around the world. In 1986, Honda introduced the successful Acura brand to the American market in an attempt to gain ground in the luxury vehicle market. The year 1991 saw the introduction of the Honda NSX supercar, the first all-aluminum monocoque vehicle that incorporated a mid-engine V6 with variable-valve timing.
CEO Tadashi Kume was succeeded by Nobuhiko Kawamoto in 1990. Kawamoto was selected over Shoichiro Irimajiri, who oversaw the successful establishment of Honda of America Manufacturing, Inc. in Marysville, Ohio. Irimajiri and Kawamoto shared a friendly rivalry within Honda; owing to health issues, Irimajiri would resign in 1992.
Following the death of Soichiro Honda and the departure of Irimajiri, Honda found itself quickly being outpaced in product development by other Japanese automakers and was caught off-guard by the truck and sport utility vehicle boom of the 1990s, all which took a toll on the profitability of the company. Japanese media reported in 1992 and 1993 that Honda was at serious risk of an unwanted and hostile takeover by Mitsubishi Motors, which at the time was a larger automaker by volume and was flush with profits from its successful Pajero and Diamante models.
Kawamoto acted quickly to change Honda's corporate culture, rushing through market-driven product development that resulted in recreational vehicles such as the first-generation Odyssey and the CR-V, and a refocusing away from some of the numerous sedans and coupes that were popular with the company's engineers but not with the buying public. The most shocking change to Honda came when Kawamoto ended the company's successful participation in Formula One after the 1992 season, citing costs in light of the takeover threat from Mitsubishi as well as the desire to create a more environmentally friendly company image.
The Honda Aircraft Company was established in 1995, as a wholly owned subsidiary; its goal was to produce jet aircraft under Honda's name.
On 23 February 2015, Honda announced that CEO and President Takanobu Ito would step down and be replaced by Takahiro Hachigo by June; additional retirements by senior managers and directors were expected.
In October 2019, Honda was reported to be in talks with Hitachi to merge the two companies' car parts businesses, creating a components supplier with almost $17 billion in annual sales.
In January 2020, Honda announced that it would be withdrawing employees working in the city of Wuhan, Hubei, China due to the COVID-19 pandemic. Due to the global spread of the virus, Honda became the first major automaker with operations in the US to suspend production in its factories on March 23, 2020. It resumed automobile, engine and transmission production at its US plants on May 11, 2020.
Honda is headquartered in Minato, Tokyo, Japan. Their shares trade on the Tokyo Stock Exchange and the New York Stock Exchange, as well as exchanges in Osaka, Nagoya, Sapporo, Kyoto, Fukuoka, London, Paris, and Switzerland.
The company has assembly plants around the globe. These plants are located in China, the United States, Pakistan, Canada, England, Japan, Belgium, Brazil, México, New Zealand, Malaysia, Indonesia, India, Philippines, Thailand, Vietnam, Turkey, Taiwan, Perú and Argentina. As of July 2010, 89 percent of Honda and Acura vehicles sold in the United States were built in North American plants, up from 82.2 percent a year earlier. This shields profits from the yen's advance to a 15-year high against the dollar.
American Honda Motor Company is based in Torrance, California. Honda Racing Corporation (HRC) is Honda's motorcycle racing division. Honda Canada Inc. is headquartered in Markham, Ontario; it was originally planned to be located in Richmond Hill, Ontario, but delays led the company to look elsewhere. Their manufacturing division, Honda of Canada Manufacturing, is based in Alliston, Ontario. Honda has also created joint ventures around the world, such as Honda Siel Cars and Hero Honda Motorcycles in India, Guangzhou Honda and Dongfeng Honda in China, Boon Siew Honda in Malaysia and Honda Atlas in Pakistan. The company also runs a business innovation initiative called Honda Xcelerator, in order to build relationships with innovators, partner with Silicon Valley startups and entrepreneurs, and help other companies work on prototypes. Xcelerator had worked with reportedly 40 companies as of January 2019. Xcelerator and a developer studio are part of the Honda Innovations group, formed in spring 2017 and based in Mountain View, California.
Following the Japanese earthquake and tsunami in March 2011, Honda announced plans to halve production at its UK plants. The decision was made to put staff at the Swindon plant on a two-day week until the end of May as the manufacturer struggled to source supplies from Japan. It is thought that around 22,500 cars were produced during this period.
For the fiscal year 2018, Honda reported earnings of US$9.534 billion, with an annual revenue of US$138.250 billion, an increase of 6.2% over the previous fiscal cycle. Honda's shares traded at over $32 per share, and its market capitalization was valued at US$50.4 billion in October 2018.
Honda's Net Sales and Other Operating Revenue by Geographical Regions in 2007
Honda's automotive manufacturing ambitions can be traced back to 1963, with the Honda T360, a kei car truck built for the Japanese market. This was followed by the two-door roadster, the Honda S500, also introduced in 1963. In 1965, Honda built a two-door commercial delivery van, called the Honda L700. Honda's first four-door sedan was not the Accord, but the air-cooled, four-cylinder, gasoline-powered Honda 1300 in 1969. The Civic was a hatchback that gained wide popularity internationally, but it was not the first two-door hatchback Honda built; that was the Honda N360, another kei car that was adapted for international sale as the N600. The Civic, which appeared in 1972 and replaced the N600, also had a smaller, water-cooled sibling, the Honda Life, which replaced the air-cooled N360.
The Honda Life represented Honda's efforts in competing in the "kei" car segment, offering sedan, delivery van and small pick-up platforms on a shared chassis. The Life StepVan had a novel approach that, while not initially a commercial success, appears to be an influence in vehicles with the front passengers sitting behind the engine, a large cargo area with a flat roof and a liftgate installed in back, and utilizing a transversely installed engine with a front-wheel-drive powertrain.
As Honda entered automobile manufacturing in the late 1960s, when Japanese manufacturers such as Toyota and Nissan had been making cars since before WWII, it appears that Honda instilled a sense of doing things a little differently from its Japanese competitors. Its mainstay products, like the Accord and Civic (with the exception of its USA-market 1993–97 Passport, which was part of a vehicle exchange program with Isuzu (part of the Subaru-Isuzu joint venture)), have always employed front-wheel-drive powertrains, a long-held Honda tradition. Honda also installed new technologies into its products, first as optional equipment and later as standard, such as anti-lock brakes, speed-sensitive power steering, and multi-port fuel injection in the early 1980s. This desire to be the first to try new approaches is evident in the creation of the first Japanese luxury brand, Acura, and in the all-aluminum, mid-engined sports car, the Honda NSX, which also introduced variable valve timing technology, which Honda calls VTEC.
The Civic is a line of compact cars developed and manufactured by Honda. In North America, the Civic is the second-longest continuously running nameplate from a Japanese manufacturer; only its perennial rival, the Toyota Corolla, introduced in 1968, has been in production longer. The Civic, along with the Accord and Prelude, comprised Honda's vehicles sold in North America until the 1990s when the model lineup was expanded. Having gone through several generational changes, the Civic has become larger and more upmarket, and it currently slots between the Fit and Accord.
Honda produces the Civic Hybrid, a hybrid electric vehicle that competes with the Toyota Prius, and also produces the Insight and CR-Z.
In 2008, Honda increased global production to meet the demand for small cars and hybrids in the U.S. and emerging markets. The company shuffled U.S. production to keep factories busy and boost car output while building fewer minivans and sport utility vehicles as light truck sales fell.
Its first entrance into the pickup segment, the light-duty Ridgeline, won Truck of the Year from "Motor Trend" magazine in 2006. Also in 2006, the redesigned Civic won Car of the Year from the magazine, giving Honda a rare double win of Motor Trend honors.
It is reported that Honda plans to increase hybrid sales in Japan to more than 20% of its total sales in the fiscal year 2011, from 14.8% in the previous year.
Five of the United States Environmental Protection Agency's top ten most fuel-efficient cars from 1984 to 2010 come from Honda, more than from any other automaker. The five models are: the 2000–2006 Honda Insight ( combined), 1986–1987 Honda Civic Coupe HF ( combined), 1994–1995 Honda Civic hatchback VX ( combined), 2006– Honda Civic Hybrid ( combined), and 2010– Honda Insight ( combined). The ACEEE has also rated the Civic GX as the greenest car in America for seven consecutive years.
Honda currently builds vehicles in factories located in Japan, the United States of America, Canada, China, Pakistan, the United Kingdom, Belgium, Brazil, Indonesia, India, Thailand, Turkey, Argentina, Mexico, Taiwan, and the Philippines.
Honda is the largest motorcycle manufacturer in Japan and has been since it started production in 1955.
At its peak in 1982, Honda manufactured almost three million motorcycles annually. By 2006 this figure had reduced to around 550,000 but was still higher than its three domestic competitors.
In 2017, India became the largest motorcycle market of Honda. In India, Honda is leading in the scooters segment, with 59 percent market share.
During the 1960s, when it was a small manufacturer, Honda broke out of the Japanese motorcycle market and began exporting to the U.S. Working with the advertising agency Grey Advertising, Honda created an innovative marketing campaign, using the slogan "You meet the nicest people on a Honda." In contrast to the prevailing negative stereotypes of motorcyclists in America as tough, antisocial rebels, this campaign suggested that Honda motorcycles were made for the everyman. The campaign was hugely successful; the ads ran for three years, and by the end of 1963 alone, Honda had sold 90,000 motorcycles.
Taking Honda's story as an archetype of a smaller manufacturer entering a new market already occupied by highly dominant competitors, the story of their market entry, and their subsequent huge success in the U.S. and around the world, has been the subject of some academic controversy. Competing explanations have been advanced to explain Honda's strategy and the reasons for their success.
The first of these explanations was put forward when, in 1975, Boston Consulting Group (BCG) was commissioned by the UK government to write a report explaining why and how the British motorcycle industry had been out-competed by its Japanese competitors. The report concluded that the Japanese firms, including Honda, had sought a very high scale of production (they had made a large number of motorbikes) in order to benefit from economies of scale and learning curve effects. It blamed the decline of the British motorcycle industry on the failure of British managers to invest enough in their businesses to profit from economies of scale and scope.
The second explanation was offered in 1984 by Richard Pascale, who had interviewed the Honda executives responsible for the firm's entry into the U.S. market. As opposed to the tightly focused strategy of low cost and high scale that BCG accredited to Honda, Pascale found that their entry into the U.S. market was a story of "miscalculation, serendipity, and organizational learning" – in other words, Honda's success was due to the adaptability and hard work of its staff, rather than any long-term strategy. For example, Honda's initial plan on entering the US was to compete in large motorcycles, around 300 cc. Honda's motorcycles in this class suffered performance and reliability problems when ridden over the relatively long distances of the US highways. When the team found that the scooters they were using to get around their U.S. base of San Francisco attracted positive interest from consumers, they fell back on selling the Super Cub instead.
The most recent school of thought on Honda's strategy was put forward by Gary Hamel and C. K. Prahalad in 1989. Creating the concept of core competencies with Honda as an example, they argued that Honda's success was due to its focus on leadership in the technology of internal combustion engines. For example, the high power-to-weight ratio engines Honda produced for its racing bikes provided technology and expertise which was transferable into mopeds. Honda's entry into the U.S. motorcycle market during the 1960s is used as a case study for teaching introductory strategy at business schools worldwide.
Power equipment production started in 1953 with the H-type engine.
Honda power equipment reached record sales in 2007 with 6.4 million units. By 2010 (fiscal year ended 31 March) this figure had decreased to 4.7 million units. Cumulative production of power products has exceeded 85 million units (as of September 2008).
Honda power equipment includes:
Honda engines powered the entire 33-car starting field of the 2010 Indianapolis 500 and for the fifth consecutive race, there were no engine-related retirements during the running of the Memorial Day Classic.
In the 1980s, Honda developed the GY6 engine for use in motor scooters. Although no longer manufactured by Honda, it is still commonly used in many Chinese, Korean and Taiwanese light vehicles.
Honda, despite being known as an engine company, has never built a V8 for passenger vehicles. In the late 1990s, the company resisted considerable pressure from its American dealers for a V8 engine (which would have seen use in top-of-the-line Honda SUVs and Acuras), with American Honda reportedly sending one dealer a shipment of V8 beverages to silence them. Honda considered starting V8 production in the mid-2000s for larger Acura sedans, a new version of the high-end NSX sports car (which previously used DOHC V6 engines with VTEC to achieve its high power output) and possible future ventures into the American full-size truck and SUV segment for both the Acura and Honda brands, but this was canceled in late 2008, with Honda citing environmental and worldwide economic conditions as reasons for the termination of this project.
ASIMO is part of Honda's Research & Development robotics program. It is the eleventh in a line of successive builds starting in 1986 with the Honda E0 and moving through the ensuing Honda E series and Honda P series. Weighing 54 kilograms and standing 130 centimeters tall, ASIMO resembles a small astronaut wearing a backpack, and can walk on two feet in a manner resembling human locomotion, at up to . ASIMO is the world's only humanoid robot able to ascend and descend stairs independently. However, human motions such as climbing stairs are difficult to mimic with a machine, as ASIMO demonstrated by taking two plunges off a staircase.
As an R&D project, Honda's robot ASIMO brings together expertise to create a robot that walks, dances and navigates steps.
In 2010, Honda developed a machine capable of reading a user's brainwaves to move ASIMO. The system uses a helmet covered with electroencephalography and near-infrared spectroscopy sensors that monitor electrical brainwaves and cerebral blood flow—signals that alter slightly during the human thought process. The user thinks of one of a limited number of gestures they want from the robot, which has been fitted with a brain–machine interface.
Honda has also pioneered new technology in its HA-420 HondaJet, manufactured by its subsidiary Honda Aircraft Company, which achieves reduced drag and improved aerodynamic and fuel efficiency, thus reducing operating costs.
Honda has also built a downhill racing bicycle known as the Honda RN-01. It is not available for sale to the public. The bike has a gearbox, which replaces the standard derailleur found on most bikes.
Honda has hired several people to pilot the bike, among them Greg Minnaar. The team is known as Team G Cross Honda.
Honda also builds all-terrain vehicles (ATVs); models include the 420, 450R, 400EX, 300EX, and 250R.
Honda's solar cell subsidiary Honda Soltec (headquarters: Kikuchi-gun, Kumamoto; president and CEO: Akio Kazusa) started sales throughout Japan of thin-film solar cells for public and industrial use on 24 October 2008, after selling solar cells for residential use since October 2007. Honda announced at the end of October 2013 that Honda Soltec would cease business operations, except for support of existing customers, in spring 2014, and that the subsidiary would be dissolved.
Honda has been active in motorsports, including Formula One, Grand Prix motorcycle racing and others.
Honda entered Formula One as a constructor for the first time in the 1964 season at the German Grand Prix with Ronnie Bucknum at the wheel. 1965 saw the addition of Richie Ginther to the team, who scored Honda's first point at the Belgian Grand Prix, and Honda's first win at the Mexican Grand Prix. 1967 saw their next win at the Italian Grand Prix with John Surtees as their driver. In 1968, Jo Schlesser was killed in a Honda RA302 at the French Grand Prix. This racing tragedy, coupled with their commercial difficulties selling automobiles in the United States, prompted Honda to withdraw from all international motorsport that year.
After a learning year in 1965, Honda-powered Brabhams dominated the 1966 French Formula Two championship in the hands of Jack Brabham and Denny Hulme. As there was no European Championship that season, this was the top F2 championship that year. In the early 1980s Honda returned to F2, supplying engines to Ron Tauranac's Ralt team. Tauranac had designed the Brabham cars for their earlier involvement. They were again extremely successful. In a related exercise, John Judd's Engine Developments company produced a turbo "Brabham-Honda" engine for use in IndyCar racing. It won only one race, in 1988 for Bobby Rahal at Pocono.
Honda returned to Formula One in 1983, initially with another Formula Two partner, the Spirit team, before switching abruptly to Williams in 1984. Between 1986 and 1991, Honda won six consecutive Formula One Constructors' Championships as an engine manufacturer, as well as five consecutive Drivers' Championships with Nelson Piquet, Ayrton Senna and Alain Prost. Williams-Honda won the crown in 1986 and 1987. Honda switched allegiance to McLaren in 1988, and then won the title in 1988, 1989, 1990 and 1991. Honda withdrew from Formula One at the end of 1992, although the related Mugen company maintained a presence up to the end of 2000, winning four races with Ligier and Jordan.
Honda debuted in the CART IndyCar World Series as a works supplier in 1994. The engines were far from competitive at first, but after development, the company won six consecutive drivers' championships and four manufacturers' championships between 1996 and 2001. In 2003, Honda transferred its effort to the IRL IndyCar Series with Ilmor supporting HPD. In 2004, Honda-powered cars overwhelmingly dominated the IndyCar Series, winning 14 of 16 IndyCar races, including the Indianapolis 500, and claimed the IndyCar Series Manufacturers' Championship, Drivers' Championship and Rookie of the Year titles. From 2006 to 2011, Honda was the lone engine supplier for the IndyCar Series, including the Indianapolis 500. In the 2006 Indianapolis 500, for the first time in Indianapolis 500 history, the race was run without a single engine problem. Since 2012, HPD has constructed turbocharged V6 engines for its IndyCar effort, winning four Indianapolis 500s, two manufacturers' championships and two drivers' championships.
During 1998, Honda considered returning to Formula One with its own team. The project was aborted after the death of its technical director, Harvey Postlethwaite. Honda instead came back as an official engine supplier to British American Racing (BAR), and briefly to Jordan Grand Prix. Together BAR and Honda achieved 15 podium finishes and second place in the 2004 constructors' championship. Honda bought a stake in the BAR team in 2004 before buying the team outright at the end of 2005, becoming a constructor for the first time since the 1960s. Honda won the 2006 Hungarian Grand Prix with driver Jenson Button. Honda announced in December 2008 that it would be exiting Formula One with immediate effect due to the 2008 global economic crisis.
Honda has competed in the British Touring Car Championship since 1995, though not always as a works team. They have achieved over 170 race victories, seven drivers' championships, five manufacturers' championships and seven teams' championships, ranking second with most wins in the series. Honda also won the World Touring Car Championship in 2013.
Honda made an official announcement on 16 May 2013 that it planned to re-enter into Formula One in 2015 as an engine supplier to McLaren. On 15 September 2017, after a winless campaign spanning three seasons and achieving a best finish of fifth place, McLaren and Honda announced their split, with the latter going on to sign a multi-year deal to supply Toro Rosso, the junior team of Red Bull Racing. After a fairly successful season with Toro Rosso, Honda made a deal to also supply Red Bull Racing. Max Verstappen scored Honda's first win of the V6 turbo-hybrid era at the Austrian Grand Prix.
Honda Racing Corporation (HRC) was formed in 1982. The company combines participation in motorcycle races throughout the world with the development of high potential racing machines. Its racing activities are an important source for the creation of leading-edge technologies used in the development of Honda motorcycles. HRC also contributes to the advancement of motorcycle sports through a range of activities that include sales of production racing motorcycles, support for satellite teams, and rider education programs.
Soichiro Honda, being a race driver himself, could not stay out of international motorsport. In 1959, Honda entered five motorcycles into the Isle of Man TT race, the most prestigious motorcycle race in the world. While always having powerful engines, it took until 1961 for Honda to tune their chassis well enough to allow Mike Hailwood to claim their first Grand Prix victories in the 125 and 250 cc classes. Hailwood would later pick up their first Senior TT wins in 1966 and 1967. Honda's race bikes were known for their "sleek & stylish design" and exotic engine configurations, such as the 5-cylinder, 22,000 rpm, 125 cc bike and their 6-cylinder 250 cc and 297 cc bikes.
In 1979, Honda returned to Grand Prix motorcycle racing with the monocoque-framed, four-stroke NR500. The FIM rules limited engines to four cylinders, so the NR500 had non-circular, 'race-track', cylinders, each with 8 valves and two connecting rods, in order to provide sufficient valve area to compete with the dominant two-stroke racers. Unfortunately, it seemed Honda tried to accomplish too much at one time and the experiment failed. For the 1982 season, Honda debuted its first two-stroke race bike, the NS500, and in 1983, Honda won its first 500 cc Grand Prix World Championship with Freddie Spencer. Since then, Honda has become a dominant marque in motorcycle Grand Prix racing, winning a plethora of top-level titles with riders such as Mick Doohan and Valentino Rossi. Honda also leads in the number of wins at the Isle of Man TT, having notched up 227 victories in the solo classes and Sidecar TT, including Ian Hutchinson's clean sweep at the 2010 races.
The outright lap record on the Snaefell Mountain Course was set for Honda at the 2015 TT by John McGuinness at an average speed of on a Honda CBR1000RR; it was bettered the next year by Michael Dunlop on a BMW S1000RR at .
In the Motocross World Championship, Honda has claimed six world championships. In the World Enduro Championship, Honda has captured eight titles, most recently with Stefan Merriman in 2003 and with Mika Ahola from 2007 to 2010. In motorcycle trials, Honda has claimed three world championships with Belgian rider Eddy Lejeune.
The Honda Civic GX was for a long time the only purpose-built natural gas vehicle (NGV) commercially available in some parts of the U.S. The Honda Civic GX first appeared in 1998 as a factory-modified Civic LX that had been designed to run exclusively on compressed natural gas. The car looks and drives just like a contemporary Honda Civic LX, but does not run on gasoline. In 2001, the Civic GX was rated the cleanest-burning internal combustion engine in the world by the U.S. Environmental Protection Agency (EPA).
The GX was first leased to the City of Los Angeles; in 2005, Honda started offering the GX directly to the public through factory-trained dealers certified to service the GX. Before that, only fleets were eligible to purchase a new Civic GX. In 2006, the Civic GX was released in New York, making it the second state where consumers were able to buy the car.
In June 2015, Honda announced its decision to phase out the commercialization of natural-gas powered vehicles to focus on the development of a new generation of electrified vehicles such as hybrids, plug-in electric cars and hydrogen-powered fuel cell vehicles. Since 2008, Honda has sold about 16,000 natural-gas vehicles, mainly to taxi and commercial fleets.
Honda's Brazilian subsidiary launched flexible-fuel versions of the Honda Civic and Honda Fit in late 2006. Like other Brazilian flex-fuel vehicles, these models run on any blend of hydrous ethanol (E100) and E20–E25 gasoline. Initially, in order to test market preferences, the carmaker decided to produce a limited share of the vehicles with flex-fuel engines: 33 percent of Civic production and 28 percent of Fit production. Also, the sale price for the flex-fuel version was higher than that of the respective gasoline versions, around a US$1,000 premium for the Civic and US$650 for the Fit, despite the fact that all other flex-fuel vehicles sold in Brazil had the same price as their gasoline versions. In July 2009, Honda launched its third flexible-fuel car in the Brazilian market, the Honda City.
During the last two months of 2006, the two flex-fuel models sold 2,427 units against 8,546 gasoline-powered automobiles, jumping to 41,990 flex-fuel cars in 2007 and reaching 93,361 in 2008. Due to the success of the flex versions, by early 2009 a hundred percent of Honda's automobile production for the Brazilian market was flexible-fuel, with only a small share of gasoline versions produced in Brazil for export.
In March 2009, Honda launched in the Brazilian market the first flex-fuel motorcycle in the world. Produced by its Brazilian subsidiary Moto Honda da Amazônia, the CG 150 Titan Mix is sold for around US$2,700.
In late 1999, Honda launched the first commercial hybrid electric car sold in the U.S. market, the Honda Insight, just one month before the introduction of the Toyota Prius, and initially sold for US$20,000. The first-generation Insight was produced from 2000 to 2006 and had a fuel economy of for the EPA's highway rating, the most fuel-efficient mass-produced car at the time. Total global sales for the Insight amounted to only around 18,000 vehicles. Cumulative global sales reached 100,000 hybrids in 2005 and 200,000 in 2007.
Honda introduced the second-generation Insight in Japan in February 2009, and released it in other markets through 2009 and in the U.S. market in April 2009. At US$19,800 for the five-door hatchback, it was the least expensive hybrid available in the U.S. at launch.
Since 2002, Honda has also been selling the Honda Civic Hybrid (2003 model) in the U.S. market. It was followed by the Honda Accord Hybrid, offered in model years 2005 through 2007. Sales of the Honda CR-Z began in Japan in February 2010, becoming Honda's third hybrid electric car in the market. , Honda was producing around 200,000 hybrids a year in Japan.
Sales of the Fit Hybrid began in Japan in October 2010, at the time the lowest-priced gasoline-hybrid electric vehicle sold in the country. The European version, called the Honda Jazz Hybrid, was released in early 2011. During 2011 Honda launched three hybrid models available only in Japan: the Fit Shuttle Hybrid, Freed Hybrid and Freed Spike Hybrid.
Honda's cumulative global hybrid sales passed the 1 million unit milestone at the end of September 2012, 12 years and 11 months after sales of the first generation Insight began in Japan November 1999. A total of 187,851 hybrids were sold worldwide in 2013, and 158,696 hybrids during the first six months of 2014. , Honda has sold more than 1.35 million hybrids worldwide.
In Takanezawa, Japan, on 16 June 2008, Honda Motors produced the first assembly-line FCX Clarity, a hybrid hydrogen fuel cell vehicle. More efficient than a gas-electric hybrid vehicle, the FCX Clarity combines hydrogen and oxygen from ordinary air to generate electricity for an electric motor. In July 2014 Honda announced the end of production of the Honda FCX Clarity for the 2015 model.
The vehicle itself does not emit any pollutants and its only by-products are heat and water. The FCX Clarity also has an advantage over gas-electric hybrids in that it does not use an internal combustion engine to propel itself. Like a gas-electric hybrid, it uses a lithium ion battery to assist the fuel cell during acceleration and capture energy through regenerative braking, thus improving fuel efficiency. The lack of hydrogen filling stations throughout developed countries will keep production volumes low. Honda will release the vehicle in groups of 150. California is the only U.S. market with infrastructure for fueling such a vehicle, though the number of stations is still limited. Building more stations is expensive, as the California Air Resources Board (CARB) granted $6.8 million for four H2 fueling stations, costing US$1.7 million each.
Honda views hydrogen fuel cell vehicles as the long-term replacement of piston cars, not battery cars.
The all-electric Honda EV Plus was introduced in 1997 as a result of CARB's zero-emissions vehicle mandate and was available only for leasing in California. The EV Plus was the first battery electric vehicle from a major automaker with non-lead-acid batteries. The EV Plus had an all-electric range of . Around 276 units were sold in the U.S. and production ended in 1999.
The all-electric Honda Fit EV was introduced in 2012 and has a range of . The all-electric car was launched in the U.S. to retail customers in July 2012 with initial availability limited to California and Oregon. Production is limited to only 1,100 units over the first three years. A total of 1,007 units have been leased in the U.S. through September 2014. The Fit EV was released in Japan through leasing to local government and corporate customers in August 2012. Availability in the Japanese market is limited to 200 units during its first two years. In July 2014 Honda announced the end of production of the Fit EV for the 2015 model.
The Honda Accord Plug-in Hybrid was introduced in 2013 and has an all-electric range of . Sales began in the U.S. in January 2013, and the plug-in hybrid is available only in California and New York. A total of 835 units were sold in the U.S. through September 2014. The Accord PHEV was introduced in Japan in June 2013 and is available only for leasing, primarily to corporations and government agencies.
Starting in 1978, Honda in Japan decided to diversify its sales distribution channels and created Honda Verno, which sold established products with a higher level of standard equipment and a more sporting nature. The establishment of "Honda Verno" coincided with its new sports compact, the Honda Prelude. Later, the Honda Vigor, Honda Ballade, and Honda Quint were added to "Honda Verno" stores. This approach was implemented in response to similar multi-channel sales efforts by rival Japanese automakers Toyota and Nissan.
As sales progressed, Honda created two more sales channels, called Honda Clio in 1984, and Honda Primo in 1985. The "Honda Clio" chain sold products that were traditionally associated with Honda dealerships before 1978, like the Honda Accord, and "Honda Primo" sold the Honda Civic, kei cars such as the Honda Today, superminis like the Honda Capa, along with other Honda products, such as farm equipment, lawnmowers, portable generators, and marine equipment, plus motorcycles and scooters like the Honda Super Cub. A styling tradition was established when "Honda Primo" and "Clio" began operations in that all "Verno" products had the rear license plate installed in the rear bumper, while "Primo" and "Clio" products had the rear license plate installed on the trunk lid or rear door for minivans.
As time progressed and sales began to diminish, partly due to the collapse of the Japanese "bubble economy", "supermini" and "kei" vehicles that were specific to "Honda Primo" were "badge engineered" and sold at the other two sales channels, thereby providing smaller vehicles that sold better at both "Honda Verno" and "Honda Clio" locations. In March 2006, the three sales chains were discontinued with the establishment of "Honda Cars" dealerships. While the network was disbanded, some Japanese Honda dealerships still use the network names, offering all Japanese-market Honda cars at all locations.
Honda sells genuine accessories through a separate retail chain called "Honda Access" for both their motorcycle, scooter, and automobile products. In cooperation with corporate group partner Pioneer, Honda sells an aftermarket line of audio and in-car navigation equipment that can be installed in any vehicle, available at Honda Access locations as well as Japanese auto parts retailers, such as Autobacs. Buyers of used vehicles are directed to a specific Honda retail chain that sells only used vehicles.
In the spring of 2012, Honda in Japan introduced "Honda Cars Small Store", which is devoted to compact cars like the Honda Fit and "kei" vehicles like the Honda N-One and Honda S660 roadster.
Honda Verno: Prelude, Integra, CR-X, Vigor, Saber, Ballade, Quint, Crossroad, Element, NSX, HR-V, Mobilio Spike, S2000, CR-V, That's, MDX, Rafaga, Capa, and the Torneo
Honda Clio: Accord, Legend, Inspire, Avancier, S-MX, Lagreat, Stepwgn, Elysion, Stream, Odyssey (int'l), Domani, Concerto, Accord Tourer, Logo, Fit, Insight, That's, Mobilio, and the City
Honda Primo: Civic, Life, Acty, Vamos, Hobio, Ascot, Ascot Innova, Torneo, Civic Ferio, Freed, Mobilio, Orthia, Capa, Today, Z, and the Beat
In 2003, Honda released its "Cog" advertisement in the UK and on the Internet. To make the ad, the engineers at Honda constructed a Rube Goldberg machine made entirely out of car parts from a Honda Accord Touring. To the chagrin of the engineers, all the parts were taken from two of only six hand-assembled pre-production models of the Accord. The advertisement depicts a single cog which sets off a chain of events that ends with the Honda Accord moving and Garrison Keillor speaking the tagline, "Isn't it nice when things just... work?" It took 606 takes to get it perfect.
In 2004, they produced the "Grrr" advert, usually immediately followed by a shortened version of the 2005 "Impossible Dream" advert. In December 2005, Honda released "The Impossible Dream" a two-minute panoramic advertisement filmed in New Zealand, Japan and Argentina which illustrates the founder's dream to build performance vehicles. While singing the song "Impossible Dream", a man reaches for his racing helmet, leaves his trailer on a minibike, then rides a succession of vintage Honda vehicles: a motorcycle, then a car, then a powerboat, then goes over a waterfall only to reappear piloting a hot air balloon, with Garrison Keillor saying "I couldn't have put it better myself" as the song ends. The song is from the 1960s musical "Man of La Mancha", sung by Andy Williams.
In 2006, Honda released its "Choir" advertisement for the UK and the Internet. It featured a 60-person choir who sang the car noises as footage of the Honda Civic was shown.
In the mid-to-late 2000s in the United States, during model close-out sales for the current year before the start of the new model year, Honda's advertising featured an animated character known simply as Mr. Opportunity, voiced by Rob Paulsen. The casual-looking man talked about various deals offered by Honda and ended with the phrase "I'm Mr. Opportunity, and I'm knockin'", followed by him "knocking" on the television screen or "thumping" the speaker at the end of radio ads. In addition, commercials for Honda's international hatchback, the Jazz, are parodies of well-known pop culture images such as Tetris and Thomas the Tank Engine.
In late 2006, Honda released an ad with ASIMO exploring a museum, looking at the exhibits with almost childlike wonderment (spreading out its arms in the aerospace exhibit, waving hello to an astronaut suit that resembles it, etc.), while Garrison Keillor ruminates on progress. It concludes with the tagline: "More forwards please". Honda also sponsored ITV's coverage of Formula One in the UK for 2007. However, it announced that it would not continue in 2008 because the sponsorship price requested by ITV was too high.
In May 2007, Honda released its Hondamentalism campaign, which focuses on the company's strengths in racing and the use of the red H badge – a symbol of what is termed "Hondamentalism". The campaign highlights the lengths that Honda engineers go to in order to get the most out of an engine, whether it is for bikes, cars, powerboats – even lawnmowers. In the TV spot, Garrison Keillor says, "An engineer once said to build something great is like swimming in honey", while Honda engineers in white suits walk and run towards a great light, battling strong winds and flying debris, holding on to anything that will keep them from being blown away. Finally one of the engineers walks towards a red light, his hand outstretched. A web address is shown for the Hondamentalism website. The digital campaign aims to show how visitors to the site share many of the Hondamentalist characteristics.
At the beginning of 2008, Honda released the "Problem Playground" advert, which outlines Honda's environmental responsibility, demonstrating a hybrid engine, more efficient solar panels and the FCX Clarity, a hydrogen-powered car. The 90-second advert features large-scale puzzles involving Rubik's Cubes, large shapes, and a 3-dimensional puzzle. On 29 May 2008, Honda, in partnership with Channel 4, broadcast a live advertisement. It showed skydivers jumping from an airplane over Spain and forming the letters H, O, N, D and A in mid-air. This live advertisement is generally agreed to be the first of its kind on British television. The advert lasted three minutes.
In 2009, American Honda released the "Dream the Impossible" documentary series, a collection of 5- to 8-minute web vignettes that focus on the core philosophies of Honda. Current short films include "Failure: The Secret to Success", "Kick Out the Ladder" and "Mobility 2088". They feature Honda employees as well as Danica Patrick, Christopher Guest, Ben Bova, Chee Pearlman, Joe Johnston and Orson Scott Card. The film series plays at dreams.honda.com. In the UK, national television ads feature voice-overs from American radio host Garrison Keillor, while in the US the voice of Honda commercials is actor and director Fred Savage.
In the North American market, Honda has started all of its commercials with a two-tone jingle since the mid-2010s.
The late F1 driver Ayrton Senna stated that Honda probably played the most significant role in his three world championships. He had immense respect for founder Soichiro Honda and had a good relationship with Nobuhiko Kawamoto, the chairman of Honda at that time. Senna once called Honda "the greatest company in the world".
As part of its marketing campaign, Honda is an official partner and sponsor of the National Hockey League, the Anaheim Ducks of the NHL, and the arena named after it: Honda Center. Honda also sponsors The Honda Classic golf tournament and is a sponsor of Major League Soccer. The "Honda Player of the Year" award is presented in United States soccer. The "Honda Sports Award" is given to the best female athlete in each of twelve college sports in the United States. One of the twelve Honda Sports Award winners is chosen to receive the Honda-Broderick Cup, as "Collegiate Woman Athlete of the Year."
Honda sponsored La Liga club Valencia CF starting from the 2014–15 season.
Honda has been a presenting sponsor of the Los Angeles Marathon since 2010 in a three-year sponsorship deal, with winners of the LA Marathon receiving a free Honda Accord. Since 1989, the Honda Campus All-Star Challenge has been a quizbowl tournament for Historically black colleges and universities.
In 2010, Chinese labor strikes occurred at the Guangqi Honda and Dongfeng Honda joint ventures.
Handball
Handball (also known as team handball, European handball or Olympic handball) is a team sport in which two teams of seven players each (six outcourt players and a goalkeeper) pass a ball using their hands with the aim of throwing it into the goal of the other team. A standard match consists of two periods of 30 minutes, and the team that scores more goals wins.
Modern handball is played on a court of 40 by 20 metres, with a goal in the middle of each end. The goals are surrounded by a zone where only the defending goalkeeper is allowed; goals must be scored by throwing the ball from outside the zone or while "diving" into it. The sport is usually played indoors, but outdoor variants exist in the forms of field handball, Czech handball (which were more common in the past) and beach handball. The game is fast and high-scoring: professional teams now typically score between 20 and 35 goals each, though lower scores were not uncommon until a few decades ago. Some players may score hat tricks. Body contact is permitted, with the defenders trying to stop the attackers from approaching the goal. No protective equipment is mandated, but players may wear soft protective bands, pads and mouth guards.
The game was codified at the end of the 19th century in Denmark. The modern set of rules was published on 29 October 1917 in Berlin, which is seen as the date of birth of the sport, and has had several revisions since. The first official handball match was played in the same year in Germany. The first international games were played under these rules for men in 1925 and for women in 1930. Men's handball was first played at the 1936 Summer Olympics in Berlin outdoors, and next at the 1972 Summer Olympics in Munich indoors; it has been an Olympic sport since. Women's team handball was added at the 1976 Summer Olympics.
The International Handball Federation was formed in 1946 and has 197 member federations. The sport is most popular in Europe, and European countries have won all medals but one in the men's world championships since 1938. In the women's world championships, only two non-European countries have won the title: South Korea and Brazil. The game also enjoys popularity in East Asia, North Africa and parts of South America.
Games similar to handball were played in Ancient Greece and are represented on amphoras and stone carvings. Although detailed textual reference is rare, there are numerous descriptions of ball games being played where players throw the ball to one another; sometimes this is done in order to avoid interception by a player on the opposing team. Such games were played widely and served as both a form of exercise and a social event.
There is evidence of ancient Roman women playing a version of handball called "expulsim ludere". There are records of handball-like games in medieval France, and among the Inuit in Greenland during the Middle Ages. By the 19th century, there existed similar games of "håndbold" in Denmark, "házená" in the Czech Republic, "handbol" in Ukraine, and "torball" in Germany.
The team handball game of today was codified at the end of the 19th century in northern Europe: primarily in Denmark, Germany, Norway and Sweden. The first written set of team handball rules was published in 1906 by the Danish gym teacher, lieutenant and Olympic medalist Holger Nielsen from Ordrup grammar school, north of Copenhagen. The modern set of rules was published on 29 October 1917 by Max Heiser, Karl Schelenz, and Erich Konigh from Berlin, Germany. 29 October 1917 is therefore seen as the "date of birth" of the sport. The first ever official handball match was played on 2 December 1917 in Berlin. After 1919 the rules were improved by Karl Schelenz. The first international games were played under these rules, between Germany and Belgium by men in 1925 and between Germany and Austria by women in 1930.
In 1926, the Congress of the International Amateur Athletics Federation nominated a committee to draw up international rules for field handball. The International Amateur Handball Federation was formed in 1928 and later the International Handball Federation was formed in 1946.
Men's field handball was played at the 1936 Summer Olympics in Berlin. During the next several decades, indoor handball flourished and evolved in the Scandinavian countries. The sport re-emerged onto the world stage as team handball for the 1972 Summer Olympics in Munich. Women's team handball was added at the 1976 Summer Olympics in Montreal. Due to its popularity in the region, the Eastern European countries that refined the event became the dominant force in the sport when it was reintroduced.
The International Handball Federation organised the men's world championship in 1938 and every four (sometimes three) years from World War II to 1995. Since the 1995 world championship in Iceland, the competition has been held every two years. The women's world championship has been held since 1957. The IHF also organizes women's and men's junior world championships. By July 2009, the IHF listed 166 member federations, with approximately 795,000 teams and 19 million players.
The rules are laid out in the IHF's set of rules.
Two teams of seven players (six field players plus one goalkeeper) take the field and attempt to score points by putting the game ball into the opposing team's goal. In handling the ball, players are subject to the following restrictions:
Notable scoring opportunities can occur when attacking players jump into the goal area. For example, an attacking player may catch a pass while launching inside the goal area, and then shoot or pass before touching the floor. "Doubling" occurs when a diving attacking player passes to another diving teammate.
Handball is played on a court of 40 by 20 metres, with a goal in the centre of each end. The goals are surrounded by a near-semicircular area, called the zone or the crease, defined by a line six metres from the goal. A dashed near-semicircular line nine metres from the goal marks the free-throw line. Each line on the court is part of the area it encompasses. This implies that the middle line belongs to both halves at the same time.
The goals are two metres high and three metres wide. They must be securely bolted either to the floor or to the wall behind.
The goal posts and the crossbar must be made of the same material (e.g., wood or aluminium) and have a square cross-section with sides of 8 cm. The three sides of the beams visible from the playing field must be painted in alternating bands of two contrasting colours, which both have to contrast against the background. The colours on both goals must be the same.
Each goal must feature a net. This must be fastened in such a way that a ball thrown into the goal does not leave or pass the goal under normal circumstances. If necessary, a second net may be clasped to the back of the net on the inside.
The goals are surrounded by the crease. This area is delineated by two quarter circles with a radius of six metres around the far corners of each goal post and a connecting line parallel to the goal line. Only the defending goalkeeper is allowed inside this zone. However, the court players may catch and touch the ball in the air within it as long as the player starts his jump outside the zone and releases the ball before he lands (landing inside the perimeter is allowed in this case as long as the ball has been released).
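The geometry of the crease described above, two quarter circles of radius six metres around the goal posts joined by a line parallel to the goal line, can be sketched as a point-in-zone test. The coordinate convention and function names here are illustrative assumptions, not part of the official rules:

```python
from math import hypot

GOAL_HALF_WIDTH = 1.5   # the goal is three metres wide
RADIUS = 6.0            # crease radius in metres

def in_crease(x, y):
    """Return True if (x, y) lies inside the crease.

    (0, 0) is the centre of the goal line; x runs along the goal line,
    y runs into the court. Points behind the goal line are outside.
    """
    if y < 0 or y > RADIUS:
        return False
    # Straight section directly in front of the goal mouth.
    if -GOAL_HALF_WIDTH <= x <= GOAL_HALF_WIDTH:
        return True
    # Quarter circles of radius six metres around the nearer goal post.
    post_x = GOAL_HALF_WIDTH if x > 0 else -GOAL_HALF_WIDTH
    return hypot(x - post_x, y) <= RADIUS
```

Note that along the goal line the crease extends 7.5 metres to each side of the centre (post offset plus the six-metre radius), matching the two quarter circles described above.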
If a player without the ball contacts the ground inside the goal perimeter, or the line surrounding the perimeter, they must take the most direct path out of it. However, should a player cross the zone in an attempt to gain an advantage (e.g., better position), their team cedes the ball. Similarly, violation of the zone by a defending player is penalized only if they do so in order to gain an advantage in defending.
Outside of one long edge of the court to both sides of the middle line are the substitution areas for each team. Team officials, substitutes, and suspended players must wait within this area. A team's area is the same side as the goal the team is defending; during halftime, substitution areas are swapped. Any player entering or leaving the play must cross the substitution line which is part of the side line and extends from the middle line to the team's side.
A standard match has two 30-minute halves with a 10-minute halftime intermission (15 minutes at major championships and the Olympics). At half-time, teams switch sides of the court as well as benches. For youths, the length of the halves is reduced: 25 minutes at ages 12 to 15, and 20 minutes at ages 8 to 11, though national federations of some countries may differ in their implementation from the official guidelines.
If a decision must be reached in a particular match (e.g., in a tournament) and it ends in a draw after regular time, there are at maximum two overtimes, each consisting of two straight 5-minute periods with a one-minute break in between. Should these not decide the game either, the winning team is determined in a penalty shootout (best-of-five rounds; if still tied, extra rounds are added until one team wins).
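The tie-breaking ladder above can be sketched as a simple decision function. This is a hypothetical illustration (the function name and score format are assumptions); the internal scoring of the shootout rounds is omitted:

```python
def deciding_stage(a_goals, b_goals, ot1=None, ot2=None):
    """Return which stage decides the match.

    a_goals/b_goals are the cumulative scores after regular time;
    ot1/ot2 are optional (a, b) cumulative scores after each overtime
    (two straight 5-minute periods per overtime).
    """
    if a_goals != b_goals:
        return "regular time"
    if ot1 is not None and ot1[0] != ot1[1]:
        return "first overtime"
    if ot2 is not None and ot2[0] != ot2[1]:
        return "second overtime"
    # Best-of-five rounds, extended round by round while still tied.
    return "penalty shootout"
```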
The referees may call "timeout" according to their sole discretion; typical reasons are injuries, suspensions, or court cleaning. Penalty throws should trigger a timeout only for lengthy delays, such as a change of the goalkeeper.
Since 2012, teams can call three "team timeouts" per game (up to two per half), each lasting one minute. This right may only be invoked by the team in possession of the ball. A team representative must place a green card marked with a black "T" on the timekeeper's desk. The timekeeper then immediately interrupts the game by sounding an acoustic signal and stops the clock. Before 2012, teams were allowed only one timeout per half. For the purpose of calling timeouts, overtime and shootouts are extensions of the second half.
A handball match is adjudicated by two equal referees. Some national bodies allow games with only a single referee in special cases like illness on short notice. Should the referees disagree on any occasion, a decision is made on mutual agreement during a short timeout; or, in case of punishments, the more severe of the two comes into effect. The referees are obliged to make their decisions "on the basis of their observations of facts". Their judgements are final and can be appealed against only if not in compliance with the rules.
The referees position themselves in such a way that the team players are confined between them. They stand diagonally aligned so that each can observe one side line. Depending on their positions, one is called "field referee" and the other "goal referee". These positions automatically switch on ball turnover. They physically exchange their positions approximately every 10 minutes (long exchange), and change sides every five minutes (short exchange).
The IHF defines 18 hand signals for quick visual communication with players and officials. The signal for warning is accompanied by a yellow card. A disqualification for the game is indicated by a red card, followed by a blue card if the disqualification will be accompanied by a report. The referees also use whistle blows to indicate infractions or to restart the play.
The referees are supported by a "scorekeeper" and a "timekeeper" who attend to formal things such as keeping track of goals and suspensions, or starting and stopping the clock, respectively. They also keep an eye on the benches and notify the referees on substitution errors. Their desk is located between the two substitution areas.
Each team consists of seven players on court and seven substitute players on the bench. One player on the court must be the designated goalkeeper, whose clothing must differ from that of the rest of the field players. Substitution of players can be done in any number and at any time during game play. An exchange takes place over the substitution line. A prior notification of the referees is not necessary.
Some national bodies, such as the Deutsche Handball Bund (DHB, "German Handball Federation"), allow substitution in junior teams only when in ball possession or during timeouts. This restriction is intended to prevent early specialization of players to offence or defence.
Field players are allowed to touch the ball with any part of their bodies above and including the knee. As in several other team sports, a distinction is made between catching and dribbling. A player who is in possession of the ball may stand stationary for only three seconds, and may take only three steps. They must then either shoot, pass, or dribble the ball. Taking more than three steps at any time is considered travelling, and results in a turnover. A player may dribble as many times as they want (though, since passing is faster, it is the preferred method of attack), as long as during each dribble the hand contacts only the top of the ball. Therefore, carrying is completely prohibited, and results in a turnover. After the dribble is picked up, the player has the right to another three seconds or three steps. The ball must then be passed or shot, as further holding or dribbling will result in a "double dribble" turnover and a free throw for the other team. Other offensive infractions that result in a turnover include charging and setting an illegal screen. Carrying the ball into the six-meter zone results either in ball possession by the goalkeeper (by attacker) or turnover (by defender).
Only the goalkeepers are allowed to move freely within the goal perimeter, although they may not cross the goal perimeter line while carrying or dribbling the ball. Within the zone, they are allowed to touch the ball with all parts of their bodies, including their feet, with a defensive aim (for other actions, they are subject to the same restrictions as the field players). The goalkeepers may participate in the normal play of their teammates. They may be substituted by a regular field player if their team elects to use this scheme in order to outnumber the defending players. Earlier, this field player became the designated goalkeeper on the court and had to wear a vest or bib to be identified as such; that shirt had to be equal in colour and form to the goalkeeper's shirt, to avoid confusion. A rule change meant to make the game more offensive now allows any player to substitute with the goalkeeper. The new rule resembles the one used in ice hockey. This rule was first used in the women's world championship in December 2015 and has since been used by the men's European championship in January 2016 and by both genders in the Olympic tournament in Rio in 2016.
If either goalkeeper deflects the ball over the outer goal line, their team stays in possession of the ball, in contrast to other sports like football. The goalkeeper resumes the play with a throw from within the zone ("goalkeeper throw"). Passing to one's own goalkeeper results in a turnover. In a penalty shot, throwing the ball against the head of a goalkeeper who is not moving risks a direct disqualification ("red card").
Outside of their own D-zone, the goalkeeper is treated as a field player and has to follow the field players' rules; holding or tackling an opponent outside the area risks a direct disqualification. The goalkeeper may not return to the area with the ball.
Each team is allowed a maximum of four team officials seated on the benches. An official is anybody who is neither player nor substitute. One official must be the designated representative, who is usually the team manager. Since 2012, the representative can call up to three team timeouts (up to two per half; before 2012 it was one per half), and may address the scorekeeper, timekeeper, and referees; overtime and shootouts are considered extensions of the second half. Other officials typically include physicians or managers. No official is allowed to enter the playing court without the permission of the referees.
The ball is spherical and must be made either of leather or a synthetic material. It is not allowed to have a shiny or slippery surface. As the ball is intended to be operated by a single hand, its official sizes vary depending on age and gender of the participating teams.
The referees may award a special throw to a team. This usually happens after certain events such as scored goals, off-court balls, turnovers and timeouts. All of these special throws require the thrower to obtain a certain position, and pose restrictions on the positions of all other players. Sometimes the execution must wait for a whistle blow by the referee.
Penalties are given to players, in a progressive format, for fouls that require more punishment than just a free throw. Actions directed mainly at the opponent and not the ball (such as reaching around, holding, pushing, tripping, and jumping into an opponent) as well as contact from the side, from behind a player, or impeding the opponent's counterattack are all considered illegal and are subject to penalty. Any infraction that prevents a clear scoring opportunity will result in a seven-meter penalty shot.
Typically the referee will give a warning yellow card for an illegal action; but, if the contact was particularly dangerous, like striking the opponent in the head, neck or throat, the referee can forego the warning in favour of an immediate two-minute suspension. A player can be warned only once with a yellow card; a third two-minute suspension for the same player results in disqualification.
A red card results in an ejection from the game and a two-minute penalty for the team. A player may receive a red card directly for particularly rough fouls. For instance, any contact from behind during a fast break is now penalized with a red card, as is any deliberate attempt to injure an opponent. A red-carded player has to leave the playing area completely. A player who is disqualified may be substituted with another player after the two-minute penalty is served. A coach or official can also be penalized progressively. Any coach or official who receives a two-minute suspension will have to pull out one of their players for two minutes; however, the player is not the one punished, and can be substituted in again, as the penalty consists of the team playing with one fewer player than the opposing team.
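As a rough illustration, the progressive-penalty ladder described above can be sketched as a small state machine. This is a simplified model, not the full IHF rules, and all names are hypothetical:

```python
class PlayerPenalties:
    """Track one player's progressive punishments during a match."""

    def __init__(self):
        self.warned = False       # a player can be warned only once
        self.suspensions = 0      # two-minute suspensions served

    def punish(self, dangerous=False):
        """Return the sanction for the next illegal action by this player."""
        # A particularly dangerous foul skips the warning entirely.
        if not self.warned and not dangerous:
            self.warned = True
            return "yellow card"
        self.suspensions += 1
        if self.suspensions >= 3:
            return "red card"     # third suspension: disqualification
        return "two-minute suspension"
```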
After the referees award the ball to the opponents for whatever reason, the player currently in possession has to lay it down quickly, or risk a two-minute suspension. Gesticulating or verbally questioning the referee's order, as well as arguing with the officials' decisions, will normally draw a yellow card. If a suspended player protests further, does not walk straight off the field to the bench, or if the referee deems the tempo deliberately slow, that player risks a double yellow card. Illegal substitution (outside of the dedicated area, or if the replacement player enters too early) is also penalized.
Players are typically referred to by the positions they are playing. The positions are always denoted from the view of the respective goalkeeper, so that a defender on the right opposes an attacker on the left. However, not all of the following positions may be occupied depending on the formation or potential suspensions.
Sometimes, the offense uses formations with two pivot players.
There are many variations in defensive formations. Usually, they are described as "n:m" formations, where "n" is the number of players defending at the goal line and "m" the number of players defending more offensive. Exceptions are the 3:2:1 defense and n+m formation (e.g. 5+1), where m players defend some offensive player in man coverage (instead of the usual zone coverage).
Attacks are played with all field players on the side of the defenders. Depending on the speed of the attack, one distinguishes between three attack "waves" with a decreasing chance of success:
The third wave evolves into the normal offensive play when all defenders not only reach the zone, but gain their accustomed positions. Some teams then substitute specialised offence players. However, this implies that these players must play in the defence should the opposing team be able to switch quickly to offence. The latter is another benefit for fast-playing teams.
If the attacking team does not make sufficient progress (eventually releasing a shot on goal), the referees can call passive play, turning control over to the other team. Since about 1995, the referee signals a passive-play warning some time before the actual call by holding one hand up in the air, indicating that the attacking team should release a shot soon. A shot on goal or an infringement leading to a yellow card or two-minute penalty marks the start of a new attack, causing the hand to be taken down; a shot blocked by the defence or a normal free throw does not. Without this rule, it would be easy for an attacking team to stall the game indefinitely, as it is difficult to intercept a pass without at the same time conceding dangerous openings towards the goal.
The usual formations of the defence are 6–0, when all the defending players line up between the six-metre and nine-metre lines to form a wall; the 5–1, when one of the players cruises outside the perimeter, usually targeting the centre forwards, while the other five line up on the six-metre line; and the less common 4–2, when there are two such defenders out front. Very fast teams will also try a 3–3 formation, which is close to a switching man-to-man style. The formations vary greatly from country to country, and reflect each country's style of play. 6–0 is sometimes known as "flat defense", and all other formations are usually called "offensive defense".
Handball teams are usually organised as clubs. On a national level, the clubs are associated in federations which organize matches in leagues and tournaments.
The International Handball Federation (IHF) is the administrative and controlling body for international handball. Handball is an Olympic sport played during the Summer Olympics.
The IHF organizes world championships, held in odd-numbered years, with separate competitions for men and women.
The IHF World Men's Handball Championship 2019 title holders are Denmark. The IHF World Women's Handball Championship 2019 title holders are the Netherlands.
The IHF is composed of five continental federations: Asian Handball Federation, African Handball Confederation, Pan-American Team Handball Federation, European Handball Federation and Oceania Handball Federation. These federations organize continental championships held every second year. Handball is played during the Pan American Games, All-Africa Games, and Asian Games. It is also played at the Mediterranean Games. In addition to continental competitions between national teams, the federations arrange international tournaments between club teams.
The current worldwide attendance record for seven-a-side handball was set on September 6, 2014, during a neutral venue German league game between HSV Hamburg and the Mannheim-based Rhein-Neckar Lions. The matchup drew 44,189 spectators to Commerzbank Arena in Frankfurt, exceeding the previous record of 36,651 set at Copenhagen's Parken Stadium during the 2011 Danish Cup final.
Handball events have been selected as a main motif in numerous collectors' coins. One of the recent samples is the €10 Greek Handball commemorative coin, minted in 2003 to commemorate the 2004 Summer Olympics. On the coin, the modern athlete directs the ball in his hands towards his target, while in the background the ancient athlete is just about to throw a ball, in a game known as cheirosphaira, in a representation taken from a black-figure pottery vase of the Archaic period.
The most recent commemorative coin featuring handball is the British 50 pence coin, part of the series of coins commemorating the London 2012 Olympic Games.
Notes
Hilbert's basis theorem
In mathematics, specifically commutative algebra, Hilbert's basis theorem says that a polynomial ring over a Noetherian ring is Noetherian.
If R is a ring, let R[X] denote the ring of polynomials in the indeterminate X over R. Hilbert proved that if R is "not too large", in the sense that if R is Noetherian, the same must be true for R[X]. Formally,
Hilbert's Basis Theorem. If R is a Noetherian ring, then R[X] is a Noetherian ring.
Corollary. If R is a Noetherian ring, then R[X_1, …, X_n] is a Noetherian ring.
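The corollary follows from the theorem by induction on the number of indeterminates, adjoining one variable at a time:

```latex
% Induction step: a polynomial ring in n variables is a polynomial ring
% in one variable over the polynomial ring in n - 1 variables.
R[X_1, \ldots, X_n] \;\cong\; \bigl( R[X_1, \ldots, X_{n-1}] \bigr)[X_n].
% If R[X_1, \ldots, X_{n-1}] is Noetherian (induction hypothesis),
% then the theorem applied to this coefficient ring shows that
% R[X_1, \ldots, X_n] is Noetherian.
```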
This can be translated into algebraic geometry as follows: every algebraic set over a field can be described as the set of common roots of finitely many polynomial equations. Hilbert proved the theorem (for the special case of polynomial rings over a field) in the course of his proof of finite generation of rings of invariants.
Hilbert produced an innovative proof by contradiction using mathematical induction; his method does not give an algorithm to produce the finitely many basis polynomials for a given ideal: it only shows that they must exist. One can determine basis polynomials using the method of Gröbner bases.
Remark. We will give two proofs; in both, only the "left" case is considered. The proof for the right case is similar.
Suppose 𝔞 ⊆ R[X] is a non-finitely generated left ideal. Then by recursion (using the axiom of dependent choice) there is a sequence of polynomials {f_0, f_1, …} such that if 𝔟_n is the left ideal generated by f_0, …, f_{n−1}, then f_n ∈ 𝔞 ∖ 𝔟_n is of minimal degree. It is clear that {deg(f_0), deg(f_1), …} is a non-decreasing sequence of naturals. Let a_n be the leading coefficient of f_n and let 𝔟 be the left ideal in R generated by a_0, a_1, a_2, …. Since R is Noetherian the chain of ideals
(a_0) ⊂ (a_0, a_1) ⊂ (a_0, a_1, a_2) ⊂ ⋯
must terminate. Thus 𝔟 = (a_0, …, a_{N−1}) for some integer N. So in particular,
a_N = Σ_{i<N} u_i a_i   for some u_0, …, u_{N−1} ∈ R.
Now consider
g = Σ_{i<N} u_i X^{deg(f_N) − deg(f_i)} f_i,
whose leading term is equal to that of f_N; moreover, g ∈ 𝔟_N. However, f_N ∉ 𝔟_N, which means that f_N − g ∈ 𝔞 ∖ 𝔟_N has degree less than f_N, contradicting the minimality.
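To get a concrete feel for the chain-termination step, here is a toy computation for R = ℤ (a Noetherian ring), where the ideal (a_0, …, a_n) equals the principal ideal (gcd(a_0, …, a_n)); the function name is illustrative:

```python
from math import gcd
from functools import reduce

def ideal_chain(leading_coeffs):
    """For R = Z, track the ascending chain (a_0) <= (a_0, a_1) <= ...

    Each ideal (a_0, ..., a_n) in Z is principal, generated by the gcd,
    so the chain is recorded as a list of those generators.
    """
    return [reduce(gcd, leading_coeffs[: n + 1])
            for n in range(len(leading_coeffs))]

# Hypothetical leading coefficients a_n of the sequence f_n in the proof:
chain = ideal_chain([12, 18, 8, 21, 35])
# The generators are 12, 6, 2, 1, 1: the chain (12) < (6) < (2) < (1) = (1)
# stabilizes, as Noetherianity of Z guarantees.
```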
Let 𝔞 ⊆ R[X] be a left ideal. Let 𝔟 be the set of leading coefficients of members of 𝔞. This is obviously a left ideal over R, and so is finitely generated by the leading coefficients of finitely many members of 𝔞; say f_0, …, f_{N−1}. Let d be the maximum of the set {deg(f_0), …, deg(f_{N−1})}, and let 𝔟_k be the set of leading coefficients of members of 𝔞 whose degree is ≤ k. As before, the 𝔟_k are left ideals over R, and so are finitely generated by the leading coefficients of finitely many members of 𝔞, say
f^{(k)}_0, …, f^{(k)}_{N(k)−1},
with degrees ≤ k. Now let 𝔞* be the left ideal generated by
{ f_i : i < N } ∪ { f^{(k)}_j : k < d, j < N(k) }.
We have 𝔞* ⊆ 𝔞 and claim also 𝔞 ⊆ 𝔞*. Suppose for the sake of contradiction this is not so. Then let h ∈ 𝔞 ∖ 𝔞* be of minimal degree, and denote its leading coefficient by a. By subtracting from h a suitable combination of the generators of 𝔞* with matching leading term (built from the f_i if deg(h) ≥ d, and from the f^{(k)}_j with k = deg(h) otherwise), one obtains an element of 𝔞 ∖ 𝔞* of smaller degree, a contradiction.
Thus our claim holds, and 𝔞 = 𝔞*, which is finitely generated.
Note that the only reason we had to split into two cases was to ensure that the powers of X multiplying the factors were non-negative in the constructions.
Let formula_1 be a Noetherian commutative ring. Hilbert's basis theorem has some immediate corollaries.
The Mizar project has completely formalized and automatically checked a proof of Hilbert's basis theorem in the HILBASIS file.
Heterocyclic compound
A heterocyclic compound or ring structure is a cyclic compound that has atoms of at least two different elements as members of its ring(s). Heterocyclic chemistry is the branch of organic chemistry dealing with the synthesis, properties, and applications of these heterocycles.
Examples of heterocyclic compounds include all of the nucleic acids, the majority of drugs, most biomass (cellulose and related materials), and many natural and synthetic dyes. 59% of US FDA-approved drugs contain nitrogen heterocycles.
Although heterocyclic chemical compounds may be inorganic compounds or organic compounds, most contain at least one carbon. While atoms that are neither carbon nor hydrogen are normally referred to in organic chemistry as heteroatoms, this is usually in comparison to the all-carbon backbone. But this does not prevent a compound such as borazine (which has no carbon atoms) from being labelled "heterocyclic". IUPAC recommends the Hantzsch-Widman nomenclature for naming heterocyclic compounds.
Heterocyclic compounds can be usefully classified based on their electronic structure. The saturated heterocycles behave like the acyclic derivatives. Thus, piperidine and tetrahydrofuran are conventional amines and ethers, with modified steric profiles. Therefore, the study of heterocyclic chemistry focuses especially on unsaturated derivatives, and the preponderance of work and applications involves unstrained 5- and 6-membered rings. Included are pyridine, thiophene, pyrrole, and furan. Another large class of heterocycles refers to those fused to benzene rings. For example, the fused benzene analogs of pyridine, thiophene, pyrrole, and furan are quinoline, benzothiophene, indole, and benzofuran, respectively. The fusion of two benzene rings gives rise to a third large family of compounds. Analogs of the previously mentioned heterocycles for this third family of compounds are acridine, dibenzothiophene, carbazole, and dibenzofuran, respectively. The unsaturated rings can be classified according to the participation of the heteroatom in the conjugated pi system.
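The benzo-fusion relationships described above can be summarised as a small lookup table. This is an illustrative sketch (the names `FUSED_ANALOGS` and `benzo_analog` are hypothetical), using only the examples named in the text:

```python
# Benzo-fused analogs of the common unsaturated heterocycles, as listed
# above: first the analog with one fused benzene ring, then with two.
FUSED_ANALOGS = {
    "pyridine":  ("quinoline",      "acridine"),
    "thiophene": ("benzothiophene", "dibenzothiophene"),
    "pyrrole":   ("indole",         "carbazole"),
    "furan":     ("benzofuran",     "dibenzofuran"),
}

def benzo_analog(parent, fused_rings=1):
    """Return the analog of `parent` with one or two fused benzene rings."""
    return FUSED_ANALOGS[parent][fused_rings - 1]
```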
Heterocycles with three atoms in the ring are higher in energy and more reactive because of ring strain. Those containing one heteroatom are, in general, stable. Those with two heteroatoms are more likely to occur as reactive intermediates. The C-X-C bond angles (where X is a heteroatom) in oxiranes and aziridines are very close to 60° and the peripheral H-C-H bond angles are near to 180°.
The 5-membered ring compounds containing "two" heteroatoms, at least one of which is nitrogen, are collectively called the azoles. Thiazoles and isothiazoles contain a sulfur and a nitrogen atom in the ring. Dithiolanes have two sulfur atoms.
A large group of 5-membered ring compounds with "three" or more heteroatoms also exists. One example is the class of dithiazoles, which contain two sulfur atoms and one nitrogen atom.
Carborazine is a six-membered ring with two nitrogen heteroatoms and two boron heteroatoms.
The hypothetical compound with six nitrogen heteroatoms would be hexazine.
Borazine is a six-membered ring with three nitrogen heteroatoms and three boron heteroatoms.
With 7-membered rings, the heteroatom must be able to provide an empty pi orbital (e.g., boron) for "normal" aromatic stabilization to be available; otherwise, homoaromaticity may be possible. Compounds with one heteroatom include:
Those with two heteroatoms include:
Borazocine is an eight-membered ring with four nitrogen heteroatoms and four boron heteroatoms.
Heterocyclic rings systems that are formally derived by fusion with other rings, either carbocyclic or heterocyclic, have a variety of common and systematic names. For example, with the benzo-fused unsaturated nitrogen heterocycles, pyrrole provides indole or isoindole depending on the orientation. The pyridine analog is quinoline or isoquinoline. For azepine, benzazepine is the preferred name. Likewise, the compounds with two benzene rings fused to the central heterocycle are carbazole, acridine, and dibenzoazepine. Thienothiophene are the fusion of two thiophene rings. Phosphaphenalenes are a tricyclic phosphorus-containing heterocyclic system derived from the carbocycle phenalene.
The history of heterocyclic chemistry began in the 1800s, in step with the development of organic chemistry. Some noteworthy developments:
1818: Brugnatelli isolates alloxan from uric acid
1832: Dobereiner produces furfural (a furan) by treating starch with sulfuric acid
1834: Runge obtains pyrrole ("fiery oil") by dry distillation of bones
1906: Friedlander synthesizes indigo dye, allowing synthetic chemistry to displace a large agricultural industry
1936: Treibs isolates chlorophyll derivatives from crude oil, explaining the biological origin of petroleum.
1951: Chargaff's rules are described, highlighting the role of heterocyclic compounds (purines and pyrimidines) in the genetic code.
Heterocyclic compounds are pervasive in many areas of life sciences and technology. Many drugs are heterocyclic compounds. | https://en.wikipedia.org/wiki?curid=13734 |
Harry Connick Jr.
Joseph Harry Fowler Connick Jr. (born September 11, 1967) is an American singer, pianist, composer, actor, and television host. He has sold over 28 million albums worldwide. Connick is ranked among the top 60 best-selling male artists in the United States by the Recording Industry Association of America, with 16 million in certified sales. He has had seven top 20 US albums, and ten number-one US jazz albums, earning more number-one albums than any other artist in US jazz chart history.
Connick's best-selling album in the United States is his Christmas album "When My Heart Finds Christmas" (1993). His highest-charting album is his release "Only You" (2004), which reached No. 5 in the US and No. 6 in Britain. He has won three Grammy Awards and two Emmy Awards. He played Debra Messing's character Grace Adler's husband, Leo Markus, on the NBC sitcom "Will & Grace" from 2002 to 2006.
Connick began his acting career playing a tail gunner in the World War II film "Memphis Belle" (1990). He played a serial killer in "Copycat" (1995), before being cast as a fighter pilot in the blockbuster "Independence Day" (1996). Connick's first role as a leading man was in "Hope Floats" (1998) with Sandra Bullock. His first thriller film since "Copycat" was "Basic" (2003) with John Travolta. Additionally, he played a violent ex-husband in "Bug" before two romantic comedies: "P.S. I Love You" (2007) and, as the leading man, "New in Town" (2009) with Renée Zellweger. In 2011, he appeared in the family film "Dolphin Tale" as Dr. Clay Haskett and in its 2014 sequel.
Harry Connick Jr. was born and raised in New Orleans, Louisiana. His mother, Anita Frances Livingston (née Levy) was a lawyer and judge in New Orleans. His father, Joseph Harry Fowler Connick Sr., was the district attorney of Orleans Parish from 1973 to 2003. His parents also owned a record store. Connick's father is a Catholic of Irish, English, and German ancestry. Connick's mother, who died from ovarian cancer, was Jewish (her parents had emigrated from Minsk and Vienna). Connick and his sister, Suzanna, were raised in the Lakeview neighborhood of New Orleans.
Connick's musical talents came to the fore when he started learning keyboards at age three, playing publicly at age five, and recording with a local jazz band at ten. When he was nine years old, Connick performed Beethoven's Piano Concerto No. 3, Op. 37, with the New Orleans Symphony Orchestra (now the Louisiana Philharmonic). Later he played a duet with Eubie Blake at the Royal Orleans Esplanade Lounge in New Orleans. The song was "I'm Just Wild About Harry". This was recorded for a Japanese documentary called "Jazz Around the World". The clip was also shown in a 1999 Bravo special called "Worlds of Harry Connick, Junior". His musical talents were developed at the New Orleans Center for Creative Arts and under the tutelage of Ellis Marsalis Jr. and James Booker.
Connick attended Jesuit High School, Isidore Newman School, Lakeview School, and the New Orleans Center for Creative Arts, all in New Orleans. Following an unsuccessful attempt to study jazz academically, and having given recitals in the classical and jazz piano programs at Loyola University, Connick moved to the 92nd Street YMHA in New York City to study at Hunter College and the Manhattan School of Music. There he met Columbia Records executive, Dr. George Butler, who persuaded him to sign with Columbia. His first record, "Harry Connick Jr.", was a mainly instrumental album of standards. He soon acquired a reputation in jazz because of extended stays at high-profile New York venues. His next album, "20", featured his vocals and added to this reputation.
With Connick's reputation growing, director Rob Reiner asked him to provide a soundtrack for his romantic comedy, "When Harry Met Sally..." (1989), starring Meg Ryan and Billy Crystal. The soundtrack consisted of several standards, including "It Had to Be You", "Let's Call the Whole Thing Off" and "Don't Get Around Much Anymore", and achieved double-platinum status in the United States. He won his first Grammy Award for Best Jazz Male Vocal Performance for his work on the soundtrack.
Connick made his screen debut in "Memphis Belle" (1990), a fictional story about a B-17 Flying Fortress bomber crew in World War II. In that year he began a two-year world tour. In addition, he released two albums in July 1990: the instrumental jazz trio album "Lofty's Roach Souffle" and a big-band album of mostly original songs titled "We Are in Love", which also went double platinum. "We Are in Love" earned him his second consecutive Grammy for Best Jazz Male Vocal.
"Promise Me You'll Remember", his contribution to the "Godfather III" soundtrack, was nominated for both an Academy Award and a Golden Globe Award in 1991. In a year of recognition, he was also nominated for an Emmy Award for Best Performance in a Variety Special for his PBS special "Swingin' Out Live", which was also released as a video. In October 1991, he released his third consecutive multi-platinum album, "Blue Light, Red Light", on which he wrote and arranged the songs. Also in October 1991, he starred in "Little Man Tate", directed by Jodie Foster, playing the friend of a child prodigy who goes to college.
In November 1992, Connick released "25", a solo piano collection of standards that again went platinum. He also re-released the album "Eleven". Connick contributed "A Wink and a Smile" to the "Sleepless in Seattle" soundtrack, released in 1993. His multi-platinum album of holiday songs, "When My Heart Finds Christmas", was the best-selling Christmas album in 1993.
In 1994, Connick decided to branch out. He released "She", an album of New Orleans funk that also went platinum. In addition, he released a song called "(I Could Only) Whisper Your Name" for the soundtrack of "The Mask", starring Jim Carrey, which is his most successful single in the United States to date.
Connick took his funk music on a tour of the United Kingdom in 1994, an effort that did not please some of his fans, who were expecting a jazz crooner. Connick also went on a tour of the People's Republic of China in 1995, playing at the Shanghai Center Theatre. The performance was televised live in China for what became known as the Shanghai Gumbo special. In his third film, "Copycat" (1995), Connick played a serial killer who terrorizes a psychiatrist played by Sigourney Weaver; the film also starred Holly Hunter. The following year, he released his second funk album, "Star Turtle", which did not sell as well as previous albums, although it did reach No. 38 on the charts. However, he appeared in the most successful movie of 1996, "Independence Day", with Will Smith and Jeff Goldblum.
For his 1997 release "To See You", Connick recorded original love songs, touring the United States and Europe with a full symphony orchestra backing him and his piano in each city. As part of his tour, he played at the Nobel Peace Prize Concert in Oslo, Norway, with his final concert of that tour in Paris being recorded for a Valentine's Day special on PBS in 1998. He also continued his film career, starring in "Excess Baggage" (1997) opposite Alicia Silverstone and Benicio del Toro.
In May 1998, he had his first leading role in director Forest Whitaker's "Hope Floats", with Sandra Bullock as his female lead. He released "Come By Me", his first album of big band music in eight years, in 1999, and embarked on a world tour visiting the United States, Europe, Japan and Australia. In addition, he provided the voice of Dean McCoppin in the animated film "The Iron Giant".
Connick wrote the score for Susan Stroman's Broadway musical "Thou Shalt Not", based on Émile Zola's novel "Thérèse Raquin", in 2000; it premiered in 2001. His music and lyrics earned a Tony Award nomination. He was also the narrator of the film "My Dog Skip", released in that year.
In March 2001, Connick starred in a television production of "South Pacific" with Glenn Close, televised on the ABC network. He also starred in his twelfth movie, "Mickey", featuring a screenplay by John Grisham that same year. In October 2001, he again released two albums: "Songs I Heard", featuring big band re-workings of children's show themes, and "30", featuring Connick on piano with guest appearances by several other musical artists. "Songs I Heard" won Connick another Grammy for Best Traditional Pop Album and he toured performing songs from the album, holding matinees at which each parent had to be accompanied by a child.
In 2002, he received a patent for a "system and method for coordinating music display among players in an orchestra." Connick appeared as Grace Adler's boyfriend (and later husband) Leo Markus on the NBC sitcom "Will & Grace" from 2002 to 2006.
In July 2003, Connick released his first instrumental album in fifteen years, "Other Hours: Connick on Piano, Volume 1". It was released on Branford Marsalis' new label, Marsalis Music, and led to a short tour of nightclubs and small theaters. Connick also appeared in the film "Basic". In October 2003, he released his second Christmas album, "Harry for the Holidays", which went gold and reached No. 12 on the "Billboard" 200 albums chart. He also had a television special on NBC featuring Whoopi Goldberg, Nathan Lane, Marc Anthony and Kim Burrell. "Only You", his seventeenth album for Columbia Records, was released in February 2004. A collection of 1950s and 1960s ballads, "Only You" went top ten on both sides of the Atlantic and was certified gold in the United States in March 2004. The "Only You" tour with big band travelled across America and Australia, with a short trip to Asia. "Harry for the Holidays" was certified platinum in November 2004. A music DVD, "Harry Connick Jr.: Only You in Concert", was released in March 2004, after it had first aired as a "Great Performances" special on PBS. The special won him an Emmy Award for Outstanding Music Direction. The DVD received Gold and Platinum Long Form Music Video awards from the RIAA in November 2005.
An animated holiday special, "The Happy Elf", aired on NBC in December 2005, with Connick as the composer, the narrator, and one of the executive producers. Shortly after, it was released on DVD. The holiday special was based on his original song "The Happy Elf", from his 2003 album "Harry for the Holidays". Another album from Marsalis Music was recorded in 2005, "", a duo album with Harry Connick Jr. on piano together with Branford Marsalis on saxophone. A music DVD, "A Duo Occasion", was filmed at the Ottawa International Jazz Festival 2005 in Canada, and released in November 2005.
He appeared in another episode of NBC sitcom "Will & Grace" in November 2005, and appeared in an additional three episodes in 2006.
"Bug", a film directed by William Friedkin, is a psychological thriller filmed in 2005, starring Connick, Ashley Judd, and Michael Shannon. The film was released in 2007. He starred in the Broadway revival of "The Pajama Game", produced by the Roundabout Theater Company, along with Michael McKean and Kelli O'Hara, at the "American Airlines Theatre" in 2006. It ran from February 23 to June 17, 2006, including five benefit performances running from June 13 to 17. The "Pajama Game" cast recording was nominated for a Grammy, after being released as part of Connick's double disc album Harry on Broadway, Act I.
He hosted The Weather Channel's miniseries "100 Biggest Weather Moments", which aired in 2007. He appeared in a documentary released in November 2007, and sat in on piano on Bob French's 2007 album "Marsalis Music Honors Series: Bob French". He appeared in the film "P.S. I Love You", released in December 2007. A third album in the "Connick on Piano" series, "Chanson du Vieux Carré", was released in 2007, and Connick received two Grammy nominations for the track "Ash Wednesday" at the 2008 Grammy Awards. "Chanson du Vieux Carré" was released simultaneously with the album "Oh, My NOLA". Connick toured North America and Europe in 2007, and Asia and Australia in 2008, as part of his My New Orleans Tour. He did the arrangements for, wrote a couple of songs for, and sang a duet on Kelli O'Hara's album released in May 2008. He was also the featured singer at the Concert of Hope immediately preceding Pope Benedict XVI's Mass at Yankee Stadium in April 2008. He had the starring role of Dr. Dennis Slamon in the Lifetime television film "Living Proof" (2008). His third Christmas album, "What a Night!", was released in November 2008.
Connick has a broad knowledge of musical genres and vocalists, including gospel music. One of his favorite gospel artists is Stellar Award winner and Grammy-nominated artist Kim Burrell of Houston, Texas: "And when Harry Connick Jr. assembled a symphony orchestra for Pope Benedict XVI's appearance at Yankee Stadium in 2008, he wanted Burrell on vocals".
The film "New in Town" starring Connick and Renée Zellweger, began filming in January 2008, and was released in January 2009. Connick's album "Your Songs" was released on CD, September 22, 2009. In contrast to Connick's previous albums, this album is a collaboration with a record company producer, the multiple Grammy Award winning music executive Clive Davis.
Connick starred in the Broadway revival of "On a Clear Day You Can See Forever", which opened at the St. James Theatre in November 2011 in previews.
Connick appeared on the May 4, 2010 episode of "American Idol" season 9, where he acted as a mentor for the top 5 finalists. He appeared again the next night on May 5 to perform "And I Love Her."
On January 6, 2012, NBC president Robert Greenblatt announced at the Television Critics Association winter press tour that Connick had been cast in a four-episode arc of NBC's long-running legal drama "Law & Order: Special Victims Unit" as new Executive ADA David Haden, a prosecutor who is assigned a case with Detective Olivia Benson (Mariska Hargitay).
On June 11, 2013, Connick released a new album of all original music titled "Every Man Should Know". Connick debuted the title track live on the May 2, 2013, episode of "American Idol" and appeared on "The Ellen DeGeneres Show" the following week to discuss his new project. A 2013 US summer tour was announced in support of the album.
Connick returned to "American Idol" to mentor the top four of season 12. He performed "Every Man Should Know" on the results show the following night.
On September 3, 2013, "American Idol" officially announced that Connick would be part of the judging panel for season 13, alongside former judge Jennifer Lopez and returning judge Keith Urban.
"Angels Sing", a family Christmas movie released in November 2013 by Lionsgate, afforded Connick an onscreen collaboration with fellow musician Willie Nelson. The two wrote a special song exclusively for the movie. Shot in Austin, Texas, "Angels Sing" features actor/musicians Connie Britton, Lyle Lovett, and Kris Kristofferson and is directed by Tim McCanlies, who had previously worked with Connick on "The Iron Giant".
A one-hour weekday daytime talk show starring Connick and named "Harry" debuted on September 12, 2016.
In January 2019, it was announced that Connick was hired by piano instruction software company Playground Sessions as a video instructor.
On October 25, 2019, he released a new album of Cole Porter compositions from the Great American Songbook, rearranged by Connick himself, including "Anything Goes" and "You Do Something to Me". After selecting the songs, and writing and orchestrating the arrangements, he assembled and conducted the orchestra, which features his longtime touring band with additional horns and a full string section.
Along with his album, Connick announced his return to Broadway on September 16, 2019 with "Harry Connick Jr. — A Celebration of Cole Porter", a multimedia celebration of the Cole Porter songbook. The production was conceived and directed by Connick himself with the addition of theatrical and film elements accompanied by a company of dancers and an onstage orchestra.
The following musicians have toured as the Harry Connick Jr. Big Band since its inception in 1990:
Connick, a New Orleans native, is a founder of the Krewe of Orpheus, a music-based New Orleans krewe, taking its name from Orpheus of classical mythology. The Krewe of Orpheus parades on St. Charles Avenue and Canal Street in New Orleans on Lundi Gras (Fat Monday), the day before Mardi Gras (Fat Tuesday).
On September 2, 2005, Connick helped to organize, and appeared in, the NBC-sponsored live telethon concert, "A Concert for Hurricane Relief", for relief in the wake of Hurricane Katrina. He spent several days touring the city to draw attention to the plight of citizens stranded at the Ernest N. Morial Convention Center and other places. At the concert he paired with host Matt Lauer, and entertainers including Tim McGraw, Faith Hill, Kanye West, Mike Myers, and John Goodman.
On September 6, 2005, Connick was made honorary chair of Habitat for Humanity's Operation Home Delivery, a long-term rebuilding plan for families who survived Hurricane Katrina in New Orleans and along the Gulf Coast. His actions in New Orleans earned him a Jefferson Award for Public Service.
Connick's albums "Oh, My NOLA" and "Chanson du Vieux Carré" were released in 2007, with a following tour called the My New Orleans Tour.
Connick and Branford Marsalis devised an initiative to help restore New Orleans' musical heritage. Working with Connick and Marsalis, Habitat for Humanity and New Orleans Area Habitat for Humanity announced plans on December 6, 2005, for a Musicians' Village in New Orleans. The Musicians' Village includes Habitat-constructed homes, with an "Ellis Marsalis Center for Music" as the area's centerpiece. The Habitat-built homes provide musicians, and anyone else who qualifies, the opportunity to buy decent, affordable housing.
In 2012, Connick and Marsalis received the S. Roger Horchow Award for Greatest Public Service by a Private Citizen, an award given out annually by Jefferson Awards.
On April 16, 1994, Connick married former Victoria's Secret model Jill Goodacre, originally from Texas, at the St. Louis Cathedral in New Orleans. Jill is the daughter of sculptor Glenna Goodacre, originally from Lubbock and later Santa Fe, New Mexico. The song "Jill", on the album "Blue Light, Red Light" (1991), is about her. They have three daughters: Georgia Tatum (born April 17, 1996), Sarah Kate (born September 12, 1997), and Charlotte (born June 26, 2002). The family resides in New Canaan, Connecticut, and New Orleans, Louisiana. Connick is a practicing Roman Catholic. In 2011, Connick wrote his daughter Kate's debut song, "A Lot Like Me". The song was released to celebrate the debut of American Girl's newest historical characters, Cecile Rey and Marie Grace Gardner, and is available on iTunes; the proceeds went towards the Ellis Marsalis Center for Music. In 2014, Cecile and Marie Grace were archived with Ruthie and Ivy to make room for the return of Samantha and BeForever.
Connick is a supporter of his hometown NFL franchise, the New Orleans Saints. He was caught on camera at Super Bowl XLIV in Miami, which the Saints won, by the television crew of "The Ellen DeGeneres Show" during the post-game celebrations. Ellen's mother Betty was on the sidelines watching the festivities when she spotted Connick in the stands sporting a Drew Brees jersey.
Connick was arrested by the Port Authority Police in December 1992 and charged with having a 9mm pistol in his possession at JFK International Airport. After spending a day in jail, he agreed to make a public-service television commercial warning against breaking gun laws. The court agreed to drop all charges if Connick stayed out of trouble for six months. | https://en.wikipedia.org/wiki?curid=13743 |
Hydrostatic shock
Hydrostatic shock is the controversial concept that a penetrating projectile (such as a bullet) can produce a pressure wave that causes "remote neural damage", "subtle damage in neural tissues" and/or "rapid incapacitating effects" in living targets. It has also been suggested that pressure wave effects can cause indirect bone fractures at a distance from the projectile path, although it was later demonstrated that indirect bone fractures are caused by temporary cavity effects (strain placed on the bone by the radial tissue displacement produced by the temporary cavity formation).
Proponents of the concept argue that hydrostatic shock can produce remote neural damage and produce incapacitation more quickly than blood loss effects. In arguments about the differences in stopping power between calibers and between cartridge models, proponents of cartridges that are "light and fast" (such as the 9×19mm Parabellum) versus cartridges that are "slow and heavy" (such as the .45 ACP) often refer to this phenomenon.
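The "light and fast" versus "slow and heavy" contrast can be made concrete with a back-of-the-envelope comparison of kinetic energy (E = ½mv²) and momentum (p = mv). The bullet masses and muzzle velocities below are illustrative, typical factory-load figures assumed for this sketch, not values given in the article:

```python
# Rough comparison of two handgun cartridges often cited in the
# "light and fast" vs. "slow and heavy" debate.
# Masses and velocities are illustrative, typical published figures.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

def momentum(mass_kg: float, velocity_ms: float) -> float:
    """Momentum in kg*m/s: p = m * v."""
    return mass_kg * velocity_ms

# 9x19mm Parabellum: ~8.0 g bullet at ~360 m/s ("light and fast")
# .45 ACP:           ~14.9 g bullet at ~260 m/s ("slow and heavy")
loads = {"9x19mm": (0.0080, 360.0), ".45 ACP": (0.0149, 260.0)}

for name, (m, v) in loads.items():
    print(f"{name}: E = {kinetic_energy(m, v):.0f} J, "
          f"p = {momentum(m, v):.2f} kg*m/s")
```

Under these assumed loads the two rounds deliver comparable kinetic energy (roughly 500 J each) while the heavier bullet carries more momentum, which is why the debate turns on how energy is transferred to tissue rather than on how much is carried.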
Martin Fackler has argued that sonic pressure waves do not cause tissue disruption and that temporary cavity formation is the actual cause of tissue disruption mistakenly ascribed to sonic pressure waves. One review noted that strong opinion divided papers on whether the pressure wave contributes to wound injury. It ultimately concluded that no "conclusive evidence could be found for permanent pathological effects produced by the pressure wave".
In the scientific literature, the first discussion of pressure waves created when a bullet hits a living target was presented by E. Newton Harvey and his research group at Princeton University in 1947:
Frank Chamberlin, a World War II trauma surgeon and ballistics researcher, noted remote pressure wave effects. Col. Chamberlin described what he called "explosive effects" and "hydraulic reaction" of bullets in tissue. "...liquids are put in motion by ‘shock waves’ or hydraulic effects... with liquid filled tissues, the effects and destruction of tissues extend in all directions far beyond the wound axis". He avoided the ambiguous use of the term "shock" because it can refer to either a specific kind of pressure wave associated with explosions and supersonic projectiles or to a medical condition in the body.
Col. Chamberlin recognized that many theories have been advanced in wound ballistics. During World War II he commanded an 8,500-bed hospital center that treated over 67,000 patients during the fourteen months that he operated it. P.O. Ackley estimates that 85% of the patients were suffering from gunshot wounds. Col. Chamberlin spent many hours interviewing patients as to their reactions to bullet wounds. He conducted many live animal experiments after his tour of duty. On the subject of wound ballistics theories, he wrote:
Other World War II era scientists noted remote pressure wave effects in the peripheral nerves. There was support for the idea of remote neural effects of ballistic pressure waves in the medical and scientific communities, but the phrase "’hydrostatic shock’" and similar phrases including "shock" were used mainly by gunwriters (such as Jack O'Conner) and the small arms industry (such as Roy Weatherby, and Federal "Hydra-Shok.")
Dr. Martin Fackler, a Vietnam-era trauma surgeon, wound ballistics researcher, a Colonel in the U.S. Army and the head of the Wound Ballistics Laboratory for the U.S. Army's Medical Training Center, Letterman Institute, claimed that hydrostatic shock had been disproved and that the assertion that a pressure wave plays a role in injury or incapacitation is a myth. Others expressed similar views.
Dr. Fackler based his argument on the lithotriptor, a tool commonly used to break up kidney stones. The lithotriptor uses sonic pressure waves which are stronger than those caused by most handgun bullets, yet it produces no damage to soft tissues whatsoever. Hence, Fackler argued, ballistic pressure waves cannot damage tissue either.
Dr. Fackler claimed that a study of rifle bullet wounds in Vietnam (Wound Data and Munitions Effectiveness Team) found "no cases of bones being broken, or major vessels torn, that were not hit by the penetrating bullet. In only two cases, an organ that was not hit (but was within a few cm of the projectile path), suffered some disruption." Dr. Fackler cited a personal communication with R. F. Bellamy. However, Bellamy's published findings the following year estimated that 10% of fractures in the data set might be due to indirect injuries, and one specific case is described in detail (pp. 153–154). In addition, the published analysis documents five instances of abdominal wounding in cases where the bullet did not penetrate the abdominal cavity (pp. 149–152), a case of lung contusion resulting from a hit to the shoulder (pp. 146–149), and a case of indirect effects on the central nervous system (p. 155). Fackler's critics argue that this evidence does not contradict distant injuries, as Fackler claimed; rather, the WDMET data from Vietnam actually supports their occurrence.
A summary of the debate was published in 2009 as part of a "Historical Overview of Wound Ballistics Research."
The Wound Data and Munitions Effectiveness Team (WDMET) gathered data on wounds sustained during the Vietnam War. In their analysis of this data published in the "Textbook of Military Medicine", Ronald Bellamy and Russ Zajtchuck point out a number of cases which seem to be examples of distant injuries. Bellamy and Zajtchuck describe three mechanisms of distant wounding due to pressure transients: (1) stress waves, (2) shear waves, and (3) a vascular pressure impulse.
After citing Harvey's conclusion that "stress waves probably do not cause any tissue damage" (p. 136), Bellamy and Zajtchuck express their view that Harvey's interpretation might not be definitive because they write "the possibility that stress waves from a penetrating projectile might also cause tissue damage cannot be ruled out." (p. 136) The WDMET data includes a case of a lung contusion resulting from a hit to the shoulder. The caption to Figure 4-40 (p. 149) says, "The pulmonary injury may be the result of a stress wave." They describe the possibility that a hit to a soldier's trapezius muscle caused temporary paralysis due to "the stress wave passing through the soldier's neck indirectly [causing] cervical cord dysfunction." (p. 155)
In addition to stress waves, Bellamy and Zajtchuck describe shear waves as a possible mechanism of indirect injuries in the WDMET data. They estimate that 10% of bone fractures in the data may be the result of indirect injuries, that is, bones fractured by the bullet passing close to the bone without a direct impact. A Chinese experiment is cited which provides a formula estimating how pressure magnitude decreases with distance. Together with the difference between strength of human bones and strength of the animal bones in the Chinese experiment, Bellamy and Zajtchuck use this formula to estimate that assault rifle rounds "passing within a centimeter of a long bone might very well be capable of causing an indirect fracture." (p. 153) Bellamy and Zajtchuck suggest the fracture in Figures 4-46 and 4-47 is likely an indirect fracture of this type. Damage due to shear waves extends to even greater distances in abdominal injuries in the WDMET data. Bellamy and Zajtchuck write, "The abdomen is one body region in which damage from indirect effects may be common." (p. 150) Injuries to the liver and bowel shown in Figures 4-42 and 4-43 are described, "The damage shown in these examples extends far beyond the tissue that is likely to direct contact with the projectile." (p. 150)
In addition to providing examples from the WDMET data of indirect injury due to propagating shear and stress waves, Bellamy and Zajtchuck express openness to the idea that pressure transients propagating via blood vessels can cause indirect injuries. "For example, pressure transients arising from an abdominal gunshot wound might propagate through the vena cavae and jugular venous system into the cranial cavity and cause a precipitous rise in intracranial pressure there, with attendant transient neurological dysfunction." (p. 154) No examples of this injury mechanism are presented from the WDMET data, however, and the authors suggest the need for additional studies, writing, "Clinical and experimental data need to be gathered before such indirect injuries can be confirmed." Distant injuries of this nature were later confirmed in the experimental data of Swedish and Chinese researchers, in the clinical findings of Krajsa, and in autopsy findings from Iraq.
Proponents of the concept point to human autopsy results demonstrating brain hemorrhaging from fatal hits to the chest, including cases with handgun bullets. Thirty-three cases of fatal penetrating chest wounds by a single bullet were selected from a much larger set by excluding all other traumatic factors, including past history.
An 8-month study in Iraq performed in 2010 and published in 2011 reports on autopsies of 30 gunshot victims struck with high-velocity (greater than 2500 fps) rifle bullets. The authors determined that the lungs and chest are the most susceptible to distant wounding, followed by the abdomen. The study noted that the "sample size was so small [too small] to reach the level of statistical significance". Nevertheless, the authors conclude:
A shock wave can be created when fluid is rapidly displaced by an explosive or projectile. Tissue behaves similarly enough to water that a bullet impact can create a sonic pressure wave with a substantial peak pressure.
Duncan MacPherson, a former member of the International Wound Ballistics Association and author of the book, Bullet Penetration, claimed that shock waves cannot result from bullet impacts with tissue. In contrast, Brad Sturtevant, a leading researcher in shock wave physics at Caltech for many decades, found that shock waves can result from handgun bullet impacts in tissue. Other sources indicate that ballistic impacts can create shock waves in tissue.
Blast and ballistic pressure waves have physical similarities. Prior to wave reflection, they both are characterized by a steep wave front followed by a nearly exponential decay at close distances. They have similarities in how they cause neural effects in the brain. In tissue, both types of pressure waves have similar magnitudes, duration, and frequency characteristics. Both have been shown to cause damage in the hippocampus. It has been hypothesized that both reach the brain from the thoracic cavity via major blood vessels.
For example, Ibolja Cernak, a leading researcher in blast wave injury at the Applied Physics Laboratory at Johns Hopkins University, hypothesized, "alterations in brain function following blast exposure are induced by kinetic energy transfer of blast overpressure via great blood vessels in abdomen and thorax to the central nervous system." This hypothesis is supported by observations of neural effects in the brain from localized blast exposure focused on the lungs in experiments in animals.
"Hydrostatic shock" expresses the idea that organs can be damaged by the pressure wave in addition to damage from direct contact with the penetrating projectile. If one interprets the "shock" in the term "hydrostatic shock" to refer to the physiological effects rather than the physical wave characteristics, the question of whether the pressure waves satisfy the definition of "shock wave" is unimportant. One can then consider the weight of scientific evidence and the various claims regarding the ability of a ballistic pressure wave to create tissue damage and incapacitation in living targets.
A number of papers describe the physics of ballistic pressure waves created when a high-speed projectile enters a viscous medium. These results show that ballistic impacts produce pressure waves that propagate at close to the speed of sound.
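Since these waves travel at roughly the speed of sound in tissue, a one-line estimate shows how quickly a pressure wave could traverse the body. Both numbers below are illustrative assumptions (soft tissue carries sound at approximately the speed of sound in water, about 1540 m/s; the thigh-to-brain path length is a rough figure for an adult):

```python
# Transit-time estimate for a pressure wave travelling from a thigh
# wound to the brain. Both values are illustrative assumptions.

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, approximate for soft tissue (close to water)
THIGH_TO_BRAIN = 1.0            # m, rough path length in an adult

transit_time_ms = THIGH_TO_BRAIN / SPEED_OF_SOUND_TISSUE * 1000.0
print(f"approximate transit time: {transit_time_ms:.2f} ms")
```

The result is well under a millisecond, consistent with the near-instantaneous remote effects discussed in the experimental literature cited below.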
Lee et al. present an analytical model showing that unreflected ballistic pressure waves are well approximated by an exponential decay, which is similar to blast pressure waves. Lee et al. note the importance of the energy transfer:
The rigorous calculations of Lee et al. require knowing the drag coefficient and frontal area of the penetrating projectile at every instant of the penetration. Since this is not generally possible with expanding handgun bullets, Courtney and Courtney developed a model for estimating the peak pressure waves of handgun bullets from the impact energy and penetration depth in ballistic gelatin. This model agrees with the more rigorous approach of Lee et al. for projectiles where they can both be applied. For expanding handgun bullets, the peak pressure wave magnitude is proportional to the bullet's kinetic energy divided by the penetration depth.
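The proportionality just described can be sketched numerically. This is a minimal illustration of the scaling, not the published Courtney and Courtney model: the proportionality constant `k` and the example bullet figures are placeholder assumptions, not values from the sources above.

```python
def kinetic_energy_joules(mass_kg, velocity_mps):
    """Impact kinetic energy E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_mps ** 2

def peak_pressure_estimate(energy_joules, penetration_m, k=1.0):
    """Peak pressure-wave magnitude, taken as proportional to E / d.

    k is a placeholder constant; the actual value depends on the
    model's assumptions about bullet geometry and the distance from
    the wound channel, which are not given in the text above.
    """
    return k * energy_joules / penetration_m

# Hypothetical example: an 8 g bullet at 360 m/s penetrating 0.3 m of gelatin
energy = kinetic_energy_joules(0.008, 360.0)   # 518.4 J
p_rel = peak_pressure_estimate(energy, 0.3)    # relative units while k = 1
```

The qualitative point of the model is visible in the ratio: halving the penetration depth at the same impact energy doubles the estimated peak pressure wave.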
Goransson et al. were the first contemporary researchers to present compelling evidence for remote cerebral effects of extremity bullet impact. They observed changes in EEG readings from pigs shot in the thigh. A follow-up experiment by Suneson et al. implanted high-speed pressure transducers into the brain of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed apnea, depressed EEG readings, and neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh.
The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs, which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
In a study of a handgun injury, Sturtevant found that pressure waves from a bullet impact in the torso can reach the spine and that a focusing effect from concave surfaces can concentrate the pressure wave on the spinal cord producing significant injury. This is consistent with other work showing remote spinal cord injuries from ballistic impacts.
Roberts et al. present both experimental work and finite element modeling showing that there can be considerable pressure wave magnitudes in the thoracic cavity for handgun projectiles stopped by a Kevlar vest. For example, an 8 gram projectile at 360 m/s impacting a NIJ level II vest over the sternum can produce an estimated pressure wave level of nearly 2.0 MPa (280 psi) in the heart and a pressure wave level of nearly 1.5 MPa (210 psi) in the lungs. Impacting over the liver can produce an estimated pressure wave level of 2.0 MPa (280 psi) in the liver.
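The pressure figures above pair SI and imperial units; a quick conversion (1 MPa ≈ 145.038 psi) shows the pairings are internally consistent, since "nearly 2.0 MPa" corresponds to the quoted 280 psi. The helper names here are illustrative, not from the source.

```python
PSI_PER_MPA = 145.038  # 1 megapascal ≈ 145.038 pounds per square inch

def mpa_to_psi(mpa):
    return mpa * PSI_PER_MPA

def psi_to_mpa(psi):
    return psi / PSI_PER_MPA

# The quoted 280 psi corresponds to about 1.93 MPa ("nearly 2.0 MPa"),
# and 210 psi to about 1.45 MPa ("nearly 1.5 MPa").
```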
The work of Courtney et al. supports the role of a ballistic pressure wave in incapacitation and injury. The work of Suneson et al. and Courtney et al. suggests that remote neural effects can occur with levels of energy transfer possible with handguns, about . Using sensitive biochemical techniques, the work of Wang et al. suggests even lower impact energy thresholds for remote neural injury to the brain. In an analysis of experiments on dogs shot in the thigh, they report highly significant (p < 0.01), easily detectable neural effects in the hypothalamus and hippocampus with energy transfer levels close to . Wang et al. report less significant (p < 0.05) remote effects in the hypothalamus with energy transfer just under .
Even though Wang et al. document remote neural damage for low levels of energy transfer, roughly , these levels of neural damage are probably too small to contribute to rapid incapacitation. Courtney and Courtney believe that remote neural effects only begin to make significant contributions to rapid incapacitation for ballistic pressure wave levels above (corresponds to transferring roughly in of penetration) and become easily observable above (corresponds to transferring roughly in of penetration). Incapacitating effects in this range of energy transfer are consistent with observations of remote spinal injuries, observations of suppressed EEGs and apnea in pigs and with observations of incapacitating effects of ballistic pressure waves without a wound channel.
The scientific literature contains significant other findings regarding injury mechanisms of ballistic pressure waves. Ming et al. found that ballistic pressure waves can break bones. Tikka et al. reports abdominal pressure changes produced in pigs hit in one thigh. Akimov et al. report on injuries to the nerve trunk from gunshot wounds to the extremities.
In self-defense, military, and law enforcement communities, opinions vary regarding the importance of remote wounding effects in ammunition design and selection. In his book on hostage rescuers, Leroy Thompson discusses the importance of hydrostatic shock in choosing a specific design of .357 Magnum and 9×19mm Parabellum bullets. In "Armed and Female", Paxton Quigley explains that hydrostatic shock is the real source of "stopping power." Jim Carmichael, who served as shooting editor for Outdoor Life magazine for 25 years, believes that hydrostatic shock is important to "a more immediate disabling effect" and is a key difference in the performance of .38 Special and .357 Magnum hollow point bullets. In "The search for an effective police handgun," Allen Bristow reports that police departments recognize the importance of hydrostatic shock when choosing ammunition. A research group at West Point suggests handgun loads with at least of energy and of penetration and recommends:
A number of law enforcement and military agencies have adopted the 5.7×28mm cartridge. These agencies include the Navy SEALs and the Federal Protective Service branch of ICE. In contrast, some defense contractors, law enforcement analysts, and military analysts say that hydrostatic shock is an unimportant factor when selecting cartridges for a particular use, because any incapacitating effect it may have on a target is difficult to measure and inconsistent from one individual to the next. This is in contrast to factors such as proper shot placement and massive blood loss, which are almost always eventually incapacitating for nearly every individual.
The FBI recommends that loads intended for self-defense and law enforcement applications meet a minimum penetration requirement of in ballistic gelatin and explicitly advises against selecting rounds based on hydrostatic shock effects.
Hydrostatic shock is commonly considered as a factor in the selection of hunting ammunition. Peter Capstick explains that hydrostatic shock may have value for animals up to the size of white-tailed deer, but the ratio of energy transfer to animal weight is an important consideration for larger animals. If the animal's weight exceeds the bullet's energy transfer, penetration in an undeviating line to a vital organ is a much more important consideration than energy transfer and hydrostatic shock. Jim Carmichael, in contrast, describes evidence that hydrostatic shock can affect animals as large as Cape Buffalo in the results of a carefully controlled study carried out by veterinarians in a buffalo culling operation.
Dr. Randall Gilbert describes hydrostatic shock as an important factor in bullet performance on whitetail deer, "When it [a bullet] enters a whitetail’s body, huge accompanying shock waves send vast amounts of energy through nearby organs, sending them into arrest or shut down." Dave Ehrig expresses the view that hydrostatic shock depends on impact velocities above per second. Sid Evans explains the performance of the Nosler Partition bullet and Federal Cartridge Company's decision to load this bullet in terms of the large tissue cavitation and hydrostatic shock produced from the frontal diameter of the expanded bullet. The North American Hunting Club suggests big game cartridges that create enough hydrostatic shock to quickly bring animals down.
Hadith
Ḥadīth (pl. "aḥādīth"; literally "talk" or "discourse") in Islam refers to what Muslims believe to be a record of the words, actions, and the silent approval of the Islamic prophet Muhammad. Hadith have been called "the backbone" of Islamic civilization, and within that religion the authority of hadith as a source for religious law and moral guidance ranks second only to that of the Quran (which Muslims hold to be the word of God revealed to his messenger Muhammad). Scriptural authority for hadith comes from the Quran, which enjoins Muslims to emulate Muhammad and obey his judgments (in verses such as , ). While the number of verses pertaining to law in the Quran is relatively few, hadith give direction on everything from details of religious obligations (such as "Ghusl" or "Wudu", ablutions for "salat" prayer), to the correct forms of salutations and the importance of benevolence to slaves. Thus the "great bulk" of the rules of Sharia (Islamic law) are derived from hadith, rather than the Quran.
"Ḥadīth" is the Arabic word for things like speech, report, account, narrative. Unlike the Quran, not all Muslims believe that hadith accounts (or at least not all hadith accounts) are divine revelation. Hadith were not written down by Muhammad's followers immediately after his death but many generations later, when they were collected, collated and compiled into a great corpus of Islamic literature. Different collections of hadīth would come to differentiate the different branches of the Islamic faith. There are many modern Muslims (some of whom call themselves Quranists but many are also known as Submitters) who believe that most hadiths are actually fabrications created in the 8th and 9th centuries AD, which are falsely attributed to the Prophet Muhammad.
Because some hadith include questionable and even contradictory statements, the authentication of hadith became a major field of study in Islam. In its classic form a hadith has two parts — the chain of narrators who have transmitted the report (the "isnad"), and the main text of the report (the "matn"). Individual hadith are classified by Muslim clerics and jurists into categories such as "sahih" ("authentic"), "hasan" ("good") or "da'if" ("weak"). However, different groups and different scholars may classify a hadith differently.
Among scholars of Sunni Islam the term hadith may include not only the words, advice, practices, etc. of Muhammad, but also those of his companions. In Shia Islam, hadīth are the embodiment of the "sunnah", the words and actions of the prophet and his family the "Ahl al-Bayt" (The Twelve Imams and the prophet's daughter, Fatimah).
In Arabic, the noun "ḥadīth" means "report", "account", or "narrative"; its Arabic plural is "aḥādīth". "Hadith" also refers to the speech of a person.
In Islamic terminology, according to Juan Campo, the term "hadith" refers to reports of statements or actions of Muhammad, or of his tacit approval or criticism of something said or done in his presence.
Classical hadith specialist Ibn Hajar al-Asqalani says that the intended meaning of "hadith" in religious tradition is something attributed to Muhammad but that is not found in the Quran.
Scholar Patricia Crone includes reports by others than Muhammad in her definition of hadith: "short reports (sometimes just a line or two) recording what an early figure, such as a companion of the prophet (known as "sahabah") or Muhammad himself, said or did on a particular occasion, prefixed by a chain of transmitters". But she adds that "nowadays, hadith almost always means hadith from Muhammad himself."
However, according to the Shia Islam Ahlul Bayt Digital Library Project, "... when there is no clear Qur’anic statement, nor is there a Hadith upon which Muslim schools have agreed. ... Shi’a ... refer to Ahlul-Bayt [the family of Muhammad] for deriving the Sunnah of Prophet" — implying that while hadith is limited to the "Traditions" of Muhammad, the Shia Sunna draws on the sayings, etc. of the "Ahlul-Bayt" i.e. the Imams of Shia Islam.
The word "sunnah" is also used in reference to a normative custom of Muhammad or the early Muslim community.
Joseph Schacht describes hadith as providing "the documentation" of the sunnah.
Another source (Joseph A. Islam) distinguishes between the two, saying:
Whereas the 'Hadith' is an oral communication that is allegedly derived from the Prophet or his teachings, the 'Sunna' (quite literally: mode of life, behaviour or example) signifies the prevailing customs of a particular community or people. ... A 'Sunna' is a practice which has been passed on by a community from generation to generation en masse, whereas the hadith are reports collected by later compilers often centuries removed from the source. ... A practice which is contained within the Hadith may well be regarded as Sunna, but it is not necessary that a Sunna would have a supporting hadith sanctioning it.
Some sources (Khaled Abou El Fadl) limit hadith to verbal reports, with the deeds of Muhammad and reports about his companions being part of the "sunnah", but not hadith.
Islamic literary classifications similar to hadith (but not "sunna") are "maghazi" and "sira". They differed from hadith in being organized "relatively chronologically" rather than by subject.
Other "traditions" of Islam related to hadith include:
Joseph Schacht quotes a hadith of Muhammad that is used "to justify reference" in Islamic law to the companions of Muhammad as religious authorities — "My companions are like lodestars." According to Schacht (and other scholars), in the very first generations after the death of Muhammad, use of hadith from "Sahabah" ("companions" of Muhammad) and "Tabi‘un" ("successors" of the companions) "was the rule", while use of hadith of Muhammad himself by Muslims was "the exception". Schacht credits Al-Shafi‘i — founder of the Shafi'i school of "fiqh" (or "madh'hab") — with establishing the principle of using the hadith of Muhammad for Islamic law and emphasizing the inferiority of hadith of anyone else, saying that hadiths "from other persons are of no account in the face of a tradition from the Prophet, whether they confirm or contradict it; if the other persons had been aware of the tradition from the Prophet, they would have followed it". This led to "the almost complete neglect" of traditions from Companions and others.
Collections of hadith sometimes mix those of Muhammad with the reports of others. Muwatta Imam Malik is usually described as "the earliest written collection of hadith", but its sayings of Muhammad are "blended with the sayings of the companions" (822 hadith from Muhammad and 898 from others, according to the count of one edition).
In "Introduction to Hadith" by Abd al-Hadi al-Fadli, "Kitab Ali" is referred to as "the first hadith book of the "Ahl al-Bayt" (family of Muhammad) to be written on the authority of the Prophet".
Among the verses cited as proof that the Quran called on Muslims "to refrain from that which [Muhammad] forbade, to obey him and to accept his rulings" in addition to obeying the Quran, are:
The hadith literature in use today is based on spoken reports in circulation after the death of Muhammad. Unlike the Quran, hadith were not promptly written down during Muhammad's life or immediately after his death. Hadith were evaluated and gathered into large collections during the 8th and 9th centuries, generations after the death of Muhammad, after the end of the era of the Rashidun Caliphate, over from where Muhammad lived.
"Many thousands of times" more numerous than Quranic verses, ahadith have been described as resembling layers surrounding the “core” of the Islamic belief (the Quran). Well-known, widely accepted hadith make up the narrow inner layer, with a hadith becoming less reliable and accepted with each layer stretching outward.
The reports of Muhammad's (and sometimes companions) behavior collected by hadith compilers include details of ritual religious practice such as the five "salat" (obligatory Islamic prayers) that are not found in the Quran, but also everyday behavior such as table manners, dress, and posture. Hadith are also regarded by Muslims as important tools for understanding things mentioned in the Quran but not explained, a source for "tafsir" (commentaries written on the Quran).
Some important elements, which are today taken to be a long-held part of Islamic practice and belief, are not mentioned in the Quran but are reported in hadiths. Therefore, Muslims usually maintain that hadiths are a necessary requirement for the true and proper practice of Islam, as they give Muslims the nuanced details of Islamic practice and belief in areas where the Quran is silent. An example is the obligatory prayers, which are commanded in the Quran but explained in hadith.
Details of prescribed movements and words of the prayer (known as "rak'at") and how many times they are to be performed, are found in hadith. However, hadiths differ on these details and consequently "salat" is performed differently by different hadithist Islamic sects. Quranists, on the contrary, hold that if the Quran is silent on some matter, it is because God did not hold its detail to be of consequence; and that some hadith contradict the Quran, evidence that some hadith are a source of corruption and not a complement to the Quran.
(Quranists are greatly outnumbered by Sunni, Shia and other Muslims who follow the Sunna.)
The hadith had a profound and controversial influence on "tafsir" (commentaries of the Quran). The earliest commentary of the Quran known as Tafsir Ibn Abbas is sometimes attributed to the companion Ibn Abbas.
The hadith were used in forming the basis of "Sharia" (the religious law system forming part of the Islamic tradition), and "fiqh" (Islamic jurisprudence). The hadith are at the root of why there is no single "fiqh" system, but rather a collection of parallel systems within Islam.
Much of early Islamic history available today is also based on the hadith, although it has been challenged for its lack of basis in primary source material and the internal contradictions of the secondary material available.
Hadith may be "hadith qudsi" (sacred hadith) — which some Muslims regard as the words of God (Arabic: Allah) — or "hadith sharif" (noble hadith), which are Muhammad's own utterances.
According to as-Sayyid ash-Sharif al-Jurjani, the hadith qudsi differ from the Quran in that the former are "expressed in Muhammad's words", whereas the latter are the "direct words of God". A "hadith qudsi" need not be a "sahih" (sound hadith), but may be "da‘if" or even "mawdu‘".
An example of a "hadith qudsi" is the hadith of Abu Hurairah who said that Muhammad said:
When God decreed the Creation He pledged Himself by writing in His book which is laid down with Him: My mercy prevails over My wrath.
In the Shia school of thought, there are two fundamental viewpoints of hadith: the Usuli view and the Akhbari view. The Usuli scholars stress the importance of scientific examination of hadiths using ijtihad, while the Akhbari scholars take all hadiths from the four Shia books as authentic.
The two major aspects of a hadith are the text of the report (the "matn"), which contains the actual narrative, and the chain of narrators (the "isnad"), which documents the route by which the report has been transmitted. The isnad was an effort to document that a hadith had actually come from Muhammad, and Muslim scholars from the eighth century until today have never ceased repeating the mantra "The isnad is part of the religion — if not for the isnad, whoever wanted could say whatever they wanted." The "isnad" means literally 'support', and it is so named due to the reliance of the hadith specialists upon it in determining the authenticity or weakness of a hadith. The "isnad" consists of a chronological list of the narrators, each mentioning the one from whom they heard the hadith, until mentioning the originator of the "matn" along with the "matn" itself.
The first people to hear hadith were the companions who preserved it and then conveyed it to those after them. Then the generation following them received it, thus conveying it to those after them and so on. So a companion would say, "I heard the Prophet say such and such." The Follower would then say, "I heard a companion say, 'I heard the Prophet.'" The one after him would then say, "I heard someone say, 'I heard a Companion say, 'I heard the Prophet..."" and so on.
Different branches of Islam refer to different collections of hadith, though the same incident may be found in hadith in different collections:
In general, the difference between Shi'a and Sunni collections is that Shia give preference to hadiths credited to the Prophet's family and close associates ("Ahl al-Bayt"), while Sunnis do not consider family lineage in evaluating hadith and accept sunnah narrated by any of the twelve thousand companions of Muhammad.
Traditions of the life of Muhammad and the early history of Islam were passed down mostly orally for more than a hundred years after Muhammad's death in AD 632. Muslim historians say that Caliph Uthman ibn Affan (the third "khalifa" (caliph) of the Rashidun Caliphate, or third successor of Muhammad, who had formerly been Muhammad's secretary) urged Muslims to record the hadith, just as Muhammad had suggested to some of his followers that they write down his words and actions.
Uthman's labours were cut short by his assassination at the hands of aggrieved soldiers in 656. No sources survive directly from this period, so we are dependent on what later writers tell us about it.
According to British historian of Arab world Alfred Guillaume, it is "certain" that "several small collections" of hadith were "assembled in Umayyad times."
In Islamic law, the use of hadith as now understood (hadith of Muhammad with documentation, isnads, etc.) came gradually. According to scholars such as Joseph Schacht, Ignaz Goldziher, and Daniel W. Brown, early schools of Islamic jurisprudence used rulings of the Prophet's Companions, the rulings of the Caliphs, and practices that “had gained general acceptance among the jurists of that school”. On his deathbed, Caliph Umar instructed Muslims to seek guidance from the Quran, the early Muslims ("muhajirun") who emigrated to Medina with Muhammad, the Medina residents who welcomed and supported the "muhajirun" (the "ansar"), the people of the desert, and the protected communities of Jews and Christians ("ahl al-dhimma").
According to the scholars Harald Motzki and Daniel W. Brown, the earliest Islamic legal reasonings that have come down to us were "virtually hadith-free", but gradually, over the course of the second century AH, "the infiltration and incorporation of Prophetic hadiths into Islamic jurisprudence" took place.
It was Abū ʿAbdullāh Muhammad ibn Idrīs al-Shāfiʿī (150-204 AH), known as al-Shafi'i, who emphasized the final authority of a hadith of Muhammad, so that even the Quran was "to be interpreted in the light of traditions (i.e. hadith), and not vice versa." While traditionally the Quran is considered above the sunna in authority, Al-Shafi'i "forcefully argued" that the sunna stands "on equal footing with the Quran", (according to scholar Daniel Brown) for (as Al-Shafi'i put it) “the command of the Prophet is the command of God.”
In 851 the rationalist Mu`tazila school of thought fell from favor in the Abbasid Caliphate. The Mu`tazila, for whom the "judge of truth ... was human reason," had clashed with traditionists who looked to the literal meaning of the Quran and hadith for truth. While the Quran had been officially compiled and approved, hadiths had not.
One result was that the number of hadiths began "multiplying in suspiciously direct correlation to their utility" to the quoter of the hadith (traditionists quoted hadith warning against listening to human opinion instead of Sharia; Hanafites quoted a hadith stating that "In my community there will rise a man called Abu Hanifa [the Hanafite founder] who will be its guiding light"). In fact, one agreed-upon hadith warned, "There will be forgers, liars who will bring you hadiths which neither you nor your forefathers have heard. Beware of them." In addition, the number of hadith grew enormously: while Malik ibn Anas had attributed just 1720 statements or deeds to Muhammad, it was no longer unusual to find people who had collected a hundred times that number of hadith.
Faced with a huge corpus of miscellaneous traditions supporting differing views on a variety of controversial matters—some of them flatly contradicting each other—Islamic scholars of the Abbasid period sought to authenticate hadith. Scholars had to decide which hadith were to be trusted as authentic and which had been invented for political or theological purposes. To do this, they used a number of techniques which Muslims now call the science of hadith.
Sunni and Shia hadith collections differ because scholars from the two traditions differ as to the reliability of the narrators and transmitters. Narrators who took the side of Abu Bakr and Umar rather than Ali, in the disputes over leadership that followed the death of Muhammad, are seen as unreliable by the Shia; narrations sourced to Ali and the family of Muhammad, and to their supporters, are preferred. Sunni scholars put trust in narrators such as Aisha, whom Shia reject. Differences in hadith collections have contributed to differences in worship practices and shari'a law and have hardened the dividing line between the two traditions.
In the Sunni tradition, the number of such texts is somewhere between seven and thirteen thousand, but the number of "hadiths" is far greater because several "isnad" sharing the same text are each counted as individual hadith. If, say, ten companions record a text reporting a single incident in the life of Muhammad, hadith scholars can count this as ten hadiths. So Musnad Ahmad, for example, has over 30,000 hadiths—but this count includes texts that are repeated in order to record slight variations within the text or within the chains of narrations. Identifying the narrators of the various texts, and comparing their narrations of the same texts to identify both the soundest reporting of a text and the reporters who are most sound in their reporting, occupied experts of hadith throughout the 2nd century. In the 3rd century of Islam (from 225/840 to about 275/889), hadith experts composed brief works recording a selection of about two- to five-thousand such texts which they felt to have been most soundly documented or most widely referred to in the Muslim scholarly community. The 4th and 5th centuries saw these six works being commented on quite widely. This auxiliary literature has contributed to making their study the point of departure for any serious study of hadith. In addition, Bukhari and Muslim in particular claimed that they were collecting only the soundest of sound hadiths. Later scholars tested their claims and agreed with them, so that today they are considered the most reliable collections of hadith. Toward the end of the 5th century, Ibn al-Qaisarani formally standardized the Sunni canon into six pivotal works, a delineation which remains to this day.
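The counting convention described above (the same text transmitted through several chains counts as several hadiths) can be sketched as a small data-modeling exercise; the records, texts, and narrator names below are invented for illustration.

```python
# Each record pairs a matn (the text) with an isnad (the chain of
# narrators, here a tuple ordered from most recent back to the source).
records = [
    ("text A", ("narrator 1", "narrator 2")),
    ("text A", ("narrator 3", "narrator 2")),
    ("text A", ("narrator 4",)),
    ("text B", ("narrator 1",)),
]

# Counting distinct texts collapses the three transmissions of "text A"...
distinct_texts = len({matn for matn, _ in records})

# ...while the traditional count treats each (matn, isnad) pair as
# its own hadith, so one incident can yield several hadiths.
hadith_count = len(set(records))
```

Here `distinct_texts` is 2 while `hadith_count` is 4, mirroring how a single incident reported by several companions inflates the hadith total.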
Over the centuries, several different categories of collections came into existence. Some are more general, like the "muṣannaf", the "muʿjam", and the "jāmiʿ", and some more specific, either characterized by the topics treated, like the "sunan" (restricted to legal-liturgical traditions), or by its composition, like the "arbaʿīniyyāt" (collections of forty hadiths).
Shi'a Muslims seldom if ever use the six major hadith collections followed by the Sunni, as they do not trust many of the Sunni narrators and transmitters. They have their own extensive hadith literature. The best-known hadith collections are The Four Books, which were compiled by three authors who are known as the 'Three Muhammads'. The Four Books are: "Kitab al-Kafi" by Muhammad ibn Ya'qub al-Kulayni al-Razi (329 AH), "Man la yahduruhu al-Faqih" by Muhammad ibn Babuya and "Al-Tahdhib" and "Al-Istibsar" both by Shaykh Muhammad Tusi. Shi'a clerics also make use of extensive collections and commentaries by later authors.
Unlike Sunnis, the majority of Shia do not consider any of their hadith collections to be sahih (authentic) in their entirety. Therefore, every individual hadith in a specific collection must be investigated separately to determine its authenticity. However, the Akhbari school does take all hadith from the four books as authentic.
The importance of hadith in the Shia school of thought is well documented. This can be captured by Ali ibn Abi Talib, cousin of Muhammad, when he narrated that "Whoever of our Shia (followers) knows our Shariah and takes out the weak of our followers from the darkness of ignorance to the light of knowledge (Hadith) which we (Ahl al-Bayt) have gifted to them, he on the day of judgement will come with a crown on his head. It will shine among the people gathered on the plain of resurrection." Hassan al-Askari, a descendant of Muhammad, gave support to this narration, stating "Whoever he had taken out in the worldly life from the darkness of ignorance can hold to his light to be taken out of the darkness of the plain of resurrection to the garden (paradise). Then all those whomever he had taught in the worldly life anything of goodness, or had opened from his heart a lock of ignorance or had removed his doubts will come out."
Regarding the importance of maintaining accuracy in recording hadith, it has been documented that Muhammad al-Baqir, the great grandson of Muhammad, has said that "Holding back in a doubtful issue is better than entering destruction. Your not narrating a Hadith is better than you narrating a Hadith in which you have not studied thoroughly. On every truth, there is a reality. Above every right thing, there is a light. Whatever agrees with the book of Allah you must take it and whatever disagrees you must leave it alone." Al-Baqir also emphasized the selfless devotion of Ahl al-Bayt to preserving the traditions of Muhammad through his conversation with Jabir ibn Abd Allah, an old companion of Muhammad. He (Al-Baqir) said, "Oh Jabir, had we spoken to you from our opinions and desires, we would be counted among those who are destroyed. We speak to you of the hadith which we treasure from the Messenger of Allah, Oh Allah grant compensation to Muhammad and his family worthy of their services to your cause, just as they treasure their gold and silver." Further, it has been narrated that Ja'far al-Sadiq, the son of al-Baqir, has said the following regarding hadith: "You must write it down; you will not memorize until you write it down."
The mainstream sects consider hadith to be essential supplements to, and clarifications of, the Quran, Islam's holy book, as well as for clarifying issues pertaining to Islamic jurisprudence. Ibn al-Salah, a hadith specialist, described the relationship between hadith and other aspects of the religion by saying: "It is the science most pervasive in respect to the other sciences in their various branches, in particular to jurisprudence being the most important of them." "The intended meaning of 'other sciences' here are those pertaining to religion," explains Ibn Hajar al-Asqalani, "Quranic exegesis, hadith, and jurisprudence. The science of hadith became the most pervasive due to the need displayed by each of these three sciences. The need hadith has of its science is apparent. As for Quranic exegesis, then the preferred manner of explaining the speech of God is by means of what has been accepted as a statement of Muhammad. The one looking to this is in need of distinguishing the acceptable from the unacceptable. Regarding jurisprudence, then the jurist is in need of citing as an evidence the acceptable to the exclusion of the latter, something only possible utilizing the science of hadith."
According to Bernard Lewis, "in the early Islamic centuries there could be no better way of promoting a cause, an opinion, or a faction than to cite an appropriate action or utterance of the Prophet." To combat such forgeries, the elaborate science of hadith studies, known as "ilm al-jarh" or "ilm al-dirayah", was devised to authenticate hadith.
Hadith studies use a number of methods of evaluation developed by early Muslim scholars in determining the veracity of reports attributed to Muhammad. This is achieved by scrutinizing both a report's chain of transmission ("isnad") and its text ("matn").
On the basis of these criteria, various classifications were devised for hadith. The earliest comprehensive work in hadith studies was Abu Muhammad al-Ramahurmuzi's "al-Muhaddith al-Fasil", while another significant work was al-Hakim al-Naysaburi's "Ma‘rifat ‘ulum al-hadith". Ibn al-Salah's "ʻUlum al-hadith" is considered the standard classical reference on hadith studies. Some schools of Hadith methodology apply as many as sixteen separate tests.
Biographical analysis ("ʻilm al-rijāl", lit. "science of people"; also known as the science of "Asma al-Rijal" or "ʻilm al-jarḥ wa al-taʻdīl", "the science of discrediting and accrediting"), in which details about the transmitter are scrutinized. This includes analyzing their date and place of birth; familial connections; teachers and students; religiosity; moral behaviour; literary output; their travels; as well as their date of death. Based upon these criteria, the reliability ("thiqāt") of the transmitter is assessed. Also determined is whether the individual was actually able to transmit the report, which is deduced from their contemporaneity and geographical proximity with the other transmitters in the chain. Examples of biographical dictionaries include: Abd al-Ghani al-Maqdisi's "Al-Kamal fi Asma' al-Rijal", Ibn Hajar al-Asqalani's "Tahdhīb al-Tahdhīb" and al-Dhahabi's "Tadhkirat al-huffaz".
Hull (watercraft)
A hull is the watertight body of a ship or boat. The hull may be open at the top (as in a dinghy), or it may be fully or partially covered with a deck. Atop the deck may be a deckhouse and other superstructures, such as a funnel, derrick, or mast. The line where the hull meets the water surface is called the waterline.
There is a wide variety of hull types that are chosen for suitability for different usages, the hull shape being dependent upon the needs of the design. Shapes range from a nearly perfect box in the case of scow barges, to a needle-sharp surface of revolution in the case of a racing multihull sailboat. The shape is chosen to strike a balance between cost, hydrostatic considerations (accommodation, load carrying and stability), hydrodynamics (speed, power requirements, and motion and behavior in a seaway) and special considerations for the ship's role, such as the rounded bow of an icebreaker or the flat bottom of a landing craft.
In a typical modern steel ship, the hull will have watertight decks, and major transverse members called bulkheads. There may also be intermediate members such as girders, stringers and webs, and minor members called ordinary transverse frames, frames, or longitudinals, depending on the structural arrangement. The uppermost continuous deck may be called the "upper deck", "weather deck", "spar deck", "main deck", or simply "deck". The particular name given depends on the context—the type of ship or boat, the arrangement, or even where it sails.
In a typical wooden sailboat, the hull is constructed of wooden planking, supported by transverse frames (often referred to as ribs) and bulkheads, which are further tied together by longitudinal stringers or ceiling. Often but not always there is a centerline longitudinal member called a keel. In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb or other material.
Perhaps the earliest proper hulls were built by the Ancient Egyptians, who by 3000 BC knew how to assemble wooden planks into a hull.
Hulls come in many varieties and can have composite shape, (e.g., a fine entry forward and inverted bell shape aft), but are grouped primarily as follows:
At present, the most widely used form is the round bilge hull, which gives the hull the rough cross-sectional shape of an inverted bell.
With a small payload, such a craft has less of its hull below the waterline, giving less resistance and more speed. With a greater payload, resistance is greater and speed lower, but the hull’s outward bend provides smoother performance in waves. As such, the inverted bell shape is a popular form used with planing hulls.
A chined hull does not have a smooth rounded lower cross-section. Instead, its contours are interrupted by hard angles where components of the hull meet underwater. The sharper the intersection, the "harder" the chine.
The Cajun "pirogue" is an example of a craft with hard chines.
Benefits of this type of hull include potentially lower production cost and a (usually) fairly flat bottom, making the boat faster when planing. Sailboats with chined hulls make use of a daggerboard or keel.
Chined hulls may have one of three shapes:
Each of these chine hulls has its own unique characteristics and uses. The flat-bottom hull has high initial stability but high drag. To counter the high drag, hull forms are narrow and sometimes severely tapered at bow and stern. This leads to poor stability when heeled in a sailboat. This is often countered by using heavy interior ballast on sailing versions. They are best suited to sheltered inshore waters. Early racing power boats were fine forward and flat aft. This produced maximum lift and a smooth, fast ride in flat water, but this hull form is easily unsettled in waves. The multi-chine hull approximates a curved hull form. It has less drag than a flat-bottom boat. Multi-chines are more complex to build but produce a more seaworthy hull form. They are usually displacement hulls. V- or arc-bottom chine boats have a V shape between 6 and 23 degrees; this is called the deadrise angle. The flatter shape of a 6-degree hull will plane with less wind or a lower-horsepower engine but will pound more in waves. The deep V form (between 18 and 23 degrees) is only suited to high-powered planing boats. They require more powerful engines to lift the boat onto the plane but give a faster, smoother ride in waves.
Displacement chined hulls have more wetted surface area, hence more drag, than an equivalent round-hull form, for any given displacement.
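The wetted-surface comparison can be illustrated with a small sketch (all dimensions hypothetical): for the same beam and the same immersed cross-sectional area, a semicircular round-bilge section has a shorter wetted girth than a flat-bottomed, hard-chined (rectangular) section.

```python
import math

# Illustrative comparison (hypothetical dimensions): wetted girth of a
# semicircular (round-bilge) section vs a rectangular (flat-bottom,
# hard-chine) section, holding beam and immersed area equal.

r = 2.0                        # half-beam of the semicircular section, m
beam = 2 * r
area = math.pi * r ** 2 / 2    # immersed area of the semicircle, m^2

# Rectangular section with the same beam and the same immersed area:
draft = area / beam            # works out to pi * r / 4

girth_round = math.pi * r          # arc length of the semicircle
girth_chine = beam + 2 * draft     # flat bottom plus two sides

print(f"round-bilge girth: {girth_round:.2f} m")
print(f"hard-chine girth:  {girth_chine:.2f} m")
```

At equal displacement per unit length, the rectangular section wets roughly 14% more girth in this example, consistent with the extra drag noted above.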
Smooth-curve hulls, like the other curved hulls, use a centreboard or an attached keel.
Semi-round bilge hulls are somewhat less round. The advantage of the semi-round is that it is a good compromise between the S-bottom and chined hull. Typical examples of a semi-round bilge hull can be found in the Centaur and Laser cruising dinghies.
S-bottom hulls are hulls shaped like an "s". In the S-bottom, the hull runs smoothly down to the keel, with no sharp corners in the cross-section. Boats with this hull may have a fixed keel or a "kielmidzwaard" (literally "keel with sword"), a short fixed keel with a swing keel inside. Examples of cruising dinghies that use this S-shape are the Yngling and Randmeer.
Hull forms are defined as follows:
Block measures that define the principal dimensions. They are:
Form derivatives that are calculated from the shape and the block measures. They are:
Coefficients help compare hull forms as well:
Block coefficient: C_b = V / (L_pp · B · T)
Midship coefficient: C_m = A_m / (B · T)
Prismatic coefficient: C_p = V / (L_pp · A_m)
Waterplane coefficient: C_w = A_w / (L_pp · B)
Vertical prismatic coefficient: C_vp = V / (A_w · T)
where V is the displaced volume, L_pp the length between perpendiculars, B the beam, T the draft, A_m the immersed midship-section area, and A_w the waterplane area.
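As a sketch with hypothetical dimensions, the standard hull-form coefficients (block, midship, prismatic, waterplane, and vertical prismatic) can be computed directly from a ship's principal dimensions:

```python
# Hull-form coefficients for a hypothetical cargo-ship hull.
# Symbols: V = displaced volume, Lpp = length between perpendiculars,
# B = beam, T = draft, Am = midship section area, Aw = waterplane area.
# All values below are illustrative, not from any real vessel.

Lpp, B, T = 120.0, 20.0, 8.0   # metres
V = 13_440.0                   # m^3 displaced volume
Am = 152.0                     # m^2 immersed midship-section area
Aw = 2_040.0                   # m^2 waterplane area

Cb = V / (Lpp * B * T)         # block coefficient
Cm = Am / (B * T)              # midship coefficient
Cp = V / (Lpp * Am)            # prismatic coefficient
Cw = Aw / (Lpp * B)            # waterplane coefficient
Cvp = V / (Aw * T)             # vertical prismatic coefficient

print(f"Cb={Cb:.3f} Cm={Cm:.3f} Cp={Cp:.3f} Cw={Cw:.3f} Cvp={Cvp:.3f}")
```

Note the identities C_b = C_p · C_m and C_vp = C_b / C_w, which follow directly from the definitions and make the coefficients useful cross-checks on one another.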
Use of computer-aided design has superseded paper-based methods of ship design that relied on manual calculations and lines drawing. Since the early 1990s, a variety of commercial and freeware software packages specialized for naval architecture have been developed that provide 3D drafting capabilities combined with calculation modules for hydrostatics and hydrodynamics. These may be referred to as geometric modeling systems for naval architecture.
Fyodor Dostoevsky
Fyodor Mikhailovich Dostoevsky (11 November 1821 – 9 February 1881), sometimes transliterated Dostoyevsky, was a Russian novelist, short story writer, essayist and journalist. Dostoevsky's literary works explore human psychology in the troubled political, social, and spiritual atmospheres of 19th-century Russia, and engage with a variety of philosophical and religious themes. His most acclaimed works include "Crime and Punishment" (1866), "The Idiot" (1869), "Demons" (1872), and "The Brothers Karamazov" (1880). Dostoevsky's body of works consists of 12 novels, four novellas, 16 short stories, and numerous other works. Many literary critics rate him as one of the greatest psychological novelists in world literature. His 1864 novella "Notes from Underground" is considered to be one of the first works of existentialist literature.
Born in Moscow in 1821, Dostoevsky was introduced to literature at an early age through fairy tales and legends, and through books by Russian and foreign authors. His mother died in 1837 when he was 15, and around the same time, he left school to enter the Nikolayev Military Engineering Institute. After graduating, he worked as an engineer and briefly enjoyed a lavish lifestyle, translating books to earn extra money. In the mid-1840s he wrote his first novel, "Poor Folk", which gained him entry into St. Petersburg's literary circles. Arrested in 1849 for belonging to a literary group that discussed banned books critical of Tsarist Russia, he was sentenced to death but the sentence was commuted at the last moment. He spent four years in a Siberian prison camp, followed by six years of compulsory military service in exile. In the following years, Dostoevsky worked as a journalist, publishing and editing several magazines of his own and later "A Writer's Diary", a collection of his writings. He began to travel around western Europe and developed a gambling addiction, which led to financial hardship. For a time, he had to beg for money, but he eventually became one of the most widely read and highly regarded Russian writers.
Dostoevsky was influenced by a wide variety of philosophers and authors including Pushkin, Gogol, Augustine, Shakespeare, Dickens, Balzac, Lermontov, Hugo, Poe, Plato, Cervantes, Herzen, Kant, Belinsky, Hegel, Schiller, Solovyov, Bakunin, Sand, Hoffmann, and Mickiewicz.
His writings were widely read both within and beyond his native Russia and influenced an equally great number of later writers including Russians like Aleksandr Solzhenitsyn and Anton Chekhov, philosophers Friedrich Nietzsche and Jean-Paul Sartre and the emergence of Existentialism and Freudianism. His books have been translated into more than 170 languages.
Dostoevsky's parents were part of a multi-ethnic and multi-denominational noble family, its branches including Russian Orthodox Christians, Lithuanian nobility, Polish Roman Catholics and Ukrainian Eastern Catholics. The family traced its roots back to a Tatar, Aslan Chelebi-Murza, who in 1389 defected from the Golden Horde and joined the forces of Dmitry Donskoy, the first prince of Muscovy to openly challenge the Mongol authority in the region, and whose descendant, Danilo Irtishch, was ennobled and given lands in the Pinsk region (for centuries part of the Grand Duchy of Lithuania, now in modern-day Belarus) in 1509 for his services under a local prince, his progeny then taking the name "Dostoevsky" based on a village there called Dostoïevo.
Dostoevsky's immediate ancestors on his mother's side were merchants; the male line on his father's side were priests. Andriy Dostoevsky, the writer's grandfather, was a priest from 1782 to 1820 and signed his name in Ukrainian as "Andriy". After him, his son Lev served in Viitovtsi (1820–1829). Another son, Mykhailo (the writer's father), studied at the Podolsk seminary, which had been founded in Shargorod. From there, as one of the best students, he was sent to study at the Medical and Surgical Academy in Moscow (after training he became one of the best doctors at the Mariinsky Hospital for the Poor). Before the war of 1812 he signed his name in Ukrainian as "Mykhailo", and only during the war, when he worked as a military doctor, did he begin to sign in Russian as "Mikhail".
In 1809, the 20-year-old Mykhailo Dostoevsky enrolled in Moscow's Imperial Medical-Surgical Academy. From there he was assigned to a Moscow hospital, where he served as military doctor, and in 1818, he was appointed a senior physician. In 1819 he married Maria Nechayeva. The following year, he took up a post at the Mariinsky Hospital for the poor. In 1828, when his two sons, Mikhail and Fyodor, were eight and seven respectively, he was promoted to collegiate assessor, a position which raised his legal status to that of the nobility and enabled him to acquire a small estate in Darovoye, a town about 150 km (100 miles) from Moscow, where the family usually spent the summers. Dostoevsky's parents subsequently had six more children: Varvara (1822–1892), Andrei (1825–1897), Lyubov (born and died 1829), Vera (1829–1896), Nikolai (1831–1883) and Aleksandra (1835–1889).
Fyodor Dostoevsky, born on 11 November 1821, was the second child of Dr. Mikhail Dostoevsky and Maria Dostoevskaya (born Nechayeva). He was raised in the family home in the grounds of the Mariinsky Hospital for the Poor, which was in a lower class district on the edges of Moscow. Dostoevsky encountered the patients, who were at the lower end of the Russian social scale, when playing in the hospital gardens.
Dostoevsky was introduced to literature at an early age. From the age of three, he was read heroic sagas, fairy tales and legends by his nanny, Alena Frolovna, an especially influential figure in his upbringing and love for fictional stories. When he was four his mother used the Bible to teach him to read and write. His parents introduced him to a wide range of literature, including Russian writers Karamzin, Pushkin and Derzhavin; Gothic fiction such as Ann Radcliffe; romantic works by Schiller and Goethe; heroic tales by Miguel de Cervantes and Walter Scott; and Homer's epics. Although his father's approach to education has been described as strict and harsh, Dostoevsky himself reports that his imagination was brought alive by nightly readings by his parents.
Some of his childhood experiences found their way into his writings. When a nine-year-old girl had been raped by a drunk, he was asked to fetch his father to attend to her. The incident haunted him, and the theme of the desire of a mature man for a young girl appears in "The Devils", "The Brothers Karamazov", "Crime and Punishment", and other writings. An incident involving a family servant, or serf, in the estate in Darovoye, is described in "The Peasant Marey": when the young Dostoevsky imagines hearing a wolf in the forest, Marey, who is working nearby, comforts him.
Although Dostoevsky had a delicate physical constitution, his parents described him as hot-headed, stubborn and cheeky. In 1833, Dostoevsky's father, who was profoundly religious, sent him to a French boarding school and then to the Chermak boarding school. He was described as a pale, introverted dreamer and an over-excitable romantic. To pay the school fees, his father borrowed money and extended his private medical practice. Dostoevsky felt out of place among his aristocratic classmates at the Moscow school, and the experience was later reflected in some of his works, notably "The Adolescent".
On 27 September 1837 Dostoevsky's mother died of tuberculosis. The previous May, his parents had sent Dostoevsky and his brother Mikhail to St Petersburg to attend the free Nikolayev Military Engineering Institute, forcing the brothers to abandon their academic studies for military careers. Dostoevsky entered the academy in January 1838, but only with the help of family members. Mikhail was refused admission on health grounds and was sent to an academy in Tallinn, Estonia (then known as Reval).
Dostoevsky disliked the academy, primarily because of his lack of interest in science, mathematics and military engineering and his preference for drawing and architecture. As his friend Konstantin Trutovsky once said, "There was no student in the entire institution with less of a military bearing than F.M. Dostoevsky. He moved clumsily and jerkily; his uniform hung awkwardly on him; and his knapsack, shako and rifle all looked like some sort of fetter he had been forced to wear for a time and which lay heavily on him." Dostoevsky's character and interests made him an outsider among his 120 classmates: he showed bravery and a strong sense of justice, protected newcomers, aligned himself with teachers, criticised corruption among officers and helped poor farmers. Although he was solitary and inhabited his own literary world, he was respected by his classmates. His reclusiveness and interest in religion earned him the nickname "Monk Photius".
Signs of Dostoevsky's epilepsy may have first appeared on learning of the death of his father on 16 June 1839, although the reports of a seizure originated from accounts written by his daughter (later expanded by Sigmund Freud) which are now considered to be unreliable. His father's official cause of death was an apoplectic stroke, but a neighbour, Pavel Khotiaintsev, accused the father's serfs of murder. Had the serfs been found guilty and sent to Siberia, Khotiaintsev would have been in a position to buy the vacated land. The serfs were acquitted in a trial in Tula, but Dostoevsky's brother Andrei perpetuated the story. After his father's death, Dostoevsky continued his studies, passed his exams and obtained the rank of engineer cadet, entitling him to live away from the academy. He visited Mikhail in Reval, and frequently attended concerts, operas, plays and ballets. During this time, two of his friends introduced him to gambling.
On 12 August 1843 Dostoevsky took a job as a lieutenant engineer and lived with Adolph Totleben in an apartment owned by Dr. Rizenkampf, a friend of Mikhail. Rizenkampf characterised him as "no less good-natured and no less courteous than his brother, but when not in a good mood he often looked at everything through dark glasses, became vexed, forgot good manners, and sometimes was carried away to the point of abusiveness and loss of self-awareness". Dostoevsky's first completed literary work, a translation of Honoré de Balzac's novel "Eugénie Grandet", was published in June and July 1843 in the 6th and 7th volume of the journal "Repertoire and Pantheon", followed by several other translations. None were successful, and his financial difficulties led him to write a novel.
Dostoevsky completed his first novel, "Poor Folk", in May 1845. His friend Dmitry Grigorovich, with whom he was sharing an apartment at the time, took the manuscript to the poet Nikolay Nekrasov, who in turn showed it to the renowned and influential literary critic Vissarion Belinsky. Belinsky described it as Russia's first "social novel". "Poor Folk" was released on 15 January 1846 in the "St Petersburg Collection" almanac and became a commercial success.
Dostoevsky felt that his military career would endanger his now flourishing literary career, so he wrote a letter asking to resign his post. Shortly thereafter, he wrote his second novel, "The Double", which appeared in the journal "Notes of the Fatherland" on 30 January 1846, before being published in February. Around the same time, Dostoevsky discovered socialism through the writings of French thinkers Fourier, Cabet, Proudhon and Saint-Simon. Through his relationship with Belinsky he expanded his knowledge of the philosophy of socialism. He was attracted to its logic, its sense of justice and its preoccupation with the destitute and the disadvantaged. However, his relationship with Belinsky became increasingly strained as Belinsky's atheism and dislike of religion clashed with Dostoevsky's Russian Orthodox beliefs. Dostoevsky eventually parted with him and his associates.
After "The Double" received negative reviews, Dostoevsky's health declined and he had more frequent seizures, but he continued writing. From 1846 to 1848 he released several short stories in the magazine "Annals of the Fatherland", including "Mr. Prokharchin", "The Landlady", "A Weak Heart", and "White Nights". These stories were unsuccessful, leaving Dostoevsky once more in financial trouble, so he joined the utopian socialist Betekov circle, a tightly knit community which helped him to survive. When the circle dissolved, Dostoevsky befriended Apollon Maykov and his brother Valerian. In 1846, on the recommendation of the poet Aleksey Pleshcheyev, he joined the Petrashevsky Circle, founded by Mikhail Petrashevsky, who had proposed social reforms in Russia. Mikhail Bakunin once wrote to Alexander Herzen that the group was "the most innocent and harmless company" and its members were "systematic opponents of all revolutionary goals and means". Dostoevsky used the circle's library on Saturdays and Sundays and occasionally participated in their discussions on freedom from censorship and the abolition of serfdom.
In 1849, the first parts of "Netochka Nezvanova", a novel Dostoevsky had been planning since 1846, were published in "Annals of the Fatherland", but his banishment ended the project. Dostoevsky never attempted to complete it.
The members of the Petrashevsky Circle were denounced to Liprandi, an official at the Ministry of Internal Affairs. Dostoevsky was accused of reading works by Belinsky, including the banned "Letter to Gogol", and of circulating copies of these and other works. Antonelli, the government agent who had reported the group, wrote in his statement that at least one of the papers criticised Russian politics and religion. Dostoevsky responded to these charges by declaring that he had read the essays only "as a literary monument, neither more nor less"; he spoke of "personality and human egoism" rather than of politics. Even so, he and his fellow "conspirators" were arrested on 23 April 1849 at the request of Count A. Orlov and Tsar Nicolas I, who feared a revolution like the Decembrist revolt of 1825 in Russia and the Revolutions of 1848 in Europe. The members were held in the well-defended Peter and Paul Fortress, which housed the most dangerous convicts.
The case was discussed for four months by an investigative commission headed by the Tsar, with Adjutant General Ivan Nabokov, senator Prince Pavel Gagarin, Prince Vasili Dolgorukov, General Yakov Rostovtsev and General Leonty Dubelt, head of the secret police. They sentenced the members of the circle to death by firing squad, and the prisoners were taken to Semyonov Place in St Petersburg on 23 December 1849 where they were split into three-man groups. Dostoevsky was the third in the second row; next to him stood Pleshcheyev and Durov. The execution was stayed when a cart delivered a letter from the Tsar commuting the sentence.
Dostoevsky served four years of exile with hard labour at a katorga prison camp in Omsk, Siberia, followed by a term of compulsory military service. After a fourteen-day sleigh ride, the prisoners reached Tobolsk, a prisoner way station. Despite the circumstances, Dostoevsky consoled the other prisoners, such as the Petrashevist Ivan Yastrzhembsky, who was surprised by Dostoevsky's kindness and eventually abandoned his decision to commit suicide. In Tobolsk, the members received food and clothes from the Decembrist women, as well as several copies of the New Testament with a ten-ruble banknote inside each copy. Eleven days later, Dostoevsky reached Omsk together with just one other member of the Petrashevsky Circle, the poet Sergei Durov. Dostoevsky described his barracks:
Classified as "one of the most dangerous convicts", Dostoevsky had his hands and feet shackled until his release. He was only permitted to read his New Testament Bible. In addition to his seizures, he had haemorrhoids, lost weight and was "burned by some fever, trembling and feeling too hot or too cold every night". The smell of the privy pervaded the entire building, and the small bathroom had to suffice for more than 200 people. Dostoevsky was occasionally sent to the military hospital, where he read newspapers and Dickens novels. He was respected by most of the other prisoners, and despised by some because of his supposedly xenophobic statements.
After his release on 14 February 1854, Dostoevsky asked Mikhail to help him financially and to send him books by Vico, Guizot, Ranke, Hegel and Kant. "The House of the Dead", based on his experience in prison, was published in 1861 in the journal "Vremya" ("Time") – it was the first published novel about Russian prisons. Before moving in mid-March to Semipalatinsk, where he was forced to serve in the Siberian Army Corps of the Seventh Line Battalion, Dostoevsky met geographer Pyotr Semyonov and ethnographer Shokan Walikhanuli. Around November 1854, he met Baron Alexander Egorovich Wrangel, an admirer of his books, who had attended the aborted execution. They both rented houses in the Cossack Garden outside Semipalatinsk. Wrangel remarked that Dostoevsky "looked morose. His sickly, pale face was covered with freckles, and his blond hair was cut short. He was a little over average height and looked at me intensely with his sharp, grey-blue eyes. It was as if he were trying to look into my soul and discover what kind of man I was."
In Semipalatinsk, Dostoevsky tutored several schoolchildren and came into contact with upper-class families, including that of Lieutenant-Colonel Belikhov, who used to invite him to read passages from newspapers and magazines. During a visit to Belikhov, Dostoevsky met the family of Alexander Ivanovich Isaev and Maria Dmitrievna Isaeva and fell in love with the latter. Alexander Isaev took a new post in Kuznetsk, where he died in August 1855. Maria and her son then moved with Dostoevsky to Barnaul. In 1856 Dostoevsky sent a letter through Wrangel to General Eduard Totleben, apologising for his activity in several utopian circles. As a result, he obtained the right to publish books and to marry, although he remained under police surveillance for the rest of his life. Maria married Dostoevsky in Semipalatinsk on 7 February 1857, even though she had initially refused his marriage proposal, stating that they were not meant for each other and that his poor financial situation precluded marriage. Their family life was unhappy and she found it difficult to cope with his seizures. Describing their relationship, he wrote: "Because of her strange, suspicious and fantastic character, we were definitely not happy together, but we could not stop loving each other; and the more unhappy we were, the more attached to each other we became". They mostly lived apart. In 1859 he was released from military service because of deteriorating health and was granted permission to return to European Russia, first to Tver, where he met his brother for the first time in ten years, and then to St Petersburg.
"A Little Hero" (Dostoevsky's only work completed in prison) appeared in a journal, but "Uncle's Dream" and "The Village of Stepanchikovo" were not published until 1860. "Notes from the House of the Dead" was released in "Russky Mir" (Russian World) in September 1860. "The Insulted and the Injured" was published in the new "Vremya" magazine, which had been created with the help of funds from his brother's cigarette factory.
Dostoevsky travelled to western Europe for the first time on 7 June 1862, visiting Cologne, Berlin, Dresden, Wiesbaden, Belgium, and Paris. In London, he met Herzen and visited the Crystal Palace. He travelled with Nikolay Strakhov through Switzerland and several North Italian cities, including Turin, Livorno, and Florence. He recorded his impressions of those trips in "Winter Notes on Summer Impressions", in which he criticised capitalism, social modernisation, materialism, Catholicism and Protestantism.
From August to October 1863, Dostoevsky made another trip to western Europe. He met his second love, Polina Suslova, in Paris and lost nearly all his money gambling in Wiesbaden and Baden-Baden. In 1864 his wife Maria and his brother Mikhail died, and Dostoevsky became the lone parent of his stepson Pasha and the sole supporter of his brother's family. The failure of "Epoch", the magazine he had founded with Mikhail after the suppression of "Vremya", worsened his financial situation, although the continued help of his relatives and friends averted bankruptcy.
The first two parts of "Crime and Punishment" were published in January and February 1866 in the periodical "The Russian Messenger", attracting at least 500 new subscribers to the magazine.
Dostoevsky returned to Saint Petersburg in mid-September and promised his editor, Fyodor Stellovsky, that he would complete "The Gambler", a short novel focused on gambling addiction, by November, although he had not yet begun writing it. One of Dostoevsky's friends, Milyukov, advised him to hire a secretary. Dostoevsky contacted stenographer Pavel Olkhin from Saint Petersburg, who recommended his pupil, the twenty-year-old Anna Grigoryevna Snitkina. Her shorthand helped Dostoevsky to complete "The Gambler" on 30 October, after 26 days' work. She remarked that Dostoevsky was of average height but always tried to carry himself erect. "He had light brown, slightly reddish hair, he used some hair conditioner, and he combed his hair in a diligent way ... his eyes, they were different: one was dark brown; in the other, the pupil was so big that you could not see its color, [this was caused by an injury]. The strangeness of his eyes gave Dostoyevsky some mysterious appearance. His face was pale, and it looked unhealthy."
On 15 February 1867 Dostoevsky married Snitkina in Trinity Cathedral, Saint Petersburg. The 7,000 rubles he had earned from "Crime and Punishment" did not cover their debts, forcing Anna to sell her valuables. On 14 April 1867, they began a delayed honeymoon in Germany with the money gained from the sale. They stayed in Berlin and visited the Gemäldegalerie Alte Meister in Dresden, where he sought inspiration for his writing. They continued their trip through Germany, visiting Frankfurt, Darmstadt, Heidelberg and Karlsruhe. They spent five weeks in Baden-Baden, where Dostoevsky had a quarrel with Turgenev and again lost much money at the roulette table. The couple travelled on to Geneva.
In September 1867, Dostoevsky began work on "The Idiot", and after a prolonged planning process that bore little resemblance to the published novel, he eventually managed to write the first 100 pages in only 23 days; the serialisation began in "The Russian Messenger" in January 1868.
Their first child, Sonya, had been conceived in Baden-Baden, and was born in Geneva on 5 March 1868. The baby died of pneumonia three months later, and Anna recalled how Dostoevsky "wept and sobbed like a woman in despair". The couple moved from Geneva to Vevey and then to Milan, before continuing to Florence. "The Idiot" was completed there in January 1869, the final part appearing in "The Russian Messenger" in February 1869. Anna gave birth to their second daughter, Lyubov, on 26 September 1869 in Dresden. In April 1871, Dostoevsky made a final visit to a gambling hall in Wiesbaden. Anna claimed that he stopped gambling after the birth of their second daughter, but this is a subject of debate.
After hearing news that the socialist revolutionary group "People's Vengeance" had murdered one of its own members, Ivan Ivanov, on 21 November 1869, Dostoevsky began writing "Demons". In 1871, Dostoevsky and Anna travelled by train to Berlin. During the trip, he burnt several manuscripts, including those of "The Idiot", because he was concerned about potential problems with customs. The family arrived in Saint Petersburg on 8 July, marking the end of a honeymoon (originally planned for three months) that had lasted over four years.
Back in Russia in July 1871, the family was again in financial trouble and had to sell their remaining possessions. Their son Fyodor was born on 16 July, and they moved to an apartment near the Institute of Technology soon after. They hoped to cancel their large debts by selling their rental house in Peski, but difficulties with the tenant resulted in a relatively low selling price, and disputes with their creditors continued. Anna proposed that they raise money on her husband's copyrights and negotiate with the creditors to pay off their debts in installments.
Dostoevsky revived his friendships with Maykov and Strakhov and made new acquaintances, including church politician Terty Filipov and the brothers Vsevolod and Vladimir Solovyov. Konstantin Pobedonostsev, future Imperial High Commissioner of the Most Holy Synod, influenced Dostoevsky's political progression to conservatism. Around early 1872 the family spent several months in Staraya Russa, a town known for its mineral spa. Dostoevsky's work was delayed when Anna's sister Maria Svatkovskaya died on 1 May 1872, either from typhus or malaria, and Anna developed an abscess on her throat.
The family returned to St Petersburg in September. "Demons" was finished on 26 November and released in January 1873 by the "Dostoevsky Publishing Company", which was founded by Dostoevsky and his wife. Although they only accepted cash payments and the bookshop was in their own apartment, the business was successful, and they sold around 3,000 copies of "Demons". Anna managed the finances. Dostoevsky proposed that they establish a new periodical, which would be called "A Writer's Diary" and would include a collection of essays, but funds were lacking, and the "Diary" was published in Vladimir Meshchersky's "The Citizen", beginning on 1 January, in return for a salary of 3,000 rubles per year. In the summer of 1873, Anna returned to Staraya Russa with the children, while Dostoevsky stayed in St Petersburg to continue with his "Diary".
In March 1874, Dostoevsky left "The Citizen" because of the stressful work and interference from the Russian bureaucracy. In his fifteen months with "The Citizen", he had been taken to court twice: on 11 June 1873 for citing the words of Prince Meshchersky without permission, and again on 23 March 1874. Dostoevsky offered to sell a new novel he had not yet begun to write to "The Russian Messenger", but the magazine refused. Nikolay Nekrasov suggested that he publish "A Writer's Diary" in "Notes of the Fatherland"; he would receive 250 rubles for each printer's sheet – 100 more than the text's publication in "The Russian Messenger" would have earned. Dostoevsky accepted. As his health began to decline, he consulted several doctors in St Petersburg and was advised to take a cure outside Russia. Around July, he reached Ems and consulted a physician, who diagnosed him with acute catarrh. During his stay he began "The Adolescent". He returned to Saint Petersburg in late July.
Anna proposed that they spend the winter in Staraya Russa to allow Dostoevsky to rest, although doctors had suggested a second visit to Ems because his health had previously improved there. On 10 August 1875 his son Alexey was born in Staraya Russa, and in mid-September the family returned to Saint Petersburg. Dostoevsky finished "The Adolescent" at the end of 1875, although passages of it had been serialised in "Notes of the Fatherland" since January. "The Adolescent" chronicles the life of Arkady Dolgoruky, the illegitimate child of the landowner Versilov and a peasant mother. It deals primarily with the relationship between father and son, which became a frequent theme in Dostoevsky's subsequent works.
In early 1876, Dostoevsky continued work on his "Diary". The book includes numerous essays and a few short stories about society, religion, politics and ethics. The collection sold more than twice as many copies as his previous books. Dostoevsky received more letters from readers than ever before, and people of all ages and occupations visited him. With assistance from Anna's brother, the family bought a dacha in Staraya Russa. In the summer of 1876, Dostoevsky began experiencing shortness of breath again. He visited Ems for the third time and was told that he might live for another 15 years if he moved to a healthier climate. When he returned to Russia, Tsar Alexander II ordered Dostoevsky to visit his palace to present the "Diary" to him, and he asked him to educate his sons, Sergey and Paul. This visit further increased Dostoevsky's circle of acquaintances. He was a frequent guest in several salons in Saint Petersburg and met many famous people, including Princess Sophia Tolstaya, Yakov Polonsky, Sergei Witte, Alexey Suvorin, Anton Rubinstein and Ilya Repin.
Dostoevsky's health declined further, and in March 1877 he had four epileptic seizures. Rather than returning to Ems, he visited Maly Prikol, a manor near Kursk. While returning to St Petersburg to finalise his "Diary", he visited Darovoye, where he had spent much of his childhood. In December he attended Nekrasov's funeral and gave a speech. He was appointed an honorary member of the Russian Academy of Sciences, from which he received an honorary certificate in February 1879. He declined an invitation to an international congress on copyright in Paris after his son Alyosha had a severe epileptic seizure and died on 16 May. The family later moved to the apartment where Dostoevsky had written his first works. Around this time, he was elected to the board of directors of the Slavic Benevolent Society in Saint Petersburg. That summer, he was elected to the honorary committee of the Association Littéraire et Artistique Internationale, whose members included Victor Hugo, Ivan Turgenev, Paul Heyse, Alfred Tennyson, Anthony Trollope, Henry Longfellow, Ralph Waldo Emerson and Leo Tolstoy. Dostoevsky made his fourth and final visit to Ems in early August 1879. He was diagnosed with early-stage pulmonary emphysema, which his doctor believed could be successfully managed, but not cured.
On 3 February 1880 Dostoevsky was elected vice-president of the Slavic Benevolent Society, and he was invited to speak at the unveiling of the Pushkin memorial in Moscow. On 8 June he delivered his speech, giving an impressive performance that had a significant emotional impact on his audience. His speech was met with thunderous applause, and even his long-time rival Turgenev embraced him. Konstantin Staniukovich praised the speech in his essay "The Pushkin Anniversary and Dostoevsky's Speech", writing that "the language of Dostoevsky's [Pushkin Speech] really looks like a sermon. He speaks with the tone of a prophet. He makes a sermon like a pastor; it is very deep, sincere, and we understand that he wants to impress the emotions of his listeners." The speech was criticised later by liberal political scientist Alexander Gradovsky, who thought that Dostoevsky idolised "the people", and by conservative thinker Konstantin Leontiev, who, in his essay "On Universal Love", compared the speech to French utopian socialism. The attacks led to a further deterioration in his health.
On 25 January 1881, while searching for members of the terrorist organisation Narodnaya Volya ("The People's Will") who would soon assassinate Tsar Alexander II, the Tsar's secret police executed a search warrant in the apartment of one of Dostoevsky's neighbours. On the following day, Dostoevsky suffered a pulmonary haemorrhage. Anna denied that the search had caused it, saying that the haemorrhage had occurred after her husband had been looking for a dropped pen holder. After another haemorrhage, Anna called the doctors, who gave a poor prognosis. A third haemorrhage followed shortly afterwards. While seeing his children before dying, Dostoevsky requested that the parable of the Prodigal Son be read to his children. The profound meaning of this request is pointed out by Frank:
Among Dostoevsky's last words was his quotation of Matthew 3:14–15: "But John forbad him, saying, I have need to be baptised of thee, and comest thou to me? And Jesus answering said unto him, Suffer it to be so now: for thus it becometh us to fulfil all righteousness", and he finished with "Hear now—permit it. Do not restrain me!" When he died, his body was placed on a table, following Russian custom. He was interred in the Tikhvin Cemetery at the Alexander Nevsky Convent, near his favourite poets, Nikolay Karamzin and Vasily Zhukovsky. It is unclear how many attended his funeral. According to one reporter, more than 100,000 mourners were present, while others describe attendance between 40,000 and 50,000. His tombstone is inscribed with lines from the New Testament:
Dostoevsky had his first known affair with Avdotya Yakovlevna, whom he met in the Panayev circle in the early 1840s. He described her as educated, interested in literature, and a femme fatale. He admitted later that he was uncertain about their relationship. According to Anna Dostoevskaya's memoirs, Dostoevsky once asked his sister's sister-in-law, Yelena Ivanova, whose husband was mortally ill, whether she would marry him after her husband died, but she rejected his proposal.
Dostoevsky and Apollonia (Polina) Suslova had a short but intimate affair, which peaked in the winter of 1862–1863. Suslova's dalliance with a Spaniard in late spring, together with Dostoevsky's gambling addiction and age, ended their relationship. He later described her in a letter to Nadezhda Suslova as a "great egoist. Her egoism and her vanity are colossal. She demands "everything" of other people, all the perfections, and does not pardon the slightest imperfection in the light of other qualities that one may possess", and later stated "I still love her, but I do not want to love her any more. She doesn't deserve this love ..." In 1858 Dostoevsky had a romance with comic actress Aleksandra Ivanovna Schubert. Although she divorced Dostoevsky's friend Stepan Yanovsky, she would not live with Dostoevsky. Dostoevsky did not love her either, but they were probably good friends. She wrote that he "became very attracted to me".
Through a worker in "Epoch", Dostoevsky learned of the Russian-born Martha Brown (née Elizaveta Andreyevna Chlebnikova), who had had affairs with several westerners. Her relationship with Dostoevsky is known only through letters written between November 1864 and January 1865. In 1865, Dostoevsky met Anna Korvin-Krukovskaya. Their relationship is not verified; Anna Dostoevskaya spoke of a good affair, but Korvin-Krukovskaya's sister, the mathematician Sofia Kovalevskaya, thought that Korvin-Krukovskaya had rejected him.
In his youth, Dostoevsky enjoyed reading Nikolai Karamzin's "History of the Russian State", which praised conservatism and Russian independence, ideas that Dostoevsky would embrace later in life. Before his arrest for participating in the Petrashevsky Circle in 1849, Dostoevsky remarked, "As far as I am concerned, nothing was ever more ridiculous than the idea of a republican government in Russia." In an 1881 edition of his "Diaries", Dostoevsky stated that the Tsar and the people should form a unity: "For the people, the tsar is not an external power, not the power of some conqueror ... but a power of all the people, an all-unifying power the people themselves desired."
While critical of serfdom, Dostoevsky was skeptical about the creation of a constitution, a concept he viewed as unrelated to Russia's history. He described it as a mere "gentleman's rule" and believed that "a constitution would simply enslave the people". He advocated social change instead, for example removal of the feudal system and a weakening of the divisions between the peasantry and the affluent classes. His ideal was a utopian, Christianized Russia where "if everyone were actively Christian, not a single social question would come up ... If they were Christians they would settle everything". He thought democracy and oligarchy were poor systems; of France he wrote, "the oligarchs are only concerned with the interest of the wealthy; the democrats, only with the interest of the poor; but the interests of society, the interest of all and the future of France as a whole—no one there bothers about these things." He maintained that political parties ultimately led to social discord. In the 1860s, he discovered "Pochvennichestvo", a movement similar to Slavophilism in that it rejected Europe's culture and contemporary philosophical movements, such as nihilism and materialism. "Pochvennichestvo" differed from Slavophilism in aiming to establish, not an isolated Russia, but a more open state modelled on the Russia of Peter the Great.
In his incomplete article "Socialism and Christianity", Dostoevsky claimed that civilisation ("the second stage in human history") had become degraded, and that it was moving towards liberalism and losing its faith in God. He asserted that the traditional concept of Christianity should be recovered. He thought that contemporary western Europe had "rejected the single formula for their salvation that came from God and was proclaimed through revelation, 'Thou shalt love thy neighbour as thyself', and replaced it with practical conclusions such as, "Chacun pour soi et Dieu pour tous" [Every man for himself and God for all], or "scientific" slogans like 'the struggle for survival'. He considered this crisis to be the consequence of the collision between communal and individual interests, brought about by a decline in religious and moral principles.
Dostoevsky distinguished three "enormous world ideas" prevalent in his time: Roman Catholicism, Protestantism and Russian Orthodoxy. He claimed that Catholicism had continued the tradition of Imperial Rome and had thus become anti-Christian and proto-socialist, inasmuch as the Church's interest in political and mundane affairs led it to abandon the idea of Christ. For Dostoevsky, socialism was "the latest incarnation of the Catholic idea" and its "natural ally". He found Protestantism self-contradictory and claimed that it would ultimately lose power and spirituality. He deemed Russian Orthodoxy to be the ideal form of Christianity.
For all that, Dostoevsky is not simple to place politically: as a Christian, he rejected atheistic socialism; as a traditionalist, he rejected the destruction of institutions; and, as a pacifist, he rejected any violent method or upheaval, whether led by progressives or reactionaries. He supported private property and business rights, and did not agree with many criticisms of the free market from the socialist utopians of his time.
During the Russo-Turkish War, Dostoevsky asserted that war might be necessary if salvation were to be granted. He wanted the Muslim Ottoman Empire eliminated and the Christian Byzantine Empire restored, and he hoped for the liberation of Balkan Slavs and their unification with the Russian Empire.
Jewish characters in Dostoevsky's works have been described as displaying negative stereotypes. In an 1877 letter to Arkady Kovner, a Jew who had accused him of antisemitism, Dostoevsky replied with the following: "I am not an enemy of the Jews at all and never have been. But as you say, its 40-century existence proves that this tribe has exceptional vitality, which would not help, during the course of its history, taking the form of various Status in Statu ... how can they fail to find themselves, even if only partially, at variance with the indigenous population – the Russian tribe?"
Dostoevsky held negative views of the Ottoman Turks, dedicating multiple pages to them in his "Writer's Diary" and professing the need to have no pity for Turks at war, no regrets in killing them, and no qualms about depopulating Istanbul of its Turkish population and shipping it off to Asia.
Dostoevsky was an Orthodox Christian, was raised in a religious family and knew the Gospel from a very young age. He was influenced by the Russian translation of Johannes Hübner's "One Hundred and Four Sacred Stories from the Old and New Testaments Selected for Children" (partly a German bible for children and partly a catechism). He attended Sunday liturgies from an early age and took part in annual pilgrimages to the St. Sergius Trinity Monastery. A deacon at the hospital gave him religious instruction. Among his most cherished childhood memories were the prayers he used to recite in front of guests and a reading from the Book of Job that impressed him while "still almost a child."
According to an officer at the military academy, Dostoevsky was profoundly religious, followed Orthodox practice, and regularly read the Gospels and Heinrich Zschokke's "Die Stunden der Andacht" ("Hours of Devotion"), which "preached a sentimental version of Christianity entirely free from dogmatic content and with a strong emphasis on giving Christian love a social application." This book may have prompted his later interest in Christian socialism. Through the literature of Hoffmann, Balzac, Eugène Sue and Goethe, Dostoevsky created his own belief system, similar to Russian sectarianism and the Old Belief. After his arrest, aborted execution and subsequent imprisonment, he focused intensely on the figure of Christ and on the New Testament, the only book allowed in prison. In a January 1854 letter to the woman who had sent him the New Testament, Dostoevsky wrote that he was a "child of unbelief and doubt up to this moment, and I am certain that I shall remain so to the grave." He also wrote that "even if someone were to prove to me that the truth lay outside Christ, I should choose to remain with Christ rather than with the truth."
In Semipalatinsk, Dostoevsky revived his faith by looking frequently at the stars. Wrangel said that he was "rather pious, but did not often go to church, and disliked priests, especially the Siberian ones. But he spoke about Christ ecstatically." Both planned to translate Hegel's works and Carus' "Psyche". Two pilgrimages and two works by Dmitri Rostovsky, an archbishop who influenced Ukrainian and Russian literature by composing groundbreaking religious plays, strengthened his beliefs. Through his visits to western Europe and discussions with Herzen, Grigoriev, and Strakhov, Dostoevsky discovered the "Pochvennichestvo" movement and the theory that the Catholic Church had adopted the principles of rationalism, legalism, materialism, and individualism from ancient Rome and had passed on its philosophy to Protestantism and consequently to atheistic socialism.
Dostoevsky's canon includes novels, novellas, novelettes, short stories, essays, pamphlets, limericks, epigrams and poems. He wrote more than 700 letters, a dozen of which are lost.
Dostoevsky expressed religious, psychological and philosophical ideas in his writings. His works explore such themes as suicide, poverty, human manipulation, and morality. Psychological themes include dreaming, first seen in "White Nights", and the father-son relationship, beginning in "The Adolescent". Most of his works demonstrate a vision of the chaotic sociopolitical structure of contemporary Russia. His early works viewed society (for example, the differences between poor and rich) through the lens of literary realism and naturalism. The influences of other writers, particularly evident in his early works, led to accusations of plagiarism, but his style gradually became more individual. After his release from prison, Dostoevsky incorporated religious themes, especially those of Russian Orthodoxy, into his writing. Elements of gothic fiction, romanticism, and satire are observable in some of his books. He frequently used autobiographical or semi-autobiographical details.
An important stylistic element in Dostoevsky's writing is polyphony, the simultaneous presence of multiple narrative voices and perspectives. Polyphony is a literary concept, analogous with musical polyphony, developed by Mikhail Bakhtin on the basis of his analyses of Dostoevsky's works. Kornelije Kvas wrote that Bakhtin's theory of "the polyphonic novel and Dostoevsky's dialogicness of narration postulates the non-existence of the 'final' word, which is why the thoughts, emotions and experiences of the world of the narrator and his/her characters are reflected through the words of another, with which they can never fully blend."
Dostoevsky is regarded as one of the greatest and most influential novelists of the Golden Age of Russian literature. Leo Tolstoy admired Dostoevsky's works and considered his novels magnificent (and, conversely, Dostoevsky also admired Tolstoy). Albert Einstein put him above the mathematician Carl Friedrich Gauss, calling him a "great religious writer" who explores "the mystery of spiritual existence". Friedrich Nietzsche at one point called Dostoevsky "the only psychologist ... from whom I had something to learn; he ranks among the most beautiful strokes of fortune in my life." Hermann Hesse enjoyed Dostoevsky's work and cautioned that to read him is like a "glimpse into the havoc". The Norwegian novelist Knut Hamsun wrote that "no one has analyzed the complicated human structure as Dostoyevsky. His psychologic sense is overwhelming and visionary." The Russian literary theorist Mikhail Bakhtin's analysis of Dostoevsky came to be at the foundation of his theory of the novel. Bakhtin argued that Dostoevsky's use of multiple voices was a major advancement in the development of the novel as a genre.
In his posthumous collection of sketches "A Moveable Feast", Ernest Hemingway stated that in Dostoevsky "there were things believable and not to be believed, but some so true that they changed you as you read them; frailty and madness, wickedness and saintliness, and the insanity of gambling were there to know". James Joyce praised Dostoevsky's prose: "... he is the man more than any other who has created modern prose, and intensified it to its present-day pitch. It was his explosive power which shattered the Victorian novel with its simpering maidens and ordered commonplaces; books which were without imagination or violence." In her essay "The Russian Point of View", Virginia Woolf said, "Out of Shakespeare there is no more exciting reading". Franz Kafka called Dostoevsky his "blood-relative" and was heavily influenced by his works, particularly "The Brothers Karamazov" and "Crime and Punishment", both of which profoundly influenced "The Trial". Sigmund Freud called "The Brothers Karamazov" "the most magnificent novel ever written". Modern cultural movements such as the surrealists, the existentialists and the Beats cite Dostoevsky as an influence, and he is cited as the forerunner of Russian symbolism, existentialism, expressionism and psychoanalysis. In her essay, "What Is Romanticism?," Russian-American author Ayn Rand wrote that Dostoevsky was one of the two greatest novelists (the other being Victor Hugo).
In 1956 an olive-green postage stamp dedicated to Dostoevsky was released in the Soviet Union, with a print run of 1,000 copies. A Dostoevsky Museum was opened on 12 November 1971 in the apartment where he wrote his first and final novels. A crater on Mercury was named after him in 1979, and a minor planet discovered in 1981 by Lyudmila Karachkina was named 3453 Dostoevsky. Music critic and broadcaster Artemy Troitsky has hosted the radio show "FM Достоевский" (FM Dostoevsky) since 1997. J.M. Coetzee featured Dostoevsky as the protagonist in his 1997 novel "The Master of Petersburg". The famous Malayalam novel "Oru Sankeerthanam Pole" by Perumbadavam Sreedharan deals with the life of Dostoevsky and his love affair with Anna. Viewers of the TV show "Name of Russia" voted him the ninth greatest Russian of all time, behind chemist Dmitry Mendeleev and ahead of ruler Ivan IV. An Eagle Award-winning TV series directed by Vladimir Khotinenko about Dostoevsky's life was screened in 2011.
Numerous memorials were inaugurated in cities and regions such as Moscow, Saint Petersburg, Novosibirsk, Omsk, Semipalatinsk, Kusnetsk, Darovoye, Staraya Russa, Lyublino, Tallinn, Dresden, Baden-Baden and Wiesbaden. The Dostoyevskaya metro station in Saint Petersburg was opened on 30 December 1991, and the station of the same name in Moscow was opened on 19 June 2010, the 75th anniversary of the Moscow Metro. The Moscow station is decorated with murals by artist Ivan Nikolaev depicting scenes from Dostoevsky's works, such as controversial suicides.
Dostoevsky's work did not always gain a positive reception. Some critics, such as Nikolay Dobrolyubov, Ivan Bunin and Vladimir Nabokov, viewed his writing as excessively psychological and philosophical rather than artistic. Others found fault with chaotic and disorganised plots, and others, like Turgenev, objected to "excessive psychologising" and too-detailed naturalism. His style was deemed "prolix, repetitious and lacking in polish, balance, restraint and good taste". Saltykov-Shchedrin, Nikolay Mikhaylovsky and others criticised his puppet-like characters, most prominently in "The Idiot", "Demons" ("The Possessed", "The Devils") and "The Brothers Karamazov". These characters were compared to those of Hoffmann, an author whom Dostoevsky admired.
Basing his estimation on stated criteria of enduring art and individual genius, Nabokov judges Dostoevsky "not a great writer, but rather a mediocre one—with flashes of excellent humour but, alas, with wastelands of literary platitudes in between". Nabokov complains that the novels are peopled by "neurotics and lunatics" and states that Dostoevsky's characters do not develop: "We get them all complete at the beginning of the tale and so they remain." He finds the novels full of contrived "surprises and complications of plot", which are effective when first read, but on second reading, without the shock and benefit of these surprises, appear loaded with "glorified cliché". The Scottish poet and critic Edwin Muir, however, addressed this criticism, noting that "regarding the 'oddness' of Dostoevsky's characters, it has been pointed out that they perhaps only seem 'pathological', whereas in reality they are 'only visualized more clearly than any figures in imaginative literature'."
Dostoevsky's books have been translated into more than 170 languages. The German translator Wilhelm Wolfsohn published one of the first translations, parts of "Poor Folk", in an 1846–1847 magazine, and a French translation followed. French, German and Italian translations usually came directly from the original, while English translations were second-hand and of poor quality. The first English translations were by Marie von Thilo in 1881, but the first highly regarded ones were produced between 1912 and 1920 by Constance Garnett. Her flowing and easy translations helped popularise Dostoevsky's novels in anglophone countries, and Bakhtin's "Problems of Dostoevsky's Creative Art" (1929) (republished and revised as "Problems of Dostoevsky's Poetics" in 1963) provided further understanding of his style.
Dostoevsky's works have been interpreted on film and stage in many different countries. Princess Varvara Dmitrevna Obolenskaya was among the first to propose staging "Crime and Punishment". Dostoevsky did not refuse permission, but he advised against it, as he believed that "each art corresponds to a series of poetic thoughts, so that one idea cannot be expressed in another non-corresponding form". His extensive explanations in opposition to the transposition of his works into other media were groundbreaking in fidelity criticism. He thought that just one episode should be dramatised, or an idea should be taken and incorporated into a separate plot. According to critic Alexander Burry, some of the most effective adaptations are Sergei Prokofiev's opera "The Gambler", Leoš Janáček's opera "From the House of the Dead", Akira Kurosawa's film "The Idiot" and Andrzej Wajda's film "The Possessed".
After the 1917 Russian Revolution, passages of Dostoevsky's books were sometimes shortened, although only two books were censored: "Demons" and "Diary of a Writer". His philosophy, particularly in "Demons", was deemed anti-capitalist but also anti-Communist and reactionary. According to historian Boris Ilizarov, Stalin read Dostoevsky's "The Brothers Karamazov" several times.
Dostoevsky's works of fiction include 15 novels and novellas, 17 short stories, and 5 translations. Many of his longer novels were first published in serialised form in literary magazines and journals. The years given below indicate the year in which the novel's final part or first complete book edition was published. In English many of his novels and stories are known by different titles.
"Poor Folk" is an epistolary novel that describes the relationship between the small, elderly official Makar Devushkin and the young seamstress Varvara Dobroselova, remote relatives who write letters to each other. Makar's tender, sentimental adoration for Varvara and her confident, warm friendship for him explain their evident preference for a simple life, although it keeps them in humiliating poverty. An unscrupulous merchant finds the inexperienced girl and hires her as his housewife and guarantor. He sends her to a manor somewhere on a steppe, while Makar alleviates his misery and pain with alcohol.
The story focuses on poor people who struggle with their lack of self-esteem. Their misery leads to the loss of their inner freedom, to dependence on the social authorities, and to the extinction of their individuality. Dostoevsky shows how poverty and dependence are indissolubly aligned with deflection and deformation of self-esteem, combining inward and outward suffering.
"Notes from Underground" is split into two stylistically different parts, the first essay-like, the second in narrative style. The protagonist and first-person narrator is an unnamed 40-year-old civil servant known as The Underground Man. The only known facts about his situation are that he has quit the service, lives in a basement flat on the outskirts of Saint Petersburg and finances his livelihood from a modest inheritance.
The first part is a record of his thoughts about society and his character. He describes himself as vicious, squalid and ugly; the chief focuses of his polemic are the "modern human" and his vision of the world, which he attacks severely and cynically, and towards which he develops aggression and vengefulness. He considers his own decline natural and necessary. Although he emphasises that he does not intend to publish his notes for the public, the narrator appeals repeatedly to an ill-described audience, whose questions he tries to address.
In the second part he describes scenes from his life that are responsible for his failure in personal and professional life and in his love life. He tells of meeting old school friends, who are in secure positions and treat him with condescension. His aggression turns inward on to himself and he tries to humiliate himself further. He presents himself as a possible saviour to the poor prostitute Lisa, advising her to reject self-reproach when she looks to him for hope. Dostoevsky added a short commentary saying that although the storyline and characters are fictional, such things were inevitable in contemporary society.
The Underground Man was very influential on philosophers, and his alienation from mainstream society influenced modernist literature.
"Crime and Punishment" describes the fictional Rodion Raskolnikov's life, from the murder of a pawnbroker and her sister, through spiritual regeneration with the help of Sonya (a "hooker with a heart of gold"), to his sentence in Siberia. Strakhov liked the novel, remarking that "Only "Crime and Punishment" was read in 1866" and that Dostoevsky had managed to portray a Russian person aptly and realistically. Otherwise, it received a mixed reception from critics, with most of the negative responses coming from nihilists. Grigory Eliseev of the radical magazine "The Contemporary" called the novel a "fantasy according to which the entire student body is accused without exception of attempting murder and robbery".
The novel's protagonist, the 26-year-old Prince Myshkin, returns to Russia after several years at a Swiss sanatorium. Scorned by Saint Petersburg society for his trusting nature and naivety, he finds himself at the center of a struggle between a beautiful kept woman, Nastasya, and a jealous but pretty young girl, Aglaya, both of whom win his affection. Unfortunately, Myshkin's goodness precipitates disaster, leaving the impression that, in a world obsessed with money, power and sexual conquest, a sanatorium may be the only place for a saint. Myshkin is the personification of a "positively beautiful man", namely Christ. Coming "from above" (the Swiss mountains), he physically resembles common depictions of Jesus Christ: slightly larger than average, with thick, blond hair, sunken cheeks and a thin, almost entirely white goatee. Like Christ, Myshkin is a teacher, confessor and mysterious outsider. Passions such as greed and jealousy are alien to him. In contrast to those around him, he puts no value on money and power. He feels compassion and love, sincerely, without judgment. His relationship with the immoral Nastasya is obviously inspired by Christ's relationship with Mary Magdalene. He is called "Idiot" because of such differences.
The story of "Demons" (sometimes also titled "The Possessed" or "The Devils") is based largely on the murder of Ivan Ivanov by "People's Vengeance" members in 1869. It was influenced by the Book of Revelation. The secondary characters, Pyotr and Stepan Verkhovensky, are based on Sergei Nechayev and Timofey Granovsky respectively. The novel takes place in a provincial Russian setting, primarily on the estates of Stepan Verkhovensky and Varvara Stavrogina. Stepan's son Pyotr is an aspiring revolutionary conspirator who attempts to organise revolutionaries in the area. He considers Varvara's son Nikolai central to his plot, because he thinks that Nikolai lacks sympathy for mankind. Pyotr gathers conspirators such as the philosophising Shigalyov, the suicidal Kirillov and the former military man Virginsky. He schemes to consolidate their loyalty to him and each other by murdering Ivan Shatov, a fellow conspirator. Pyotr plans to have Kirillov, who is committed to killing himself, take credit for the murder in his suicide note. Kirillov complies and Pyotr murders Shatov, but his scheme goes awry. Pyotr escapes, but the remainder of his aspiring revolutionary crew is arrested. In the denouement, Nikolai kills himself, tortured by his own misdeeds.
At nearly 800 pages, "The Brothers Karamazov" is Dostoevsky's largest work. It received both critical and popular acclaim and is often cited as his "magnum opus". Composed of 12 "books", the novel tells the story of the novice Alyosha Karamazov, the non-believer Ivan Karamazov and the soldier Dmitri Karamazov. The first books introduce the Karamazovs. The main plot is the death of their father Fyodor, while other parts are philosophical and religious arguments by Father Zosima to Alyosha.
The most famous chapter is "The Grand Inquisitor", a parable told by Ivan to Alyosha about Christ's Second Coming in Seville, Spain, in which Christ is imprisoned by a ninety-year-old Catholic Grand Inquisitor. Instead of answering him, Christ gives him a kiss, and the Inquisitor subsequently releases him, telling him not to return. The tale was misunderstood as a defence of the Inquisitor, but some, such as Romano Guardini, have argued that the Christ of the parable was Ivan's own interpretation of Christ, "the idealistic product of the unbelief". Ivan, however, has stated that he is against Christ. Most contemporary critics and scholars agree that Dostoevsky is attacking Roman Catholicism and socialist atheism, both represented by the Inquisitor. He warns the readers against a terrible revelation in the future, referring to the Donation of Pepin around 750 and the Spanish Inquisition in the 16th century, which in his view corrupted true Christianity.
Faith healing
Faith healing is the practice of prayer and gestures (such as laying on of hands) that are believed by some to elicit divine intervention in spiritual and physical healing, especially in Christian practice. Believers assert that the healing of disease and disability can be brought about by religious faith through prayer or other rituals that, according to adherents, can stimulate a divine presence and power. Religious belief in divine intervention does not depend on empirical evidence that faith healing achieves an evidence-based outcome.
Claims that "a myriad of techniques" such as prayer, divine intervention, or the ministrations of an individual healer can cure illness have been popular throughout history. There have been claims that faith can cure blindness, deafness, cancer, HIV/AIDS, developmental disorders, anemia, arthritis, corns, defective speech, multiple sclerosis, skin rashes, total body paralysis, and various injuries. Recoveries have been attributed to many techniques commonly classified as faith healing, which can involve prayer, a visit to a religious shrine, or simply a strong belief in a supreme being.
Many people interpret the Bible, especially the New Testament, as teaching belief in, and the practice of, faith healing. According to a 2004 "Newsweek" poll, 72 percent of Americans said they believe that praying to God can cure someone, even if science says the person has an incurable disease. Unlike faith healing, advocates of spiritual healing make no attempt to seek divine intervention, instead believing in divine energy. The increased interest in alternative medicine at the end of the 20th century has given rise to a parallel interest among sociologists in the relationship of religion to health.
Virtually all scientists and philosophers dismiss faith healing as pseudoscience. Faith healing can be classified as a spiritual, supernatural, or paranormal topic, and, in some cases, belief in faith healing can be classified as magical thinking. The American Cancer Society states "available scientific evidence does not support claims that faith healing can actually cure physical ailments". "Death, disability, and other unwanted outcomes have occurred when faith healing was elected instead of medical care for serious injuries or illnesses." When parents have practiced faith healing rather than medical care, many children have died that otherwise would have been expected to live. Similar results are found in adults.
Regarded as a Christian belief that God heals people through the power of the Holy Spirit, faith healing often involves the laying on of hands. It is also called supernatural healing, divine healing, and miracle healing, among other things. Healing in the Bible is often associated with the ministry of specific individuals including Elijah, Jesus and Paul.
Christian physician Reginald B. Cherry views faith healing as a pathway of healing in which God uses both the natural and the supernatural to heal. Being healed has been described as a privilege of accepting Christ's redemption on the cross. Pentecostal writer Wilfred Graves Jr. views the healing of the body as a physical expression of salvation. The Gospel of Matthew, after describing Jesus exorcising at sunset and healing all of the sick who were brought to him, presents these miracles as a fulfillment of prophecy: "He took up our infirmities and carried our diseases".
Even those Christian writers who believe in faith healing do not all believe that one's faith presently brings about the desired healing. "[Y]our faith does not effect your healing now. When you are healed rests entirely on what the sovereign purposes of the Healer are." Larry Keefauver cautions against allowing enthusiasm for faith healing to stir up false hopes. "Just believing hard enough, long enough or strong enough will not strengthen you or prompt your healing. Doing mental gymnastics to 'hold on to your miracle' will not cause your healing to manifest now." Those who actively lay hands on others and pray with them to be healed are usually aware that healing may not always follow immediately. Proponents of faith healing say it may come later, and it may not come in this life. "The truth is that your healing may manifest in eternity, not in time".
Parts of the four gospels in the New Testament say that Jesus cured physical ailments well outside the capacity of first-century medicine. Jesus' healing acts are considered miraculous because their results were impossible or statistically improbable. One example is the case of "a woman who had had a discharge of blood for twelve years, and who had suffered much under many physicians, and had spent all that she had, and was not better but rather grew worse". After healing her, Jesus tells her "Daughter, your faith has made you well. Go in peace! Be cured from your illness". On at least two other occasions Jesus credited the sufferer's faith as the means of being healed.
Jesus endorsed the use of the medical assistance of the time (medicines of oil and wine) when he told the parable of the Good Samaritan (Luke 10:25–37), who "bound up [an injured man's] wounds, pouring on oil and wine" (verse 34) as a physician would. Jesus then told the doubting teacher of the law (who had elicited this parable by his self-justifying question, "And who is my neighbor?" in verse 29) to "go, and do likewise" in loving others with whom he would never ordinarily associate (verse 37).
The healing in the gospels is referred to as a "sign" to prove Jesus' divinity and to foster belief in him as the Christ. However, when asked for other types of miracles, Jesus refused some but granted others depending on the motive of the request. Some theologians understand that Jesus healed "all" who were present every single time; sometimes he first determined whether the sufferer had faith that he would heal them. Four of the seven miraculous signs performed in the Fourth Gospel that indicated he was sent from God were acts of healing or resurrection: healing the Capernaum official's son, healing a paralytic by the pool in Bethsaida, healing a man born blind, and raising Lazarus of Bethany from the dead.
Jesus told his followers to heal the sick and stated that signs such as healing are evidence of faith. Jesus also told his followers to "cure sick people, raise up dead persons, make lepers clean, expel demons. You received free, give free".
Jesus sternly ordered many who received healing from him: "Do not tell anyone!" Jesus did not approve of anyone asking for a sign just for the spectacle of it, describing such as coming from a "wicked and adulterous generation".
The apostle Paul believed healing is one of the special gifts of the Holy Spirit, and that the possibility exists that certain persons may possess this gift to an extraordinarily high degree.
In the New Testament Epistle of James, the faithful are told that to be healed, those who are sick should call upon the elders of the church to pray over [them] and anoint [them] with oil in the name of the Lord.
The New Testament says that during Jesus' ministry and after his Resurrection, the apostles healed the sick and cast out demons, made lame men walk, raised the dead and performed other miracles. Apostles were holy men who had direct access to God and could channel his power to help and heal people. For example, Saint Peter healed a crippled man.
Jesus used miracles to convince people that he was inaugurating the Messianic Age, as in Mt 12.28. Scholars have described Jesus' miracles as establishing the kingdom during his lifetime.
The Roman Catholic Church recognizes two "not mutually exclusive" kinds of healing, one justified by science and one justified by faith.
In 2000, the Congregation for the Doctrine of the Faith issued "Instruction on prayers for healing" with specific norms about prayer meetings for obtaining healing, which presents the Catholic Church's doctrines of sickness and healing.
It accepts "that there may be means of natural healing that have not yet been understood or recognized by science", but it rejects superstitious practices which are neither compatible with Christian teaching nor compatible with scientific evidence.
Faith healing is reported by Catholics as the result of intercessory prayer to a saint or to a person with the gift of healing. According to "U.S. Catholic" magazine, "Even in this skeptical, postmodern, scientific age—miracles really are possible." Three-fourths of American Catholics say they pray for miracles.
According to John Cavadini, when healing is granted, "The miracle is not primarily for the person healed, but for all people, as a sign of God's work in the ultimate healing called 'salvation', or a sign of the kingdom that is coming." Otherwise, some might view their own healing as a sign that they are particularly worthy or holy, and that others do not deserve it.
The Catholic Church has a special Congregation dedicated to the careful investigation of the validity of alleged miracles attributed to prospective saints. Pope Francis tightened the rules on money and miracles in the canonization process. Since Catholic Christians believe the lives of canonized saints in the Church will reflect Christ's, many have come to expect healing miracles. While the popular conception of a miracle can be wide-ranging, the Catholic Church has a specific definition for the kind of miracle formally recognized in a canonization process.
According to "Catholic Encyclopedia", it is often said that cures at shrines and during Christian pilgrimages are mainly due to psychotherapy — partly to confident trust in Divine providence, and partly to the strong expectancy of cure that comes over suggestible persons at these times and places.
Among the best-known accounts by Catholics of faith healings are those attributed to the miraculous intercession of the apparition of the Blessed Virgin Mary known as Our Lady of Lourdes at the Sanctuary of Our Lady of Lourdes in France and the remissions of life-threatening disease claimed by those who have applied for aid to Saint Jude, who is known as the "patron saint of lost causes".
Catholic medics have asserted that there have been 67 miracles and 7,000 unexplainable medical cures at Lourdes since 1858. A 1908 book stated that these cures were subjected to intense medical scrutiny and were recognized as authentic spiritual cures only after a commission of doctors and scientists, called the Lourdes Medical Bureau, had ruled out any physical mechanism for the patient's recovery.
In some Pentecostal and Charismatic Evangelical churches, a special place is reserved for faith healings with laying on of hands during worship services or evangelization campaigns. Faith healing or divine healing is considered to be an inheritance of Jesus acquired by his death and resurrection. Biblicism ensures that the miracles and healings described in the Bible are still relevant and may be present in the life of the believer.
At the beginning of the 20th century, the new Pentecostal movement drew participants from the Holiness movement and other movements in America that already believed in divine healing. By the 1930s, several faith healers drew large crowds and established worldwide followings.
The first Pentecostals in the modern sense appeared in Topeka, Kansas, in a Bible school conducted by Charles Fox Parham, a holiness teacher and former Methodist pastor. Pentecostalism achieved worldwide attention in 1906 through the Azusa Street Revival in Los Angeles led by William Joseph Seymour.
Smith Wigglesworth was also a well-known figure in the early part of the 20th century. A former English plumber turned evangelist who lived simply and read nothing but the Bible from the time his wife taught him to read, Wigglesworth traveled around the world preaching about Jesus and performing faith healings. Wigglesworth claimed to raise several people from the dead in Jesus' name in his meetings.
During the 1920s and 1930s, Aimee Semple McPherson was a controversial faith healer whose popularity grew during the Great Depression. Subsequently, William M. Branham has been credited as the initiator of the post-World War II healing revivals. The healing revival he began led many to emulate his style and spawned a generation of faith healers; because of this, Branham has been recognized as the "father of modern faith healers". According to writer and researcher Patsy Sims, "the power of a Branham service and his stage presence remains a legend unparalleled in the history of the Charismatic movement". By the late 1940s, Oral Roberts, who was associated with and promoted by Branham's Voice of Healing magazine, had also become well known, and he continued with faith healing until the 1980s. Roberts nonetheless rejected the "faith healer" label in the late 1950s, stating, "I never was a faith healer and I was never raised that way. My parents believed very strongly in medical science and we have a doctor who takes care of our children when they get sick. I cannot heal anyone – God does that." A friend of Roberts was Kathryn Kuhlman, another popular faith healer, who gained fame in the 1950s and had a television program on CBS. Also in this era, Jack Coe and A. A. Allen were faith healers who traveled with large tents for large open-air crusades.
Oral Roberts's successful use of television as a medium to gain a wider audience led others to follow suit. His former pilot, Kenneth Copeland, started a healing ministry. Pat Robertson, Benny Hinn, and Peter Popoff became well-known televangelists who claimed to heal the sick. Richard Rossi is known for advertising his healing clinics through secular television and radio. Kuhlman influenced Benny Hinn, who adopted some of her techniques and wrote a book about her.
Christian Science claims that healing is possible through an understanding of the underlying spiritual perfection of God's creation. The world as humanly perceived is believed to be a distortion of spiritual reality. Christian Scientists believe that healing through prayer is possible insofar as it succeeds in correcting the distortion. Christian Scientists believe that prayer does not change the spiritual creation but gives a clearer view of it, and the result appears in the human scene as healing: the human picture adjusts to coincide more nearly with the divine reality. Prayer works through love: the recognition of God's creation as spiritual, intact, and inherently lovable.
An important point in Christian Science is that effectual prayer and the moral regeneration of one's life go hand-in-hand: that "signs and wonders are wrought in the metaphysical healing of physical disease; but these signs are only to demonstrate its divine origin, to attest the reality of the higher mission of the Christ-power to take away the sins of the world." Christian Science teaches that disease is mental, a mortal fear, a mistaken belief or conviction of the necessity and power of ill-health – an ignorance of God's power and goodness. The chapter "Prayer" in "Science and Health with Key to the Scriptures" gives a full account of healing through prayer, while the testimonies at the end of the book are written by people who believe they have been healed through spiritual understanding gained from reading the book.
The Church of Jesus Christ of Latter-day Saints (LDS) has had a long history of faith healings. Many members of the LDS Church have told their stories of healing within the LDS publication, the "Ensign". The church believes healings come most often as a result of priesthood blessings given by the laying on of hands; however, prayer often accompanied with fasting is also thought to cause healings. Healing is always attributed to be God's power. Latter-day Saints believe that the Priesthood of God, held by prophets (such as Moses) and worthy disciples of the Savior, was restored via heavenly messengers to the first prophet of this dispensation, Joseph Smith.
According to LDS doctrine, even though members may have the restored priesthood authority to heal in the name of Jesus Christ, all efforts should be made to seek the appropriate medical help. Brigham Young stated this effectively, while also noting that the ultimate outcome is still dependent on the will of God.
Konkhogin Haokip has claimed that some Muslims believe the Quran was sent not only as a revelation but also as a medicine, and that it heals physical and spiritual ailments.
Some critics of Scientology have referred to some of its practices as being similar to faith healing, based on claims made by L. Ron Hubbard in "Dianetics" and other writings.
Nearly all scientists dismiss faith healing as pseudoscience. Some opponents of the pseudoscience label assert that faith healing makes no scientific claims and thus should be treated as a matter of faith that is not testable by science. Critics reply that claims of medical cures should be tested scientifically because, although faith in the supernatural is not in itself usually considered to be the purview of science, claims of reproducible effects are nevertheless subject to scientific investigation.
Scientists and doctors generally find that faith healing lacks biological plausibility or epistemic warrant, which is one of the criteria used to judge whether clinical research is ethical and financially justified. A Cochrane review of intercessory prayer found "although some of the results of individual studies suggest a positive effect of intercessory prayer, the majority do not". The authors concluded: "We are not convinced that further trials of this intervention should be undertaken and would prefer to see any resources available for such a trial used to investigate other questions in health care".
A review in 1954 investigated spiritual healing, therapeutic touch and faith healing. Of the hundred cases reviewed, none revealed that the healer's intervention alone resulted in any improvement or cure of a measurable organic disability.
In addition, at least one study has suggested that adult Christian Scientists, who generally use prayer rather than medical care, have a higher death rate than other people of the same age.
The Global Medical Research Institute (GMRI) was created in 2012 to start collecting medical records of patients who claim to have received a supernatural healing miracle as a result of Christian Spiritual Healing practices. The organization has a panel of medical doctors who review the patient's records, looking at entries prior to the claimed miracle and entries after the miracle was claimed to have taken place. "The overall goal of GMRI is to promote an empirically grounded understanding of the physiological, emotional, and sociological effects of Christian Spiritual Healing practices". This is accomplished by applying the same rigorous standards used in other forms of medical and scientific research.
A 2011 article in "New Scientist" magazine cited positive physical results from meditation, positive thinking, and spiritual faith.
Skeptics of faith healing offer primarily two explanations for anecdotes of cures or improvements, relieving any need to appeal to the supernatural. The first is "post hoc ergo propter hoc", meaning that a genuine improvement or spontaneous remission may have been experienced coincidental with but independent from anything the faith healer or patient did or said. These patients would have improved just as well even had they done nothing. The second is the placebo effect, through which a person may experience genuine pain relief and other symptomatic alleviation. In this case, the patient genuinely has been helped by the faith healer or faith-based remedy, not through any mysterious or numinous function, but by the power of their own belief that they would be healed. In both cases the patient may experience a real reduction in symptoms, though in neither case has anything miraculous or inexplicable occurred. Both cases, however, are strictly limited to the body's natural abilities.
According to the American Cancer Society:
The American Medical Association considers that prayer as therapy should not be a medically reimbursable or deductible expense.
Belgian philosopher and skeptic Etienne Vermeersch coined the term "Lourdes effect" to criticize the claimed miraculous cures as products of magical thinking and the placebo effect, noting that there are no documented cases in which, for example, a severed arm has been reattached through faith healing at Lourdes. Vermeersch identifies the ambiguous and equivocal nature of the cures as a key feature of such miraculous events.
Reliance on faith healing to the exclusion of other forms of treatment can have a public health impact when it reduces or eliminates access to modern medical techniques. This is evident in both higher mortality rates for children and in reduced life expectancy for adults. Critics have also made note of serious injury that has resulted from falsely labelled "healings", where patients erroneously consider themselves cured and cease or withdraw from treatment. For example, at least six people with HIV have died after their church told them they had been healed through faith healing and could stop taking their medications. It is the stated position of the AMA that "prayer as therapy should not delay access to traditional medical care". Choosing faith healing while rejecting modern medicine can and does cause people to die needlessly.
Christian theological criticism of faith healing broadly falls into two distinct levels of disagreement.
The first is widely termed the "open-but-cautious" view of the miraculous in the church today. This term is deliberately used by Robert L. Saucy in the book "Are Miraculous Gifts for Today?". Don Carson is another example of a Christian teacher who has put forward what has been described as an "open-but-cautious" view. In dealing with the claims of Warfield, particularly "Warfield's insistence that miracles ceased", Carson asserts, "But this argument stands up only if such miraculous gifts are theologically tied exclusively to a role of attestation; and that is demonstrably not so." However, while affirming that he does not expect healing to happen today, Carson is critical of aspects of the faith healing movement, "Another issue is that of immense abuses in healing practises... The most common form of abuse is the view that since all illness is directly or indirectly attributable to the devil and his works, and since Christ by his cross has defeated the devil, and by his Spirit has given us the power to overcome him, healing is the inheritance right of all true Christians who call upon the Lord with genuine faith."
The second level of theological disagreement with Christian faith healing goes further. Commonly referred to as cessationism, its adherents either claim that faith healing will not happen today at all, or may happen today, but it would be unusual. Richard Gaffin argues for a form of cessationism in an essay alongside Saucy's in the book "Are Miraculous Gifts for Today"? In his book "Perspectives on Pentecost" Gaffin states of healing and related gifts that "the conclusion to be drawn is that as listed in 1 Corinthians 12(vv. 9f., 29f.) and encountered throughout the narrative in Acts, these gifts, particularly when exercised regularly by a given individual, are part of the foundational structure of the church... and so have passed out of the life of the church." Gaffin qualifies this, however, by saying "At the same time, however, the sovereign will and power of God today to heal the sick, particularly in response to prayer (see e.g. James 5:14,15), ought to be acknowledged and insisted on."
Skeptics of faith healers point to fraudulent practices either in the healings themselves (such as plants in the audience with fake illnesses), or concurrent with the healing work supposedly taking place and claim that faith healing is a quack practice in which the "healers" use well known non-supernatural illusions to exploit credulous people in order to obtain their gratitude, confidence and money. James Randi's "The Faith Healers" investigates Christian evangelists such as Peter Popoff, who claimed to heal sick people on stage in front of an audience. Popoff pretended to know private details about participants' lives by receiving radio transmissions from his wife who was off-stage and had gathered information from audience members prior to the show. According to this book, many of the leading modern evangelistic healers have engaged in deception and fraud. The book also questioned how faith healers use funds that were sent to them for specific purposes. Physicist Robert L. Park and doctor and consumer advocate Stephen Barrett have called into question the ethics of some exorbitant fees.
There have also been legal controversies. For example, in 1955 at a Jack Coe revival service in Miami, Florida, Coe told the parents of a three-year-old boy that he healed their son who had polio. Coe then told the parents to remove the boy's leg braces. However, their son was not cured of polio and removing the braces left the boy in constant pain. As a result, through the efforts of Joseph L. Lewis, Coe was arrested and charged on February 6, 1956 with practicing medicine without a license, a felony in the state of Florida. A Florida Justice of the Peace dismissed the case on grounds that Florida exempts divine healing from the law. Later that year Coe was diagnosed with bulbar polio, and died a few weeks later at Dallas' Parkland Hospital on December 17, 1956.
TV personality Derren Brown produced a show on faith healing entitled "Miracles for Sale", which arguably exposed faith healing as a scam. In the show, Brown trained a scuba-diving instructor picked from the general public to pose as a faith healer and took him to Texas, where he successfully delivered a faith-healing session to a congregation.
The 1974 Child Abuse Prevention and Treatment Act (CAPTA) required states to grant religious exemptions to child neglect and child abuse laws in order to receive federal money. The CAPTA amendments of 1996 state:
Thirty-one states have child-abuse religious exemptions. These are Alabama, Alaska, California, Colorado, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, Ohio, Oklahoma, Oregon, Pennsylvania, Vermont, Virginia, and Wyoming. In six of these states, Arkansas, Idaho, Iowa, Louisiana, Ohio and Virginia, the exemptions extend to murder and manslaughter. Of these, Idaho is the only state accused of having a large number of deaths due to the legislation in recent times. In February 2015, controversy was sparked in Idaho over a bill believed to further reinforce parental rights to deny their children medical care.
Parents have been convicted of child abuse and felony reckless negligent homicide, and found responsible for killing their children, when they withheld lifesaving medical care and chose prayer alone.
Fritz Lang
Friedrich Christian Anton "Fritz" Lang (December 5, 1890 – August 2, 1976) was an Austrian-born German-American filmmaker, screenwriter, and occasional film producer and actor. One of the best-known "émigrés" from Germany's school of Expressionism, he was dubbed the "Master of Darkness" by the British Film Institute.
Lang's most famous films include the groundbreaking futuristic "Metropolis" (1927) and the also influential "M" (1931), a film noir precursor that he made before he moved to the United States. His other notable films include "Dr. Mabuse the Gambler" (1922), "Die Nibelungen" (1924), "Fury" (1936), "You Only Live Once" (1937), "Hangmen Also Die!" (1943), "The Woman in the Window" (1944) and "The Big Heat" (1953).
Lang was born in Vienna, as the second son of Anton Lang (1860–1940), an architect and construction company manager, and his wife Pauline "Paula" Lang ( Schlesinger; 1864–1920). He was baptized on December 28, 1890, at the Schottenkirche in Vienna.
Lang's parents were of Moravian descent and practising Roman Catholics. They (his mother, born Jewish, converted to Roman Catholicism before Fritz's birth) took their religion seriously and were dedicated to raising Fritz as a Catholic. Lang frequently had Catholic-influenced themes in his films. Late in life, he described himself as "a born Catholic and very puritan". Although an atheist, Lang believed that religion was important for teaching ethics.
After finishing school, Lang briefly attended the Technical University of Vienna, where he studied civil engineering and eventually switched to art. In 1910 he left Vienna to see the world, travelling throughout Europe and Africa and later Asia and the Pacific area. In 1913, he studied painting in Paris.
At the outbreak of World War I, Lang returned to Vienna and volunteered for military service in the Austrian army and fought in Russia and Romania, where he was wounded three times. While recovering from his injuries and shell shock in 1916, he wrote some scenarios and ideas for films. He was discharged from the army with the rank of lieutenant in 1918 and did some acting in the Viennese theater circuit for a short time before being hired as a writer at Decla Film, Erich Pommer's Berlin-based production company.
Lang's writing stint was brief, as he soon started to work as a director at the German film studio UFA, and later Nero-Film, just as the Expressionist movement was building. In this first phase of his career, Lang alternated between films such as "Der Müde Tod" ("The Weary Death") and popular thrillers such as "Die Spinnen" ("The Spiders"), combining popular genres with Expressionist techniques to create an unprecedented synthesis of popular entertainment with art cinema.
In 1920, Lang met his future wife, the writer Thea von Harbou. She and Lang co-wrote all of his movies from 1921 through 1933, including "Dr. Mabuse, der Spieler" ("Dr. Mabuse the Gambler"; 1922), which ran for over four hours in two parts in the original version and was the first in the Dr. Mabuse trilogy, the five-hour "Die Nibelungen" (1924), the famous 1927 film "Metropolis", and the science fiction film "Woman in the Moon" (1929). "Metropolis" went far over budget and nearly destroyed UFA, which was bought by right-wing businessman and politician Alfred Hugenberg. It was a financial flop, as were his last silent films "Spies" (1928) and "Woman in the Moon", produced by Lang's own company.
In 1931 independent producer Seymour Nebenzahl hired Lang to direct "M" for Nero-Film. His first "talking" picture, considered by many film scholars to be a masterpiece of the early sound era, "M" is a disturbing story of a child murderer (Peter Lorre in his first starring role) who is hunted down and brought to rough justice by Berlin's criminal underworld. "M" remains a powerful work; it was remade in 1951 by Joseph Losey, but this version had little impact on audiences, and has become harder to see than the original film.
During the climactic final scene in "M", Lang allegedly threw Peter Lorre down a flight of stairs in order to give more authenticity to Lorre's battered look. Lang, who was known for being hard to work with, epitomized the stereotype of the tyrannical German film director, a type embodied also by Erich von Stroheim and Otto Preminger. His wearing a monocle added to the stereotype.
In the films of his German period, Lang produced a coherent oeuvre that established the characteristics later attributed to film noir, with its recurring themes of psychological conflict, paranoia, fate and moral ambiguity.
At the end of 1932, Lang started filming "The Testament of Dr. Mabuse". Adolf Hitler came to power in January 1933, and by March 30, the new regime banned it as an incitement to public disorder. "Testament" is sometimes deemed an anti-Nazi film, as Lang had put phrases used by the Nazis into the mouth of the title character. A screening of the film was cancelled by Joseph Goebbels and the film later banned by the Reich Ministry of Public Enlightenment and Propaganda. In banning the film, Goebbels stated that the film "showed that an extremely dedicated group of people are perfectly capable of overthrowing any state with violence", and that the film posed a threat to public health and safety.
Lang was worried about the advent of the Nazi regime, partly because of his Jewish heritage, whereas his wife and screenwriter Thea von Harbou had started to sympathize with the Nazis in the early 1930s and joined the NSDAP in 1940. They soon divorced. Lang's fears would be realized following his departure from Austria, as under the Nuremberg Laws he would be identified as a part-Jew even though his mother was a converted Roman Catholic, and he was raised as such.
According to Lang, propaganda minister Joseph Goebbels called Lang to his offices to inform him, apologetically, that "The Testament of Dr. Mabuse" was being banned but that, nevertheless, he was so impressed by Lang's abilities as a filmmaker (especially "Metropolis") that he offered Lang the position of head of the German film studio UFA. Lang said it was during that meeting that he decided to leave for Paris – but that the banks had closed by the time the meeting was over. Lang claimed that, after selling his wife's jewelry, he fled by train to Paris that very evening, leaving most of his money and personal possessions behind. However, his passport of the time showed that he traveled to and from Germany a few times during 1933.
Lang finally left Berlin on 31 July 1933, four months after his meeting with Goebbels and his supposed dramatic escape. He moved to Paris, and divorced Thea von Harbou, who stayed behind, late in 1933.
In Paris, Lang filmed a version of Ferenc Molnár's "Liliom", starring Charles Boyer. That was Lang's only film in French (not including the French version of "Testament"). He then moved to the United States.
In Hollywood, Lang signed first with MGM Studios. His first American film was the crime drama "Fury" (1936), which starred Spencer Tracy as a man who is wrongly accused of a crime and nearly killed when a lynch mob sets fire to the jail where he is awaiting trial. From the beginning, Lang struggled with restrictions in the United States; in "Fury", for example, he was not allowed to represent black victims in a lynching scenario or to criticize racism. Because of his anti-Nazi films and his acquaintance with Bertolt Brecht and Hanns Eisler, he later attracted the attention of the anti-Communist campaign led by Senator Joseph McCarthy.
Lang became a naturalized citizen of the United States in 1939. He made twenty-three features in his 20-year American career, working in a variety of genres at every major studio in Hollywood, and occasionally producing his films as an independent. Lang's American films were often compared unfavorably to his earlier works by contemporary critics, but the restrained Expressionism of these films is now seen as integral to the emergence and evolution of American genre cinema, "film noir" in particular. Lang's 1945 film "Scarlet Street" is considered a central film in the genre.
One of Lang's most famous "films noir" is the police drama "The Big Heat" (1953), noted for its uncompromising brutality, especially for a scene in which Lee Marvin throws scalding coffee on Gloria Grahame's face. As Lang's visual style simplified, in part due to the constraints of the Hollywood studio system, his worldview became increasingly pessimistic, culminating in the cold, geometric style of his last American films, "While the City Sleeps" (1956) and "Beyond a Reasonable Doubt" (1956).
Finding it difficult to find congenial production conditions and backers in Hollywood, particularly as his health declined with age, Lang contemplated retirement. The German producer Artur Brauner had expressed interest in remaking "The Indian Tomb" (based on an original story by Thea von Harbou that Lang had developed in the 1920s, and which had ultimately been directed by Joe May), so Lang returned to Germany to make his "Indian Epic" (consisting of "The Tiger of Eschnapur" and "The Indian Tomb"). Following the production, Brauner was preparing a remake of "The Testament of Dr. Mabuse" when Lang approached him with the idea of adding a new original film to the series. The result was "The Thousand Eyes of Dr. Mabuse" (1960), whose success led to a series of new Mabuse films produced by Brauner (including the remake of "The Testament of Dr. Mabuse"), though Lang did not direct any of the sequels. "The Thousand Eyes of Dr. Mabuse" can be viewed as the marriage of the director's early experiences with expressionist techniques in Germany with the spartan style already visible in his late American work. Lang was approaching blindness during the production, and it was his final project as director. In 1963, he appeared as himself in Jean-Luc Godard's film "Contempt".
On February 8, 1960, Lang received a star on the Hollywood Walk of Fame for his contributions to the motion picture industry, located at 1600 Vine Street.
Lang died from a stroke in 1976 and was interred in the Forest Lawn – Hollywood Hills Cemetery in the Hollywood Hills of Los Angeles.
While his career had ended without fanfare, Lang's American and later German works were championed by the critics of the "Cahiers du cinéma", such as François Truffaut and Jacques Rivette. Truffaut wrote that Lang, especially in his American career, was greatly underappreciated by "cinema historians and critics" who "deny him any genius when he 'signs' spy movies ... war movies ... or simple thrillers." Filmmakers that were influenced by his work include Jacques Rivette and William Friedkin.
The Academy Film Archive has preserved a number of Lang's films, including "Human Desire" and "Man Hunt". | https://en.wikipedia.org/wiki?curid=11631 |
Food and Drug Administration
The Food and Drug Administration (FDA or USFDA) is a federal agency of the United States Department of Health and Human Services, one of the United States federal executive departments. The FDA is responsible for protecting and promoting public health through the control and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter pharmaceutical drugs (medications), vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices (ERED), cosmetics, animal foods & feed and veterinary products.
The FDA was empowered by the United States Congress to enforce the Federal Food, Drug, and Cosmetic Act, which serves as the primary focus for the Agency; the FDA also enforces other laws, notably Section 361 of the Public Health Service Act and associated regulations, many of which are not directly related to food or drugs. These include regulating lasers, cellular phones, condoms and control of disease on products ranging from certain household pets to sperm donation for assisted reproduction.
The FDA is led by the Commissioner of Food and Drugs, appointed by the President with the advice and consent of the Senate. The Commissioner reports to the Secretary of Health and Human Services. Stephen M. Hahn, MD, is the acting commissioner.
The FDA has its headquarters in unincorporated White Oak, Maryland. The agency also has 223 field offices and 13 laboratories located throughout the 50 states, the United States Virgin Islands, and Puerto Rico. In 2008, the FDA began to post employees to foreign countries, including China, India, Costa Rica, Chile, Belgium, and the United Kingdom.
In recent years, the agency began undertaking a large-scale effort to consolidate its 25 operations in the Washington metropolitan area, moving from its main headquarters in Rockville and several fragmented office buildings to the former site of the Naval Ordnance Laboratory in the White Oak area of Silver Spring, Maryland. The site was renamed from the White Oak Naval Surface Warfare Center to the Federal Research Center at White Oak. The first building, the Life Sciences Laboratory, was dedicated and opened with 104 employees on the campus in December 2003. Only one original building from the naval facility was kept. All other buildings are new construction. The project is slated to be completed by 2021, assuming future Congressional funding.
While most of the Centers are located in Montgomery and Prince George's counties in the Washington, D.C. area as part of the Headquarters divisions, two offices – the Office of Regulatory Affairs (ORA) and the Office of Criminal Investigations (OCI) – are primarily field offices with a workforce spread across the country. There are also a number of field locations across the United States, in addition to international locations in China, India, Europe, the Middle East, and Latin America.
White Oak Campus is known as the “Federal Research Center” of the FDA. In total, the campus is 710 acres and separated by eight stream courses. This campus houses the Office of the Commissioner (OC), the Office of Regulatory Affairs (ORA), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), the Center for Biologics Evaluation and Research (CBER) and offices for the Center for Veterinary Medicine (CVM).
The Office of Regulatory Affairs is considered the "eyes and ears" of the agency, conducting the vast majority of the FDA's work in the field. Consumer Safety Officers, more commonly called Investigators, are the individuals who inspect production and warehousing facilities, investigate complaints, illnesses, or outbreaks, and review documentation in the case of medical devices, drugs, biological products, and other items where it may be difficult to conduct a physical examination or take a physical sample of the product.
The Office of Regulatory Affairs is divided into five regions, which are further divided into 20 districts. Districts are based roughly on the geographic divisions of the federal court system. Each district comprises a main district office and a number of Resident Posts, which are FDA remote offices that serve a particular geographic area. ORA also includes the Agency's network of regulatory laboratories, which analyze any physical samples taken. Though samples are usually food-related, some laboratories are equipped to analyze drugs, cosmetics, and radiation-emitting devices.
The Office of Criminal Investigations was established in 1991 to investigate criminal cases. Unlike ORA Investigators, OCI Special Agents are armed and do not focus on technical aspects of the regulated industries. OCI agents pursue and develop cases where individuals and companies have committed criminal actions, such as fraudulent claims, or knowingly and willfully shipping known adulterated goods in interstate commerce. In many cases, OCI pursues cases involving Title 18 violations (e.g., conspiracy, false statements, wire fraud, mail fraud), in addition to prohibited acts as defined in Chapter III of the FD&C Act. OCI Special Agents often come from other criminal investigative backgrounds, and work closely with the Federal Bureau of Investigation, Assistant Attorneys General, and even Interpol. OCI receives cases from a variety of sources—including ORA, local agencies, and the FBI—and works with ORA Investigators to help develop the technical and science-based aspects of a case. OCI is a smaller branch, comprising about 200 agents nationwide.
The FDA frequently works with other federal agencies, including the Department of Agriculture, Drug Enforcement Administration, Customs and Border Protection, and Consumer Product Safety Commission. Often local and state government agencies also work with the FDA to provide regulatory inspections and enforcement action.
The FDA regulates more than $1 trillion worth of consumer goods, about 25% of consumer expenditures in the United States. This includes $466 billion in food sales, $275 billion in drugs, $60 billion in cosmetics and $18 billion in vitamin supplements. Much of this expenditure is for goods imported into the United States; the FDA is responsible for monitoring imports.
The FDA's federal budget request for fiscal year (FY) 2012 totaled $4.36 billion, while the proposed 2014 budget is $4.7 billion. About $2 billion of this budget is generated by user fees. Pharmaceutical firms pay the majority of these fees, which are used to expedite drug reviews. The FDA's federal budget request for fiscal year (FY) 2008 (October 2007 through September 2008) totaled $2.1 billion, a $105.8 million increase from what it received for fiscal year 2007.
In February 2008, the FDA announced that the Bush Administration's FY 2009 budget request for the agency was just under $2.4 billion: $1.77 billion in budget authority (federal funding) and $628 million in user fees. The requested budget authority was an increase of $50.7 million over the FY 2008 funding – about a three percent increase. In June 2008, Congress gave the agency an emergency appropriation of $150 million for FY 2008 and another $150 million for FY 2009.
Most federal laws concerning the FDA are part of the Food, Drug and Cosmetic Act, (first passed in 1938 and extensively amended since) and are codified in Title 21, Chapter 9 of the United States Code. Other significant laws enforced by the FDA include the Public Health Service Act, parts of the Controlled Substances Act, the Federal Anti-Tampering Act, as well as many others. In many cases, these responsibilities are shared with other federal agencies.
As of 2015, the agency regulates more than $1 trillion in consumer products, including:
The programs for safety regulation vary widely by the type of product, its potential risks, and the regulatory powers granted to the agency. For example, the FDA regulates almost every facet of prescription drugs, including testing, manufacturing, labeling, advertising, marketing, efficacy, and safety—yet FDA regulation of cosmetics focuses primarily on labeling and safety. The FDA regulates most products with a set of published standards enforced by a modest number of facility inspections. Inspection observations are documented on Form 483.
In June 2018, the FDA released a statement regarding new guidelines to help food and drug manufacturers "implement protections against potential attacks on the U.S. food supply". One of the new guidelines includes the Intentional Adulteration (IA) rule, which requires strategies and procedures by the food industry to reduce the risk of compromise in facilities and processes that are significantly vulnerable.
The FDA also uses tactics of regulatory shaming, mainly through online publication of non-compliance, warning letters, and "shaming lists." Regulation by shaming harnesses firms' sensitivity to reputational damage. For example, in 2018, the agency published an online "black list," in which it named dozens of branded drug companies that are supposedly using unlawful or unethical means to attempt to impede competition from generic drug companies.
On February 4, 2011, Prime Minister of Canada Stephen Harper and United States President Barack Obama issued a "Declaration on a Shared Vision for Perimeter Security and Economic Competitiveness" and announced the creation of the Canada-United States Regulatory Cooperation Council (RCC) "to increase regulatory transparency and coordination between the two countries".
Health Canada and the U.S. Food and Drug Administration (FDA) under the RCC mandate, undertook the "first of its kind" initiative by selecting "as its first area of alignment common cold indications for certain over-the-counter antihistamine ingredients (GC 2013-01-10)."
The regulation of food and dietary supplements by the Food and Drug Administration is governed by various statutes enacted by the United States Congress and interpreted by the FDA. Pursuant to the Federal Food, Drug, and Cosmetic Act ("the Act") and accompanying legislation, the FDA has authority to oversee the quality of substances sold as food in the United States, and to monitor claims made in the labeling about both the composition and the health benefits of foods.
The FDA subdivides substances that it regulates as food into various categories—including foods, food additives, added substances (man-made substances that are not intentionally introduced into food, but nevertheless end up in it), and dietary supplements. Dietary supplements or dietary ingredients include vitamins, minerals, herbs, amino acids, and enzymes. The specific standards the FDA applies differ from one category to the next. Furthermore, legislation has granted the FDA a variety of means to address violations of the standards for a given substance category.
Under the Dietary Supplement Health and Education Act of 1994 (DSHEA), the FDA is responsible for making sure that manufacturers and distributors of dietary supplements and dietary ingredients meet the current requirements. These manufacturers and distributors are not allowed to advertise their products in an adulterated way, and they are responsible for evaluating the safety and labeling of their product.
The FDA maintains a "Dietary Supplement Ingredient Advisory List" of ingredients that sometimes appear in dietary supplements but need further evaluation. An ingredient is added to this list when it is excluded from use in a dietary supplement, does not appear to be an approved food additive or generally recognized as safe, and/or is subject to the requirement for pre-market notification without having satisfied that requirement.
The FDA does not approve applied coatings used in the food processing industry. There is no review process to approve the composition of nonstick coatings, nor does the FDA inspect or test these materials. Through its governing of processes, however, the FDA does have a set of regulations that cover the formulation, manufacturing, and use of nonstick coatings. Hence, materials like polytetrafluoroethylene (Teflon) are not, and cannot be, considered FDA Approved; rather, they are "FDA Compliant" or "FDA Acceptable".
Medical countermeasures (MCMs) are products such as biologics and pharmaceutical drugs that can protect from or treat the effects of a chemical, biological, radiological, or nuclear (CBRN) attack. MCMs can also be used for prevention and diagnosis of symptoms associated with CBRN attacks or threats. The FDA runs a program called the "FDA Medical Countermeasures Initiative" (MCMi), with programs funded by the federal government. It helps support "partner" agencies and organisations prepare for public health emergencies that could require MCMs.
The Center for Drug Evaluation and Research uses different requirements for the three main drug product types: new drugs, generic drugs, and over-the-counter drugs. A drug is considered "new" if it is made by a different manufacturer, uses different excipients or inactive ingredients, is used for a different purpose, or undergoes any substantial change. The most rigorous requirements apply to "new molecular entities": drugs that are not based on existing medications.
New drugs receive extensive scrutiny before FDA approval in a process called a new drug application (NDA). Critics, however, argue that the FDA standards are not sufficiently rigorous, allowing unsafe or ineffective drugs to be approved. New drugs are available only by prescription by default. A change to over-the-counter (OTC) status is a separate process, and the drug must be approved through an NDA first. A drug that is approved is said to be "safe and effective when used as directed".
Some very rare limited exceptions to this multi-step process involving animal testing and controlled clinical trials can be granted out of compassionate use protocols, as was the case during the 2015 Ebola epidemic with the use, by prescription and authorization, of ZMapp and other experimental treatments, and for new drugs that can be used to treat debilitating and/or very rare conditions for which no existing remedies or drugs are satisfactory, or where there has not been an advance in a long period of time. The studies are progressively longer, gradually adding more individuals as they progress from stage I to stage III, normally over a period of years, and normally involve drug companies, the government and its laboratories, and often medical schools and hospitals and clinics. However, any exceptions to the aforementioned process are subject to strict review and scrutiny and conditions, and are only given if a substantial amount of research and at least some preliminary human testing has shown that they are believed to be somewhat safe and possibly effective.
The FDA's Office of Prescription Drug Promotion reviews and regulates prescription drug advertising and promotion through surveillance activities and issuance of enforcement letters to pharmaceutical manufacturers. Advertising and promotion for over-the-counter drugs is regulated by the Federal Trade Commission.
The drug advertising regulation contains two broad requirements: (1) a company may advertise or promote a drug only for the specific indication or medical use for which it was approved by the FDA; and (2) an advertisement must contain a "fair balance" between the benefits and the risks (side effects) of a drug.
The term off-label refers to drug usage for indications other than those approved by the FDA.
After NDA approval, the sponsor must review and report to the FDA every patient adverse drug experience it learns of. They must report unexpected serious and fatal adverse drug events within 15 days, and other events on a quarterly basis.
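The reporting rule described above can be illustrated with a small sketch. This is a hypothetical example, not an FDA data format or system: the function name, fields, and message strings are assumptions made purely to show the 15-day expedited versus quarterly split.

```python
from datetime import date, timedelta

# Illustrative sketch of the post-approval reporting rule: unexpected serious
# or fatal adverse drug events must be reported within 15 days; all other
# events go into the next quarterly periodic report. Names are hypothetical.

def report_deadline(event_date: date, serious: bool, unexpected: bool) -> str:
    """Return a human-readable reporting deadline for an adverse event."""
    if serious and unexpected:
        # Expedited ("15-day") report
        due = event_date + timedelta(days=15)
        return f"expedited report due by {due.isoformat()}"
    # Routine events are batched into the next quarterly report
    return "include in next quarterly periodic report"

print(report_deadline(date(2024, 1, 10), serious=True, unexpected=True))
# expedited report due by 2024-01-25
```

The point of the sketch is only that the event's classification, not its severity alone, determines which of the two reporting tracks applies.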
The FDA also receives directly adverse drug event reports through its MedWatch program. These reports are called "spontaneous reports" because reporting by consumers and health professionals is voluntary.
While this remains the primary tool of postmarket safety surveillance, FDA requirements for postmarketing risk management are increasing.
As a condition of approval, a sponsor may be required to conduct additional clinical trials, called Phase IV trials.
In some cases, the FDA requires risk management plans ("Risk Evaluation and Mitigation Strategies" or "REMS") for some drugs that require actions to be taken to ensure that the drug is used safely. For example, thalidomide can cause birth defects but has uses that outweigh the risks if men and women taking the drugs do not conceive a child; a REMS program for thalidomide mandates an auditable process to ensure that people taking the drug take action to avoid pregnancy; many opioid drugs have REMS programs to avoid addiction and diversion of drugs. There is also a REMS program called iPLEDGE for the drug, isotretinoin.
Generic drugs are chemical and therapeutic equivalents of name-brand drugs whose patents have expired. Approved generic drugs should have the same dosage, safety, effectiveness, strength, stability, and quality, as well as the same route of administration. In general, they are less expensive than their name-brand counterparts, are manufactured and marketed by other companies and, in the 1990s, accounted for about a third of all prescriptions written in the United States.
For approval of a generic drug, the U.S. Food and Drug Administration (FDA) requires scientific evidence that the generic drug is interchangeable with or therapeutically equivalent to the originally approved drug. This is called an "ANDA" (Abbreviated New Drug Application). As of 2012, 80% of all FDA approved drugs are available in generic form.
In 1989, a major scandal erupted involving the procedures used by the FDA to approve generic drugs for sale to the public. Charges of corruption in generic drug approval first emerged in 1988, in the course of an extensive congressional investigation into the FDA. The investigation by the oversight subcommittee of the United States House Energy and Commerce Committee resulted from a complaint brought against the FDA by Mylan Laboratories Inc. of Pittsburgh. When its applications to manufacture generics were subjected to repeated delays by the FDA, Mylan, convinced that it was being discriminated against, began its own private investigation of the agency in 1987. Mylan eventually filed suit against two former FDA employees and four drug-manufacturing companies, charging that corruption within the federal agency resulted in racketeering and in violations of antitrust law. "The order in which new generic drugs were approved was set by the FDA employees even before drug manufacturers submitted applications" and, according to Mylan, this illegal procedure was followed to give preferential treatment to certain companies. During the summer of 1989, three FDA officials (Charles Y. Chang, David J. Brancato, Walter Kletch) pleaded guilty to criminal charges of accepting bribes from generic drug makers, and two companies (Par Pharmaceutical and its subsidiary Quad Pharmaceuticals) pleaded guilty to giving bribes.
Furthermore, it was discovered that several manufacturers had falsified data submitted in seeking FDA authorization to market certain generic drugs. Vitarine Pharmaceuticals of New York, which sought approval of a generic version of the drug Dyazide, a medication for high blood pressure, submitted Dyazide, rather than its generic version, for the FDA tests. In April 1989, the FDA investigated 11 manufacturers for irregularities; and later brought that number up to 13. Dozens of drugs were eventually suspended or recalled by manufacturers. In the early 1990s, the U.S. Securities and Exchange Commission filed securities fraud charges against the Bolar Pharmaceutical Company, a major generic manufacturer based in Long Island, New York.
Over-the-counter (OTC) drugs like aspirin are drugs and combinations that do not require a doctor's prescription.
The FDA has a list of approximately 800 approved ingredients that are combined in various ways to create more than 100,000 OTC drug products.
Many OTC drug ingredients, such as ibuprofen, were previously approved prescription drugs now deemed safe enough for use without a medical practitioner's supervision.
In 2014, the FDA added an Ebola treatment being developed by Canadian pharmaceutical company Tekmira to the Fast Track program, but halted the phase 1 trials in July pending the receipt of more information about how the drug works. This was seen as increasingly important in the face of a major outbreak of the disease in West Africa, which began in late March 2014 and was still ongoing.
During the coronavirus pandemic, FDA granted Emergency Use Authorization for personal protective equipment (PPE), in vitro diagnostic equipment, ventilators and other medical devices.
On March 18, 2020, FDA inspectors postponed most foreign facility inspections and all domestic routine surveillance facility inspections. By contrast, the USDA's Food Safety and Inspection Service (FSIS) continued inspections of meatpacking plants, which resulted in 145 FSIS field employees testing positive for COVID-19 and three dying.
The Center for Biologics Evaluation and Research is the branch of the FDA responsible for ensuring the safety and efficacy of biological therapeutic agents. These include blood and blood products, vaccines, allergenics, cell and tissue-based products, and gene therapy products. New biologics are required to go through a premarket approval process called a Biologics License Application (BLA), similar to that for drugs.
The original authority for government regulation of biological products was established by the 1902 Biologics Control Act, with additional authority established by the 1944 Public Health Service Act. Along with these Acts, the Federal Food, Drug, and Cosmetic Act applies to all biologic products, as well. Originally, the entity responsible for regulation of biological products resided under the National Institutes of Health; this authority was transferred to the FDA in 1972.
The Center for Devices and Radiological Health (CDRH) is the branch of the FDA responsible for the premarket approval of all medical devices, as well as overseeing the manufacturing, performance and safety of these devices. The definition of a medical device is given in the FD&C Act, and it includes products from the simple toothbrush to complex devices such as implantable neurostimulators. CDRH also oversees the safety performance of non-medical devices that emit certain types of electromagnetic radiation. Examples of CDRH-regulated devices include cellular phones, airport baggage screening equipment, television receivers, microwave ovens, tanning booths, and laser products.
CDRH regulatory powers include the authority to require certain technical reports from the manufacturers or importers of regulated products, to require that radiation-emitting products meet mandatory safety performance standards, to declare regulated products defective, and to order the recall of defective or noncompliant products. CDRH also conducts limited amounts of direct product testing.
Clearance requests are for medical devices whose sponsors demonstrate that they are "substantially equivalent" to predicate devices already on the market. Approval requests are for devices that are new or substantially different, whose sponsors must demonstrate "safety and efficacy"; for example, such a device may be inspected for safety in the case of new toxic hazards. In both pathways, the submitter must supply the required evidence and show that proper procedures were followed.
Cosmetics are regulated by the Center for Food Safety and Applied Nutrition, the same branch of the FDA that regulates food. Cosmetic products are not, in general, subject to premarket approval by the FDA unless they make "structure or function claims" that make them into drugs (see Cosmeceutical). However, all color additives must be specifically FDA approved before manufacturers can include them in cosmetic products sold in the U.S. The FDA regulates cosmetics labeling, and cosmetics that have not been safety tested must bear a warning to that effect.
According to the industry advocacy group the American Council on Science and Health (ACSH), the cosmetic industry is predominantly responsible for ensuring the safety of its products; the FDA has the power to intervene when necessary to protect the public, but in general does not require pre-market approval or testing. The ACSH says that companies are required to place a warning note on their products if they have not been tested, and that experts in cosmetic ingredient reviews also play a role in monitoring safety through their influence on the use of ingredients, though they too lack legal authority. According to the ACSH, these expert reviewers have overall reviewed about 1,200 ingredients and have suggested that several hundred be restricted, but there is no standard or systematic method for reviewing chemicals for safety, nor a clear definition of what is meant by 'safety', so that all chemicals could be tested on the same basis.
The Center for Veterinary Medicine (CVM) is the center of the FDA that regulates food additives and drugs given to animals. CVM regulates animal drugs, animal food (including pet food), and animal medical devices. The FDA's requirements to prevent the spread of bovine spongiform encephalopathy are also administered by CVM through inspections of feed manufacturers. CVM does not regulate vaccines for animals; these are handled by the United States Department of Agriculture.
Since the Family Smoking Prevention and Tobacco Control Act became law in 2009, the FDA also has had the authority to regulate tobacco products.
In 2009, Congress passed a law requiring color warnings on cigarette packages and on printed advertising, in addition to text warnings from the U.S. Surgeon General.
The nine new graphic warning labels were announced by the FDA in June 2011 and were scheduled to be required to appear on packaging by September 2012. The implementation date is uncertain, due to ongoing proceedings in the case of R.J. Reynolds Tobacco Co. v. U.S. Food and Drug Administration. R.J. Reynolds, Lorillard, Commonwealth Brands Inc., Liggett Group LLC and Santa Fe Natural Tobacco Company Inc. have filed suit in Washington, D.C. federal court claiming that the graphic labels are an unconstitutional way of forcing tobacco companies to engage in anti-smoking advocacy on the government's behalf.
A First Amendment lawyer, Floyd Abrams, is representing the tobacco companies in the case, contending that requiring graphic warning labels on a lawful product cannot withstand constitutional scrutiny. The Association of National Advertisers and the American Advertising Federation have also filed a brief in the suit, arguing that the labels infringe on commercial free speech and could lead to further government intrusion if left unchallenged. In November 2011, Federal judge Richard Leon of the U.S. District Court for the District of Columbia temporarily halted the new labels, likely delaying the requirement that tobacco companies display the labels. The U.S. Supreme Court ultimately could decide the matter.
In July 2017, the FDA announced a plan that would reduce the current levels of nicotine permitted in tobacco cigarettes.
With acceptance of premarket notification 510(k) k033391 in January 2004, the FDA granted Dr. Ronald Sherman permission to produce and market medical maggots for use in humans or other animals as a prescription medical device.
Medical maggots represent the first living organism allowed by the Food and Drug Administration for production and marketing as a prescription medical device.
In June 2004, the FDA cleared "Hirudo medicinalis" (medicinal leeches) as the second living organism to be used as a medical device.
The FDA also requires milk to be pasteurized to kill harmful bacteria.
The FDA cooperated with international regulatory and law enforcement agencies through Interpol as part of Operation Pangea XI from October 9 to October 16, 2018. The FDA targeted 465 websites that illegally sold potentially dangerous, unapproved versions of opioid, oncology and antiviral prescription drugs to U.S. consumers. The FDA also targeted transaction laundering schemes in order to uncover complex online drug networks.
In addition to its regulatory functions, the FDA carries out research and development activities to develop technology and standards that support its regulatory role, with the objective of resolving scientific and technical challenges before they become impediments. The FDA's research efforts include the areas of biologics, medical devices, drugs, women's health, toxicology, food safety and applied nutrition, and veterinary medicine.
The FDA has collected a large amount of data over the decades. In March 2013, openFDA was created to give the public easy access to that data.
Up until the 20th century, there were few federal laws regulating the contents and sale of domestically produced food and pharmaceuticals, with one exception being the short-lived Vaccine Act of 1813. The history of the FDA can be traced to the latter part of the 19th century and the U.S. Department of Agriculture's Division of Chemistry, later its Bureau of Chemistry. Under Harvey Washington Wiley, appointed chief chemist in 1883, the Division began conducting research into the adulteration and misbranding of food and drugs on the American market. Wiley's advocacy came at a time when the public had become aroused to hazards in the marketplace by muckraking journalists like Upton Sinclair, and became part of a general trend for increased federal regulations in matters pertinent to public safety during the Progressive Era. The 1902 Biologics Control Act was put in place after a diphtheria antitoxin—derived from tetanus-contaminated serum—was used to produce a vaccine that caused the deaths of thirteen children in St. Louis, Missouri. The serum was originally collected from a horse named Jim, who had contracted tetanus.
In June 1906, President Theodore Roosevelt signed into law the Pure Food and Drug Act, also known as the "Wiley Act" after its chief advocate. The Act prohibited, under penalty of seizure of goods, the interstate transport of food that had been "adulterated". The act applied similar penalties to the interstate marketing of "adulterated" drugs, in which the "standard of strength, quality, or purity" of the active ingredient was not either stated clearly on the label or listed in the "United States Pharmacopeia" or the "National Formulary".
The responsibility for examining food and drugs for such "adulteration" or "misbranding" was given to Wiley's USDA Bureau of Chemistry. Wiley used these new regulatory powers to pursue an aggressive campaign against the manufacturers of foods with chemical additives, but the Chemistry Bureau's authority was soon checked by judicial decisions, which narrowly defined the bureau's powers and set high standards for proof of fraudulent intent. In 1927, the Bureau of Chemistry's regulatory powers were reorganized under a new USDA body, the Food, Drug, and Insecticide organization. This name was shortened to the Food and Drug Administration (FDA) three years later.
By the 1930s, muckraking journalists, consumer protection organizations, and federal regulators began mounting a campaign for stronger regulatory authority by publicizing a list of injurious products that had been ruled permissible under the 1906 law, including radioactive beverages, the mascara Lash lure, which caused blindness, and worthless "cures" for diabetes and tuberculosis. The resulting proposed law was unable to get through the Congress of the United States for five years, but was rapidly enacted into law following the public outcry over the 1937 Elixir Sulfanilamide tragedy, in which over 100 people died after using a drug formulated with a toxic, untested solvent.
President Franklin Delano Roosevelt signed the new Food, Drug, and Cosmetic Act (FD&C Act) into law on June 24, 1938. The new law significantly increased federal regulatory authority over drugs by mandating a pre-market review of the safety of all new drugs, as well as banning false therapeutic claims in drug labeling without requiring that the FDA prove fraudulent intent. Soon after passage of the 1938 Act, the FDA began to designate certain drugs as safe for use only under the supervision of a medical professional, and the category of "prescription-only" drugs was securely codified into law by the 1951 Durham-Humphrey Amendment. These developments confirmed extensive powers for the FDA to enforce post-marketing recalls of ineffective drugs.
Outside of the US, the drug thalidomide was marketed for the relief of general nausea and morning sickness, but it caused birth defects and even the deaths of thousands of babies when taken during pregnancy. American mothers were largely unaffected because Dr. Frances Oldham Kelsey of the FDA refused to authorize the medication for market. In 1962, the Kefauver-Harris Amendment to the FD&C Act was passed, representing a "revolution" in FDA regulatory authority. The most important change was the requirement that all new drug applications demonstrate "substantial evidence" of the drug's efficacy for a marketed indication, in addition to the existing requirement for pre-marketing demonstration of safety. This marked the start of the FDA approval process in its modern form.
These reforms had the effect of increasing the time, and the difficulty, required to bring a drug to market. One of the most important statutes in establishing the modern American pharmaceutical market was the 1984 Drug Price Competition and Patent Term Restoration Act, more commonly known as the "Hatch-Waxman Act" after its chief sponsors. The act extended the patent exclusivity terms of new drugs, and tied those extensions, in part, to the length of the FDA approval process for each individual drug. For generic manufacturers, the Act created a new approval mechanism, the Abbreviated New Drug Application (ANDA), in which the generic drug manufacturer need only demonstrate that their generic formulation has the same active ingredient, route of administration, dosage form, strength, and pharmacokinetic properties ("bioequivalence") as the corresponding brand-name drug. This act has been credited with in essence creating the modern generic drug industry.
Concerns about the length of the drug approval process were brought to the fore early in the AIDS epidemic. In the mid- and late 1980s, ACT-UP and other HIV activist organizations accused the FDA of unnecessarily delaying the approval of medications to fight HIV and opportunistic infections. Partly in response to these criticisms, the FDA issued new rules to expedite approval of drugs for life-threatening diseases, and expanded pre-approval access to drugs for patients with limited treatment options. All of the initial drugs approved for the treatment of HIV/AIDS were approved through these accelerated approval mechanisms. Frank Young, then commissioner of the FDA, was behind the Action Plan Phase II, established in August 1987 for quicker approval of AIDS medication.
In two instances, state governments have sought to legalize drugs that the FDA has not approved. Under the theory that federal law passed pursuant to Constitutional authority overrules conflicting state laws, federal authorities still claim the authority to seize, arrest, and prosecute for possession and sales of these substances, even in states where they are legal under state law. The first wave was the legalization by 27 states of laetrile in the late 1970s. This drug was used as a treatment for cancer, but scientific studies both before and after this legislative trend found it to be ineffective. The second wave concerned medical marijuana in the 1990s and 2000s. Though Virginia passed a law with limited effect in 1979, a more widespread trend began in California in 1996.
When the FDA requested Endo Pharmaceuticals on June 8, 2017, to remove "oxymorphone hydrochloride" from the market, it was the first such request in FDA history.
The Critical Path Initiative is FDA's effort to stimulate and facilitate a national effort to modernize the sciences through which FDA-regulated products are developed, evaluated, and manufactured. The Initiative was launched in March 2004, with the release of a report entitled Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products.
The Compassionate Investigational New Drug program was created after "Randall v. U.S." ruled in favor of Robert C. Randall in 1978, creating a program for medical marijuana.
A 2006 court case, "Abigail Alliance v. von Eschenbach", would have forced radical changes in FDA regulation of unapproved drugs. The Abigail Alliance argued that the FDA must license drugs for use by terminally ill patients with "desperate diagnoses," after they have completed Phase I testing. The case won an initial appeal in May 2006, but that decision was reversed by a March 2007 rehearing. The US Supreme Court declined to hear the case, and the final decision denied the existence of a right to unapproved medications.
Critics of the FDA's regulatory power argue that the FDA takes too long to approve drugs that might ease pain and human suffering faster if brought to market sooner. The AIDS crisis created some political efforts to streamline the approval process. However, these limited reforms were targeted for AIDS drugs, not for the broader market. This has led to the call for more robust and enduring reforms that would allow patients, under the care of their doctors, access to drugs that have passed the first round of clinical trials.
The widely publicized recall of Vioxx, a non-steroidal anti-inflammatory drug now estimated to have contributed to fatal heart attacks in thousands of Americans, played a strong role in driving a new wave of safety reforms at both the FDA rulemaking and statutory levels. Vioxx was approved by the FDA in 1999 and was initially hoped to be safer than previous NSAIDs, due to its reduced risk of intestinal tract bleeding. However, a number of pre- and post-marketing studies suggested that Vioxx might increase the risk of myocardial infarction, and this was conclusively demonstrated by results from the APPROVe trial in 2004.
Faced with numerous lawsuits, the manufacturer voluntarily withdrew it from the market. The example of Vioxx has been prominent in an ongoing debate over whether new drugs should be evaluated on the basis of their absolute safety, or their safety relative to existing treatments for a given condition. In the wake of the Vioxx recall, there were widespread calls by major newspapers, medical journals, consumer advocacy organizations, lawmakers, and FDA officials for reforms in the FDA's procedures for pre- and post-market drug safety regulation.
In 2006, a congressionally requested committee was appointed by the Institute of Medicine to review pharmaceutical safety regulation in the U.S. and to issue recommendations for improvements. The committee was composed of 16 experts, including leaders in clinical medicine, medical research, economics, biostatistics, law, public policy, public health, and the allied health professions, as well as current and former executives from the pharmaceutical, hospital, and health insurance industries. The authors found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA. Some of the committee's recommendations have been incorporated into drafts of the PDUFA IV bill, which was signed into law in 2007.
As of 2011, Risk Minimization Action Plans (RiskMAPs) have been created to ensure that the risks of a drug never outweigh its benefits within the postmarketing period. This program requires that manufacturers design and implement periodic assessments of their programs' effectiveness. The Risk Minimization Action Plans are set in place depending on the overall level of risk a prescription drug is likely to pose to the public.
Prior to the 1990s, only 20% of all drugs prescribed for children in the United States were tested for safety or efficacy in a pediatric population. This became a major concern of pediatricians as evidence accumulated that the physiological response of children to many drugs differed significantly from those drugs' effects on adults. Children respond differently to drugs for many reasons, including size and weight. There were several reasons why few medical trials were done with children. For many drugs, children represented so small a proportion of the potential market that drug manufacturers did not see such testing as cost-effective.
Also, because children were thought to be ethically restricted in their ability to give informed consent, there were increased governmental and institutional hurdles to approval of these clinical trials, as well as greater concerns about legal liability. Thus, for decades, most medicines prescribed to children in the U.S. were done so in a non-FDA-approved, "off-label" manner, with dosages "extrapolated" from adult data through body weight and body-surface-area calculations.
An initial attempt by the FDA to address this issue was the 1994 FDA Final Rule on Pediatric Labeling and Extrapolation, which allowed manufacturers to add pediatric labeling information, but required drugs that had not been tested for pediatric safety and efficacy to bear a disclaimer to that effect. However, this rule failed to motivate many drug companies to conduct additional pediatric drug trials. In 1997, the FDA proposed a rule to require pediatric drug trials from the sponsors of New Drug Applications. However, this new rule was struck down in federal court as exceeding the FDA's statutory authority.
While this debate was unfolding, Congress used the 1997 Food and Drug Administration Modernization Act to pass incentives that gave pharmaceutical manufacturers a six-month patent term extension on new drugs submitted with pediatric trial data. The act reauthorizing these provisions, the 2002 Best Pharmaceuticals for Children Act, allowed the FDA to request NIH-sponsored testing for pediatric drug testing, although these requests are subject to NIH funding constraints. In the Pediatric Research Equity Act of 2003, Congress codified the FDA's authority to mandate manufacturer-sponsored pediatric drug trials for certain drugs as a "last resort" if incentives and publicly funded mechanisms proved inadequate.
The priority review voucher is a provision of the Food and Drug Administration Amendments Act (HR 3580), signed into law by President George W. Bush in September 2007, which awards a transferable "priority review voucher" to any company that obtains approval for a treatment for a neglected tropical disease. The system was first proposed by Duke University faculty David Ridley, Henry Grabowski, and Jeffrey Moe in their 2006 "Health Affairs" paper: "Developing Drugs for Developing Countries". In 2012, President Obama signed into law the FDA Safety and Innovation Act, which includes Section 908, the "Rare Pediatric Disease Priority Review Voucher Incentive Program".
Since the 1990s, many successful new drugs for the treatment of cancer, autoimmune diseases, and other conditions have been protein-based biotechnology drugs, regulated by the Center for Biologics Evaluation and Research. Many of these drugs are extremely expensive; for example, the anti-cancer drug Avastin costs $55,000 for a year of treatment, while the enzyme replacement therapy drug Cerezyme costs $200,000 per year, and must be taken by Gaucher's Disease patients for life.
Biotechnology drugs do not have the simple, readily verifiable chemical structures of conventional drugs, and are produced through complex, often proprietary techniques, such as transgenic mammalian cell cultures. Because of these complexities, the 1984 Hatch-Waxman Act did not include biologics in the Abbreviated New Drug Application (ANDA) process, in essence precluding the possibility of generic drug competition for biotechnology drugs. In February 2007, identical bills were introduced into the House to create an ANDA process for the approval of generic biologics, but were not passed.
In 2013, guidance was issued to regulate mobile medical applications and protect users from their unintended use. This guidance distinguishes the apps subject to regulation based on their marketing claims. Incorporating the guidelines during the development phase of such apps has been proposed to expedite market entry and clearance.
The FDA has regulatory oversight over a large array of products that affect the health and life of American citizens. As a result, the FDA's powers and decisions are carefully monitored by several governmental and non-governmental organizations. A $1.8 million 2006 Institute of Medicine report on pharmaceutical regulation in the U.S. found major deficiencies in the current FDA system for ensuring the safety of drugs on the American market. Overall, the authors called for an increase in the regulatory powers, funding, and independence of the FDA.
Nine FDA scientists appealed to then president-elect Barack Obama over pressures from management, experienced during the George W. Bush presidency, to manipulate data, including in relation to the review process for medical devices. Characterized as "corrupted and distorted by current FDA managers, thereby placing the American people at risk," these concerns were also highlighted in the 2006 report on the agency as well.
The FDA has also been criticized from the opposite viewpoint, as being too tough on industry. According to an analysis published on the website of the libertarian Mercatus Center as well as published statements by economists, medical practitioners, and concerned consumers, many feel the FDA oversteps its regulatory powers and undermines small business and small farms in favor of large corporations. Three of the FDA restrictions under analysis are the permitting of new drugs and devices, the control of manufacturer speech, and the imposition of prescription requirements. The authors argue that in the increasingly complex and diverse food marketplace, the FDA is not equipped to adequately regulate or inspect food. In addition, excessive regulation is blamed for the rising costs of health care and the creation of monopolies, as potential competitors are unable to get FDA approval to enter the market to compete and keep health care costs down.
However, in an indicator that the FDA may be too lax in their approval process, in particular for medical devices, a 2011 study by Dr. Diana Zuckerman and Paul Brown of the National Research Center for Women and Families, and Dr. Steven Nissen of the Cleveland Clinic, published in the Archives of Internal Medicine, showed that most medical devices recalled in the last five years for "serious health problems or death" had been previously approved by the FDA using the less stringent, and cheaper, 510(k) process. In a few cases, the devices had been deemed so low-risk that they did not need FDA regulation. Of the 113 devices recalled, 35 were for cardiovascular health purposes.
Field extension
In mathematics, particularly in algebra, a field extension is a pair of fields "E" ⊆ "F" such that the operations of "E" are those of "F" restricted to "E". In this case, "F" is an extension field of "E" and "E" is a subfield of "F". For example, under the usual notions of addition and multiplication, the complex numbers are an extension field of the real numbers; the real numbers are a subfield of the complex numbers.
Field extensions are fundamental in algebraic number theory, and in the study of polynomial roots through Galois theory, and are widely used in algebraic geometry.
A subfield of a field "L" is a subset "K" of "L" that is a field with respect to the field operations inherited from "L". Equivalently, a subfield is a subset that contains 1, and is closed under the operations of addition, subtraction, multiplication, and taking the inverse of a nonzero element of "L".
As 1 − 1 = 0, the latter definition implies "K" and "L" have the same zero element.
For example, the field of rational numbers is a subfield of the real numbers, which is itself a subfield of the complex numbers. More generally, the field of rational numbers is (or is isomorphic to) a subfield of any field of characteristic 0.
The characteristic of a subfield is the same as the characteristic of the larger field.
If "K" is a subfield of "L", then "L" is an extension field or simply extension of "K", and this pair of fields is a field extension. Such a field extension is denoted "L" / "K" (read as ""L" over "K"").
If "L" is an extension of "F", which is in turn an extension of "K", then "F" is said to be an intermediate field (or intermediate extension or subextension) of "L" / "K".
Given a field extension "L" / "K", the larger field "L" is a "K"-vector space. The dimension of this vector space is called the degree of the extension and is denoted by ["L" : "K"].
The degree of an extension is 1 if and only if the two fields are equal. In this case, the extension is a trivial extension. Extensions of degree 2 and 3 are called quadratic extensions and cubic extensions, respectively. A finite extension is an extension that has a finite degree.
Given two extensions "L" / "K" and "M" / "L", the extension "M" / "K" is finite if and only if both "M" / "L" and "L" / "K" are finite. In this case, one has ["M" : "K"] = ["M" : "L"] · ["L" : "K"].
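The multiplicativity of degrees in a tower (a standard fact, sketched here in the notation "K" ⊆ "L" ⊆ "M") comes from multiplying bases:

```latex
% If (x_i)_{i \in I} is an L-basis of M and (y_j)_{j \in J} is a K-basis of L,
% then the products (y_j x_i)_{i \in I,\, j \in J} form a K-basis of M, hence
[M : K] \;=\; [M : L] \cdot [L : K].
```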
Given a field extension "L" / "K" and a subset "S" of "L", there is a smallest subfield of "L" that contains "K" and "S". It is the intersection of all subfields of "L" that contain "K" and "S", and is denoted by "K"("S"). One says that "K"("S") is the field "generated" by "S" over "K", and that "S" is a generating set of "K"("S") over "K". When "S" is finite, one writes "K"("s"1, ..., "s""n") instead of "K"({"s"1, ..., "s""n"}) and one says that "K"("S") is finitely generated over "K". If "S" consists of a single element "s", the extension is called a simple extension and "s" is called a primitive element of the extension.
An extension field of the form "K"("S") is often said to result from the "adjunction" of "S" to "K".
In characteristic 0, every finite extension is a simple extension. This is the primitive element theorem, which does not hold true for fields of non-zero characteristic.
If a simple extension is not finite, the field "K"("s") is isomorphic to the field of rational fractions in "s" over "K".
The notation "L" / "K" is purely formal and does not imply the formation of a quotient ring or quotient group or any other kind of division. Instead the slash expresses the word "over". In some literature the notation "L":"K" is used.
It is often desirable to talk about field extensions in situations where the small field is not actually contained in the larger one, but is naturally embedded. For this purpose, one abstractly defines a field extension as an injective ring homomorphism between two fields.
"Every" non-zero ring homomorphism between fields is injective because fields do not possess nontrivial proper ideals, so field extensions are precisely the morphisms in the category of fields.
Henceforth, we will suppress the injective homomorphism and assume that we are dealing with actual subfields.
The field of complex numbers ℂ is an extension field of the field of real numbers ℝ, and ℝ in turn is an extension field of the field of rational numbers ℚ. Clearly then, ℂ / ℚ is also a field extension. We have [ℂ : ℝ] = 2 because {1, "i"} is a basis, so the extension ℂ / ℝ is finite. This is a simple extension because ℂ = ℝ("i"). We have [ℝ : ℚ] = 𝔠 (the cardinality of the continuum), so this extension is infinite.
The field ℚ(√2) = {"a" + "b"√2 : "a", "b" ∈ ℚ} is an extension field of ℚ, also clearly a simple extension. The degree is 2 because {1, √2} can serve as a basis.
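As an illustrative sketch (the class name QSqrt2 is invented for this example), arithmetic in this degree-2 extension can be modelled by storing each element's coordinates ("a", "b") in the basis {1, √2}:

```python
from fractions import Fraction

class QSqrt2:
    """An element a + b*sqrt(2) of Q(sqrt(2)), stored as its coordinates
    (a, b) in the Q-basis {1, sqrt(2)}."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def inverse(self):
        # 1/(a + b sqrt2) = (a - b sqrt2)/(a^2 - 2b^2); the denominator is
        # nonzero for nonzero elements because sqrt(2) is irrational.
        n = self.a * self.a - 2 * self.b * self.b
        return QSqrt2(self.a / n, -self.b / n)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

# (1 + sqrt2)^2 = 3 + 2*sqrt2, and every nonzero element is invertible
x = QSqrt2(1, 1)
assert x * x == QSqrt2(3, 2)
assert x * x.inverse() == QSqrt2(1)
```

Closure of this coordinate arithmetic under addition, multiplication and inversion is exactly what makes ℚ(√2) a field of degree 2 over ℚ.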
The field ℚ(√2, √3), obtained by adjoining √2 and √3 to ℚ, is an extension field of both ℚ(√2) and ℚ, of degree 2 and 4 respectively. It is also a simple extension, as one can show that ℚ(√2, √3) = ℚ(√2 + √3).
Finite extensions of ℚ are also called algebraic number fields and are important in number theory. Another extension field of the rationals, which is also important in number theory, although not a finite extension, is the field of "p"-adic numbers ℚ"p" for a prime number "p".
It is common to construct an extension field of a given field "K" as a quotient ring of the polynomial ring "K"["X"] in order to "create" a root for a given polynomial "f"("X"). Suppose for instance that "K" does not contain any element "x" with "x"2 = −1. Then the polynomial "X"2 + 1 is irreducible in "K"["X"]; consequently, the ideal generated by this polynomial is maximal, and "K"["X"]/("X"2 + 1) is an extension field of "K" which "does" contain an element whose square is −1 (namely the residue class of "X").
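This quotient construction can be sketched computationally (the helper name polymod_mul is invented for illustration): residues in "K"["X"]/("f") are coefficient lists of degree less than deg "f", multiplied and then reduced using "f"("X") = 0. Taking "K" = ℚ and "f" = "X"2 + 1, the residue class of "X" indeed squares to −1:

```python
from fractions import Fraction

def polymod_mul(p, q, f):
    """Multiply two residues p, q in Q[X]/(f), where polynomials are
    coefficient lists (lowest degree first) and f is monic."""
    deg = len(f) - 1
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    # reduce: since f is monic and f(X) = 0 in the quotient, rewrite
    # X^deg as minus the lower-degree part of f, from the top down
    for k in range(len(prod) - 1, deg - 1, -1):
        c = prod[k]
        if c:
            for j in range(deg):
                prod[k - deg + j] -= c * Fraction(f[j])
            prod[k] = Fraction(0)
    return prod[:deg]

f = [1, 0, 1]                            # f(X) = X^2 + 1
i = [0, 1]                               # residue class of X
assert polymod_mul(i, i, f) == [-1, 0]   # X^2 = -1 in Q[X]/(X^2 + 1)
```

The same routine works for any monic "f"; irreducibility of "f" is what guarantees the quotient is a field rather than just a ring.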
By iterating the above construction, one can construct a splitting field of any polynomial from "K"["X"]. This is an extension field "L" of "K" in which the given polynomial splits into a product of linear factors.
If "p" is any prime number and "n" is a positive integer, we have a finite field GF("p""n") with "p""n" elements; this is an extension field of the finite field GF("p") with "p" elements.
Given a field "K", we can consider the field "K"("X") of all rational functions in the variable "X" with coefficients in "K"; the elements of "K"("X") are fractions of two polynomials over "K", and indeed "K"("X") is the field of fractions of the polynomial ring "K"["X"]. This field of rational functions is an extension field of "K". This extension is infinite.
Given a Riemann surface "M", the set of all meromorphic functions defined on "M" is a field, denoted by ℳ("M"). It is a transcendental extension field of ℂ if we identify every complex number with the corresponding constant function defined on "M". More generally, given an algebraic variety "V" over some field "K", then the function field of "V", consisting of the rational functions defined on "V" and denoted by "K"("V"), is an extension field of "K".
An element "x" of a field extension "L" / "K" is algebraic over "K" if it is a root of a nonzero polynomial with coefficients in "K". For example, √2 is algebraic over the rational numbers, because it is a root of "x"2 − 2. If an element "x" of "L" is algebraic over "K", the monic polynomial of lowest degree that has "x" as a root is called the minimal polynomial of "x". This minimal polynomial is irreducible over "K".
An element "s" of "L" is algebraic over "K" if and only if the simple extension "K"("s")/"K" is a finite extension. In this case the degree of the extension equals the degree of the minimal polynomial, and a basis of the "K"-vector space "K"("s") consists of the powers 1, "s", "s"2, ..., "s""d"−1, where "d" is the degree of the minimal polynomial.
The set of the elements of "L" that are algebraic over "K" forms a subextension, which is called the algebraic closure of "K" in "L". This results from the preceding characterization: if "s" and "t" are algebraic, the extensions "K"("s")/"K" and "K"("t")/"K" are finite. Thus "K"("s", "t")/"K" is also finite, as well as the subextensions "K"("s" + "t")/"K", "K"("st")/"K" and "K"(1/"s")/"K" (if "s" ≠ 0). It follows that "s" + "t", "st" and 1/"s" are all algebraic.
An "algebraic extension" is an extension such that every element of "L" is algebraic over "K". Equivalently, an algebraic extension is an extension that is generated by algebraic elements. For example, ℚ(√2, √3) is an algebraic extension of ℚ, because √2 and √3 are algebraic over ℚ.
A simple extension is algebraic if and only if it is finite. This implies that an extension is algebraic if and only if it is the union of its finite subextensions, and that every finite extension is algebraic.
Every field "K" has an algebraic closure, which is, up to isomorphism, the largest extension field of "K" that is algebraic over "K", and also the smallest extension field such that every polynomial with coefficients in "K" has a root in it. For example, ℂ is an algebraic closure of ℝ, but not an algebraic closure of ℚ, as it is not algebraic over ℚ (for example, π is not algebraic over ℚ).
Given a field extension "L"/"K", a subset "S" of "L" is called algebraically independent over "K" if no non-trivial polynomial relation with coefficients in "K" exists among the elements of "S". The largest cardinality of an algebraically independent set is called the transcendence degree of "L"/"K". It is always possible to find a set "S", algebraically independent over "K", such that "L"/"K"("S") is algebraic. Such a set "S" is called a transcendence basis of "L"/"K". All transcendence bases have the same cardinality, equal to the transcendence degree of the extension. An extension "L"/"K" is said to be purely transcendental if and only if there exists a transcendence basis "S" of "L"/"K" such that "L" = "K"("S"). Such an extension has the property that all elements of "L" except those of "K" are transcendental over "K"; however, there are extensions with this property which are not purely transcendental. A class of such extensions takes the form "L"/"K" where both "L" and "K" are algebraically closed. In addition, if "L"/"K" is purely transcendental and "S" is a transcendence basis of the extension, it doesn't necessarily follow that "L" = "K"("S"). For example, consider the extension ℚ("x", √"x")/ℚ, where "x" is transcendental over ℚ. The set {"x"} is algebraically independent since "x" is transcendental. Obviously, the extension ℚ("x", √"x")/ℚ("x") is algebraic, hence {"x"} is a transcendence basis. It doesn't generate the whole extension because there is no polynomial expression in "x" for √"x". But it is easy to see that {√"x"} is a transcendence basis that generates ℚ("x", √"x"), so this extension is indeed purely transcendental.
An algebraic extension "L"/"K" is called normal if every irreducible polynomial in "K"["X"] that has a root in "L" completely factors into linear factors over "L". Every algebraic extension "F"/"K" admits a normal closure "L", which is an extension field of "F" such that "L"/"K" is normal and which is minimal with this property.
An algebraic extension "L"/"K" is called separable if the minimal polynomial of every element of "L" over "K" is separable, i.e., has no repeated roots in an algebraic closure over "K". A Galois extension is a field extension that is both normal and separable.
The primitive element theorem states that every finite separable extension has a primitive element (i.e. is simple).
Given any field extension "L"/"K", we can consider its automorphism group Aut("L"/"K"), consisting of all field automorphisms "α": "L" → "L" with "α"("x") = "x" for all "x" in "K". When the extension is Galois this automorphism group is called the Galois group of the extension. Extensions whose Galois group is abelian are called abelian extensions.
For a given field extension "L"/"K", one is often interested in the intermediate fields "F" (subfields of "L" that contain "K"). The significance of Galois extensions and Galois groups is that they allow a complete description of the intermediate fields: there is a bijection between the intermediate fields and the subgroups of the Galois group, described by the fundamental theorem of Galois theory.
Field extensions can be generalized to ring extensions which consist of a ring and one of its subrings. A closer non-commutative analog are central simple algebras (CSAs) – ring extensions over a field, which are simple algebras (no non-trivial 2-sided ideals, just as for a field) and where the center of the ring is exactly the field. For example, the only finite field extension of the real numbers is the complex numbers, while the quaternions are a central simple algebra over the reals, and all CSAs over the reals are Brauer equivalent to the reals or the quaternions. CSAs can be further generalized to Azumaya algebras, where the base field is replaced by a commutative local ring.
Given a field extension, one can "extend scalars" on associated algebraic objects. For example, given a real vector space, one can produce a complex vector space via complexification. In addition to vector spaces, one can perform extension of scalars for associative algebras defined over the field, such as polynomials or group algebras and the associated group representations. Extension of scalars of polynomials is often used implicitly, by just considering the coefficients as being elements of a larger field, but may also be considered more formally. Extension of scalars has numerous applications, as discussed in extension of scalars: applications.
Flood fill
Flood fill, also called seed fill, is an algorithm that determines the area connected to a given node in a multi-dimensional array. It is used in the "bucket" fill tool of paint programs to fill connected, similarly-colored areas with a different color, and in games such as Go and Minesweeper for determining which pieces are cleared.
The flood-fill algorithm takes three parameters: a start node, a target color, and a replacement color. The algorithm looks for all nodes in the array that are connected to the start node by a path of the target color and changes them to the replacement color. There are many ways in which the flood-fill algorithm can be structured, but they all make use of a queue or stack data structure, explicitly or implicitly.
Depending on whether we consider nodes touching at the corners connected or not, we have two variations: eight-way and four-way respectively.
One implicitly stack-based (recursive) flood-fill implementation (for a two-dimensional array) goes as follows:
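The recursive implementation described above can be sketched in Python (an illustrative rendering, not the article's original listing; the function name and 2-D-list representation are assumptions):

```python
def flood_fill(image, x, y, target, replacement):
    """Recursive 4-way flood fill on a 2-D list of color values."""
    if target == replacement:
        return  # nothing to do, and recursing would never terminate
    if not (0 <= y < len(image) and 0 <= x < len(image[0])):
        return  # outside the array
    if image[y][x] != target:
        return  # not part of the region
    image[y][x] = replacement
    flood_fill(image, x + 1, y, target, replacement)  # east
    flood_fill(image, x - 1, y, target, replacement)  # west
    flood_fill(image, x, y + 1, target, replacement)  # south
    flood_fill(image, x, y - 1, target, replacement)  # north
```

Each call uses one stack frame per pending pixel, which is exactly the implicit stack usage that makes this variant impractical when stack space is constrained.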
Though easy to understand, the implementation of the algorithm used above is impractical in languages and environments where stack space is severely constrained (e.g. Java applets).
An explicitly queue-based implementation (sometimes called "Forest Fire algorithm") is shown in pseudo-code below. It is similar to the simple recursive solution, except that instead of making recursive calls, it pushes the nodes onto a queue for consumption:
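A Python rendering of this queue-based variant (again an illustrative sketch, not the article's pseudocode): nodes are recolored when enqueued rather than when dequeued, so no node can enter the queue twice.

```python
from collections import deque

def flood_fill(image, x, y, target, replacement):
    """Queue-based 4-way flood fill ("forest fire"): no recursion."""
    if target == replacement or image[y][x] != target:
        return
    q = deque([(x, y)])
    image[y][x] = replacement  # color on enqueue, not on dequeue
    while q:
        cx, cy = q.popleft()
        for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                       (cx, cy + 1), (cx, cy - 1)):
            if (0 <= ny < len(image) and 0 <= nx < len(image[0])
                    and image[ny][nx] == target):
                image[ny][nx] = replacement
                q.append((nx, ny))
```

Swapping the `deque` for a list used as a stack (`pop()` instead of `popleft()`) gives the explicitly stack-based variant with a different fill order but the same result.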
Practical implementations intended for filling rectangular areas can use a loop for the west and east directions as an optimization to avoid the overhead of stack or queue management:
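One way to realize this optimization, sketched in Python (an assumption about the intended structure, since the original listing is not reproduced here): each popped seed is expanded into a full horizontal run by looping west and then east, and only the pixels directly above and below the run are pushed.

```python
def flood_fill(image, x, y, target, replacement):
    """Stack-based fill that sweeps each horizontal run with a loop,
    pushing only the pixels above and below the run onto the stack."""
    if target == replacement or image[y][x] != target:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if image[cy][cx] != target:
            continue  # already filled via another run
        # walk west to the start of this run of target-colored pixels
        w = cx
        while w > 0 and image[cy][w - 1] == target:
            w -= 1
        # fill eastward across the run, probing the rows above and below
        e = w
        while e < len(image[0]) and image[cy][e] == target:
            image[cy][e] = replacement
            for ny in (cy - 1, cy + 1):
                if 0 <= ny < len(image) and image[ny][e] == target:
                    stack.append((e, ny))
            e += 1
```

Only about one stack entry per run boundary is live at a time, instead of one per pixel, which is the overhead saving the text describes.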
Adapting the algorithm to use an additional array to store the shape of the region allows generalization to cover "fuzzy" flood filling, where an element can differ by up to a specified threshold from the source symbol. Using this additional array as an alpha channel allows the edges of the filled region to blend somewhat smoothly with the not-filled region.
A method exists that uses essentially no memory for four-connected regions by pretending to be a painter trying to paint the region without painting themselves into a corner. This is also a method for solving mazes. The four pixels making the primary boundary are examined to see what action should be taken. The painter could find themselves in one of several conditions:
Where a path or boundary is to be followed, the right-hand rule is used. The painter follows the region by placing their right-hand on the wall (the boundary of the region) and progressing around the edge of the region without removing their hand.
For case #1, the painter paints (fills) the pixel the painter is standing upon and stops the algorithm.
For case #2, a path leading out of the area exists. Paint the pixel the painter is standing upon and move in the direction of the open path.
For case #3, the two boundary pixels define a path which, if we painted the current pixel, may block us from ever getting back to the other side of the path. We need a "mark" to define where we are and which direction we are heading to see if we ever get back to exactly the same pixel. If we already created such a "mark", then we preserve our previous mark and move to the next pixel following the right-hand rule.
A mark is used for the first 2-pixel boundary that is encountered to remember where the passage started and in what direction the painter was moving. If the mark is encountered again and the painter is traveling in the same direction, then the painter knows that it is safe to paint the square with the mark and to continue in the same direction. This is because (through some unknown path) the pixels on the other side of the mark can be reached and painted in the future. The mark is removed for future use.
If the painter encounters the mark but is going in a different direction, then some sort of loop has occurred, which caused the painter to return to the mark. This loop must be eliminated. The mark is picked up, and the painter then proceeds in the direction indicated previously by the mark using a left-hand rule for the boundary (similar to the right-hand rule but using the painter's left hand). This continues until an intersection is found (with three or more open boundary pixels). Still using the left-hand rule the painter now searches for a simple passage (made by two boundary pixels). Upon finding this two-pixel boundary path, that pixel is painted. This breaks the loop and allows the algorithm to continue.
For case #4, we need to check the opposite 8-connected corners to see whether they are filled or not. If either or both are filled, then this creates a many-path intersection and cannot be filled. If both are empty, then the current pixel can be painted and the painter can move following the right-hand rule.
The algorithm trades time for memory. For simple shapes it is very efficient. However, if the shape is complex with many features, the algorithm spends a large amount of time tracing the edges of the region trying to ensure that all can be painted.
This algorithm was first available commercially in 1981 on a Vicom Image Processing system manufactured by Vicom Systems, Inc. The classic recursive flood fill algorithm was available on this system as well.
This is a pseudocode implementation of an optimal fixed-memory flood-fill algorithm written in structured English:
The variables:
cur, mark, and mark2 each hold either pixel coordinates or a null value
cur-dir, mark-dir, and mark2-dir each hold a direction (left, right, up, or down)
backtrack and findloop each hold boolean values
count is an integer
The algorithm:
The algorithm can be sped up by filling lines. Instead of pushing each potential future pixel coordinate on the stack, it inspects the neighbour lines (previous and next) to find adjacent segments that may be filled in a future pass; the coordinates (either the start or the end) of the line segment are pushed on the stack. In most cases this scanline algorithm is at least an order of magnitude faster than the per-pixel one.
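A sketch of this scanline strategy in Python (illustrative; the helper name and per-run bookkeeping are assumptions): whole runs are filled at once, and only the starting coordinate of each distinct adjacent segment on the neighbouring lines is pushed for a future pass.

```python
def scanline_fill(image, x, y, target, replacement):
    """Scanline flood fill: fills whole horizontal runs and pushes one
    seed per adjacent segment found on the lines above and below."""
    if target == replacement or image[y][x] != target:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if image[cy][cx] != target:
            continue  # segment already handled
        # find the extent [w, e) of the run containing (cx, cy)
        w = cx
        while w > 0 and image[cy][w - 1] == target:
            w -= 1
        e = cx
        while e < len(image[0]) and image[cy][e] == target:
            e += 1
        for i in range(w, e):
            image[cy][i] = replacement
        # push one seed per distinct adjacent segment on each neighbour line
        for ny in (cy - 1, cy + 1):
            if not 0 <= ny < len(image):
                continue
            in_run = False
            for i in range(w, e):
                if image[ny][i] == target and not in_run:
                    stack.append((i, ny))  # start of an adjacent segment
                    in_run = True
                elif image[ny][i] != target:
                    in_run = False
```

Because the stack holds one entry per segment rather than one per pixel, both the stack traffic and the per-pixel overhead drop sharply, which is where the order-of-magnitude speedup comes from.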
Efficiency: each pixel is checked once.
Version 0.46 of Inkscape includes a bucket fill tool, giving output similar to ordinary bitmap operations and indeed using one: the canvas is rendered, a flood fill operation is performed on the selected area and the result is then traced back to a path. It uses the concept of a boundary condition.
The primary technique used to control a flood fill will either be data-centric or process-centric. A data-centric approach can use either a stack or a queue to keep track of seed pixels that need to be checked. A process-centric algorithm must necessarily use a stack.
A 4-way flood-fill algorithm that uses the adjacency technique and a queue as its seed pixel store yields an expanding lozenge-shaped fill.
Efficiency: 4 pixels checked for each pixel filled (8 for an 8-way fill).
A 4-way flood-fill algorithm that uses the adjacency technique and a stack as its seed pixel store yields a linear fill with "gaps filled later" behaviour. This approach can be particularly seen in older 8-bit computer games, such as those created with "Graphic Adventure Creator".
Efficiency: 4 pixels checked for each pixel filled (8 for an 8-way fill).
Francis of Assisi
Francis of Assisi, born Giovanni di Pietro di Bernardone, informally named Francesco (1181/1182 – 3 October 1226), was an Italian Catholic friar, deacon, philosopher, mystic and preacher. He founded the men's Order of Friars Minor, the women's Order of Saint Clare, the Third Order of Saint Francis and the Custody of the Holy Land. Francis is one of the most venerated religious figures in Christianity.
Pope Gregory IX canonized Francis on 16 July 1228. Along with Saint Catherine of Siena, he was designated Patron saint of Italy. He later became associated with patronage of animals and the natural environment, and it became customary for churches to hold ceremonies blessing animals on or near his feast day of 4 October. In 1219, he went to Egypt in an attempt to convert the Sultan and put an end to the conflict of the Crusades. By this point, the Franciscan Order had grown to such an extent that its primitive organizational structure was no longer sufficient. He returned to Italy to organize the Order.
Once his community was authorized by the Pope, he withdrew increasingly from external affairs. Francis is also known for his love of the Eucharist. In 1223, Francis arranged for the first Christmas live nativity scene. According to Christian tradition, in 1224 he received the stigmata during the apparition of Seraphic angels in a religious ecstasy, which would make him the second person in Christian tradition after St. Paul (Galatians 6:17) to bear the wounds of Christ's Passion. He died during the evening hours of 3 October 1226, while listening to a reading he had requested of Psalm 142 (141).
Francis of Assisi was born in late 1181 or early 1182, one of several children of an Italian father, Pietro di Bernardone dei Moriconi, a prosperous silk merchant, and a French mother, Pica de Bourlemont, about whom little is known except that she was a noblewoman originally from Provence. Pietro was in France on business when Francis was born in Assisi, and Pica had him baptized as Giovanni. Upon his return to Assisi, Pietro took to calling his son Francesco ("the Frenchman"), possibly in honor of his commercial success and enthusiasm for all things French. Since the child was renamed in infancy, the change can hardly have had anything to do with his aptitude for learning French, as some have thought.
Indulged by his parents, Francis lived the high-spirited life typical of a wealthy young man. As a youth, Francesco became a devotee of troubadours and was fascinated with all things Transalpine. He was handsome, witty, gallant, and delighted in fine clothes. He spent money lavishly. Although many hagiographers remark about his bright clothing, rich friends, and love of pleasures, his displays of disillusionment toward the world that surrounded him came fairly early in his life, as is shown in the "story of the beggar". In this account, he was selling cloth and velvet in the marketplace on behalf of his father when a beggar came to him and asked for alms. At the conclusion of his business deal, Francis abandoned his wares and ran after the beggar. When he found him, Francis gave the man everything he had in his pockets. His friends quickly chided and mocked him for his act of charity. When he got home, his father scolded him in rage.
Around 1202, he joined a military expedition against Perugia and was taken as a prisoner at Collestrada, spending a year as a captive. An illness caused him to re-evaluate his life. It is possible that his spiritual conversion was a gradual process rooted in this experience. Upon his return to Assisi in 1203, Francis returned to his carefree life. In 1205, Francis left for Apulia to enlist in the army of Walter III, Count of Brienne. A strange vision made him return to Assisi, having lost his taste for the worldly life. According to hagiographic accounts, thereafter he began to avoid the sports and the feasts of his former companions. In response, they asked him laughingly whether he was thinking of marrying, to which he answered, "Yes, a fairer bride than any of you have ever seen", meaning his "Lady Poverty".
On a pilgrimage to Rome, he joined the poor in begging at St. Peter's Basilica. He spent some time in lonely places, asking God for spiritual enlightenment. He said he had a mystical vision of Jesus Christ in the forsaken country chapel of San Damiano, just outside Assisi, in which the Icon of Christ Crucified said to him, "Francis, Francis, go and repair My house which, as you can see, is falling into ruins." He took this to mean the ruined church in which he was presently praying, and so he sold some cloth from his father's store to assist the priest there for this purpose. When the priest refused to accept the ill-gotten gains, an indignant Francis threw the coins on the floor.
In order to avoid his father's wrath, Francis hid in a cave near San Damiano for about a month. When he returned to town, hungry and dirty, he was dragged home by his father, beaten, bound, and locked in a small storeroom. Freed by his mother during Bernardone's absence, Francis returned at once to San Damiano, where he found shelter with the officiating priest, but he was soon cited before the city consuls by his father. The latter, not content with having recovered the scattered gold from San Damiano, sought also to force his son to forego his inheritance by way of restitution. In the midst of legal proceedings before the Bishop of Assisi, Francis renounced his father and his patrimony. Some accounts report that he stripped himself naked in token of this renunciation, and the Bishop covered him with his own cloak.
For the next couple of months, Francis wandered as a beggar in the hills behind Assisi. He spent some time at a neighbouring monastery working as a scullion. He then went to Gubbio, where a friend gave him, as an alms, the cloak, girdle, and staff of a pilgrim. Returning to Assisi, he traversed the city begging stones for the restoration of St. Damiano's. These he carried to the old chapel, set in place himself, and so at length rebuilt it. Over the course of two years, he embraced the life of a penitent, during which he restored several ruined chapels in the countryside around Assisi, among them San Pietro in Spina (in the area of San Petrignano in the valley about a kilometer from Rivotorto, today on private property and once again in ruin); and the Porziuncola, the little chapel of St. Mary of the Angels in the plain just below the town. This later became his favorite abode. By degrees he took to nursing lepers, in the lazar houses near Assisi.
One morning in February 1208, Francis was hearing Mass in the chapel of St. Mary of the Angels, near which he had then built himself a hut. The Gospel of the day was the "Commissioning of the Twelve" from the Book of Matthew. The disciples are to go and proclaim that the Kingdom of God is at hand. Francis was inspired to devote himself to a life of poverty. Having obtained a coarse woolen tunic, the dress then worn by the poorest Umbrian peasants, he tied it around him with a knotted rope and went forth at once exhorting the people of the country-side to penance, brotherly love, and peace. Francis' preaching to ordinary people was unusual since he had no license to do so.
His example drew others to him. Within a year Francis had eleven followers. The brothers lived a simple life in the deserted lazar house of Rivo Torto near Assisi; but they spent much of their time wandering through the mountainous districts of Umbria, making a deep impression upon their hearers by their earnest exhortations.
In 1209 he composed a simple rule for his followers ("friars"), the "Regula primitiva" or "Primitive Rule", which came from verses in the Bible. The rule was "To follow the teachings of our Lord Jesus Christ and to walk in his footsteps". He then led his first eleven followers to Rome to seek permission from Pope Innocent III to found a new religious Order. Upon entry to Rome, the brothers encountered Bishop Guido of Assisi, who had in his company Giovanni di San Paolo, the Cardinal Bishop of Sabina. The Cardinal, who was the confessor of Pope Innocent III, was immediately sympathetic to Francis and agreed to represent Francis to the pope. Reluctantly, Pope Innocent agreed to meet with Francis and the brothers the next day. After several days, the pope agreed to admit the group informally, adding that when God increased the group in grace and number, they could return for an official admittance. The group was tonsured. This was important in part because it recognized Church authority and prevented his following from possible accusations of heresy, as had happened to the Waldensians decades earlier. Though a number of the Pope's counselors considered the mode of life proposed by Francis as unsafe and impractical, following a dream in which he saw Francis holding up the Basilica of St. John Lateran (the cathedral of Rome, thus the 'home church' of all Christendom), he decided to endorse Francis' Order. This occurred, according to tradition, on 16 April 1210, and constituted the official founding of the Franciscan Order. The group, then the "Lesser Brothers" ("Order of Friars Minor" also known as the "Franciscan Order" or the "Seraphic Order"), were centered in the Porziuncola and preached first in Umbria, before expanding throughout Italy. Francis chose never to be ordained a priest, although he was later ordained a deacon.
From then on, the new Order grew quickly with new vocations. Hearing Francis preaching in the church of San Rufino in Assisi in 1211, the young noblewoman Clare of Assisi became deeply touched by his message and realized her calling. Her cousin Rufino, the only male member of the family in their generation, was also attracted to the new Order, which he joined. On the night of Palm Sunday, 28 March 1212, Clare clandestinely left her family's palace. Francis received her at the Porziuncola and thereby established the Order of Poor Ladies. This was an Order for women, and he gave Clare a religious habit, or garment, similar to his own, before lodging her in a nearby monastery of Benedictine nuns until he could provide a suitable retreat for her, and for her younger sister, Caterina, and the other young women who had joined her. Later he transferred them to San Damiano, to a few small huts or cells of wattle, straw, and mud, and enclosed by a hedge. This became the first monastery of the Second Franciscan Order, now known as Poor Clares.
For those who could not leave their homes, he later formed the Third Order of Brothers and Sisters of Penance, a fraternity composed of either laity or clergy whose members neither withdrew from the world nor took religious vows. Instead, they observed the principles of Franciscan life in their daily lives. Before long, this Third Order grew beyond Italy. The Third Order is now titled the Secular Franciscan Order.
Determined to bring the Gospel to all peoples of the World and convert them, after the example of the first disciples of Jesus, Francis sought on several occasions to take his message out of Italy. In the late spring of 1212, he set out for Jerusalem, but was shipwrecked by a storm on the Dalmatian coast, forcing him to return to Italy. On 8 May 1213, he was given the use of the mountain of La Verna (Alverna) as a gift from Count Orlando di Chiusi, who described it as “eminently suitable for whoever wishes to do penance in a place remote from mankind”. The mountain would become one of his favourite retreats for prayer.
In the same year, Francis sailed for Morocco, but this time an illness forced him to break off his journey in Spain. Back in Assisi, several noblemen (among them Tommaso da Celano, who would later write the biography of St. Francis), and some well-educated men joined his Order. In 1215, Francis may have gone to Rome for the Fourth Lateran Council, but that is not certain. During this time, he probably met a canon, Dominic de Guzman (later to be Saint Dominic, the founder of the Friars Preachers, another Catholic religious order). In 1217, he offered to go to France. Cardinal Ugolino of Segni (the future Pope Gregory IX), an early and important supporter of Francis, advised him against this and said that he was still needed in Italy.
In 1219, accompanied by another friar and hoping to convert the Sultan of Egypt or win martyrdom in the attempt, Francis went to Egypt during the Fifth Crusade where a Crusader army had been encamped for over a year besieging the walled city of Damietta upstream from the mouth of one of the main channels of the Nile. The Sultan, al-Kamil, a nephew of Saladin, had succeeded his father as Sultan of Egypt in 1218 and was encamped upstream of Damietta, unable to relieve it. A bloody and futile attack on the city was launched by the Christians on 29 August 1219, following which both sides agreed to a ceasefire which lasted four weeks. It was most probably during this interlude that Francis and his companion crossed the Muslims' lines and were brought before the Sultan, remaining in his camp for a few days. The visit is reported in contemporary Crusader sources and in the earliest biographies of Francis, but they give no information about what transpired during the encounter beyond noting that the Sultan received Francis graciously and that Francis preached to the Muslims without effect, returning unharmed to the Crusader camp. No contemporary Arab source mentions the visit. One detail, added by Bonaventure in the official life of Francis (written forty years after the event), has Francis offering to challenge the Sultan's "priests" to trial-by-fire in order to prove the veracity of the Christian Gospel.
Such an incident is alluded to in a scene in the late 13th-century fresco cycle, attributed to Giotto, in the upper basilica at Assisi. It has been suggested that the winged figures atop the columns piercing the roof of the building on the left of the scene are not idols (as Erwin Panofsky had proposed) but are part of the secular iconography of the sultan, affirming his worldly power which, as the scene demonstrates, is limited even as regards his own "priests" who shun the challenge. Although Bonaventure asserts that the sultan refused to permit the challenge, subsequent biographies went further, claiming that a fire was actually kindled which Francis unhesitatingly entered without suffering burns. The scene in the fresco adopts a position midway between the two extremes. Since the idea was put forward by the German art historian, Friedrich Rintelen in 1912, many scholars have expressed doubt that Giotto was the author of the Upper Church frescoes.
According to some late sources, the Sultan gave Francis permission to visit the sacred places in the Holy Land and even to preach there. All that can safely be asserted is that Francis and his companion left the Crusader camp for Acre, from where they embarked for Italy in the latter half of 1220. Drawing on a 1267 sermon by Bonaventure, later sources report that the Sultan secretly converted or accepted a death-bed baptism as a result of the encounter with Francis. The Franciscan Order has been present in the Holy Land almost uninterruptedly since 1217 when Brother Elias arrived at Acre. It received concessions from the Mameluke Sultan in 1333 with regard to certain Holy Places in Jerusalem and Bethlehem, and (so far as concerns the Catholic Church) jurisdictional privileges from Pope Clement VI in 1342.
By this time, the growing Order of friars was divided into provinces and groups were sent to France, Germany, Hungary, and Spain and to the East. Upon receiving a report of the martyrdom of five brothers in Morocco, Francis returned to Italy via Venice. Cardinal Ugolino di Conti was then nominated by the Pope as the protector of the Order. Another reason for Francis' return to Italy was that the Franciscan Order had grown at an unprecedented rate compared to previous religious orders, but its organizational sophistication had not kept up with this growth and had little more to govern it than Francis' example and simple rule. To address this problem, Francis prepared a new and more detailed Rule, the "First Rule" or "Rule Without a Papal Bull" ("Regula prima", "Regula non bullata"), which again asserted devotion to poverty and the apostolic life. However, it also introduced greater institutional structure, though this was never officially endorsed by the pope.
On 29 September 1220, Francis handed over the governance of the Order to Brother Peter Catani at the Porziuncola, but Brother Peter died only five months later, on 10 March 1221, and was buried there. When numerous miracles were attributed to the deceased brother, people started to flock to the Porziuncola, disturbing the daily life of the Franciscans. Francis then prayed, asking Peter to stop the miracles and to obey in death as he had obeyed during his life.
The reports of miracles ceased. Brother Peter was succeeded by Brother Elias as Vicar of Francis. Two years later, Francis modified the "First Rule", creating the "Second Rule" or "Rule With a Bull", which was approved by Pope Honorius III on 29 November 1223. As the official Rule of the Order, it called on the friars "to observe the Holy Gospel of our Lord Jesus Christ, living in obedience without anything of our own and in chastity". In addition, it set regulations for discipline, preaching, and entry into the Order. Once the Rule was endorsed by the Pope, Francis withdrew increasingly from external affairs. During 1221 and 1222, Francis crossed Italy, first as far south as Catania in Sicily and afterward as far north as Bologna.
While he was praying on the mountain of Verna, during a forty-day fast in preparation for Michaelmas (29 September), Francis is said to have had a vision on or about 14 September 1224, the Feast of the Exaltation of the Cross, as a result of which he received the stigmata. Brother Leo, who had been with Francis at the time, left a clear and simple account of the event, the first definite account of the phenomenon of stigmata. "Suddenly he saw a vision of a seraph, a six-winged angel on a cross. This angel gave him the gift of the five wounds of Christ." Suffering from these stigmata and from trachoma, Francis received care in several cities (Siena, Cortona, Nocera) to no avail. In the end, he was brought back to a hut next to the Porziuncola. Here, in the place where the Franciscan movement began, and feeling that the end of his life was approaching, he spent his last days dictating his spiritual testament. He died on the evening of Saturday, 3 October 1226, singing Psalm 142 (141), "Voce mea ad Dominum".
On 16 July 1228, he was pronounced a saint by Pope Gregory IX (the former cardinal Ugolino di Conti, a friend of Saint Francis and Cardinal Protector of the Order). The next day, the Pope laid the foundation stone for the Basilica of Saint Francis in Assisi. Francis was buried on 25 May 1230, under the Lower Basilica, but his tomb was soon hidden on orders of Brother Elias to protect it from Saracen invaders. His exact burial place remained unknown until it was re-discovered in 1818. Pasquale Belli then constructed for the remains a crypt in neo-classical style in the Lower Basilica. It was refashioned between 1927 and 1930 into its present form by Ugo Tarchi, stripping the wall of its marble decorations. In 1978, the remains of Saint Francis were examined and confirmed by a commission of scholars appointed by Pope Paul VI, and put into a glass urn in the ancient stone tomb.
Francis set out to imitate Christ and literally carry out his work. This is important in understanding Francis' character, his affinity for the Eucharist and respect for the priests who carried out the sacrament. He preached: "Your God is of your flesh, He lives in your nearest neighbor, in every man."
He and his followers celebrated and even venerated poverty, which was so central to his character that in his last written work, the Testament, he said that absolute personal and corporate poverty was the essential lifestyle for the members of his order.
He believed that nature itself was the mirror of God. He called all creatures his "brothers" and "sisters", and even preached to the birds and supposedly persuaded a wolf in Gubbio to stop attacking some locals if they agreed to feed the wolf. In his "Canticle of the Creatures" ("Praises of Creatures" or "Canticle of the Sun"), he mentioned the "Brother Sun" and "Sister Moon", the wind and water. His deep sense of brotherhood under God embraced others, and he declared that "he considered himself no friend of Christ if he did not cherish those for whom Christ died".
Francis' visit to Egypt and attempted rapprochement with the Muslim world had far-reaching consequences, long past his own death, since after the fall of the Crusader Kingdom, it would be the Franciscans, of all Catholics, who would be allowed to stay on in the Holy Land and be recognized as "Custodians of the Holy Land" on behalf of the Catholic Church.
At Greccio near Assisi, around 1220, Francis celebrated Christmas by setting up the first known "presepio" or "crèche" (Nativity scene). His nativity imagery reflected the scene in traditional paintings. He used real animals to create a living scene so that the worshipers could contemplate the birth of the child Jesus in a direct way, making use of the senses, especially sight. Both Thomas of Celano and Saint Bonaventure, biographers of Saint Francis, tell how he used only a straw-filled manger (feeding trough) set between a real ox and donkey. According to Thomas, it was beautiful in its simplicity, with the manger acting as the altar for the Christmas Mass.
Francis preached the Christian doctrine that the world was created good and beautiful by God but suffers a need for redemption because of human sin. As someone who saw God reflected in nature, "St. Francis was a great lover of God's creation..." In the Canticle of the Sun he gives God thanks for Brother Sun, Sister Moon, Brother Wind, Water, Fire, and Earth, all of which he sees as rendering praise to God.
Many of the stories that surround the life of Saint Francis say that he had a great love for animals and the environment. The "Fioretti" ("Little Flowers"), is a collection of legends and folklore that sprang up after the Saint's death. One account describes how one day, while Francis was traveling with some companions, they happened upon a place in the road where birds filled the trees on either side. Francis told his companions to "wait for me while I go to preach to my sisters the birds." The birds surrounded him, intrigued by the power of his voice, and not one of them flew away. He is often portrayed with a bird, typically in his hand.
Another legend from the "Fioretti" tells that in the city of Gubbio, where Francis lived for some time, was a wolf "terrifying and ferocious, who devoured men as well as animals". Francis went up into the hills and when he found the wolf, he made the sign of the cross and commanded the wolf to come to him and hurt no one. Then Francis led the wolf into the town, and surrounded by startled citizens made a pact between them and the wolf. Because the wolf had "done evil out of hunger", the townsfolk were to feed the wolf regularly. In return, the wolf would no longer prey upon them or their flocks. In this manner Gubbio was freed from the menace of the predator.
On 29 November 1979, Pope John Paul II declared Saint Francis the Patron Saint of Ecology. On 28 March 1982, John Paul II said that Saint Francis' love and care for creation was a challenge for contemporary Catholics and a reminder "not to behave like dissident predators where nature is concerned, but to assume responsibility for it, taking all care so that everything stays healthy and integrated, so as to offer a welcoming and friendly environment even to those who succeed us." The same Pope wrote on the occasion of the World Day of Peace, 1 January 1990, the saint of Assisi "offers Christians an example of genuine and deep respect for the integrity of creation ..." He went on to make the point that: "As a friend of the poor who was loved by God's creatures, Saint Francis invited all of creation – animals, plants, natural forces, even Brother Sun and Sister Moon – to give honor and praise to the Lord. The poor man of Assisi gives us striking witness that when we are at peace with God we are better able to devote ourselves to building up that peace with all creation which is inseparable from peace among all peoples."
It is a popular practice on his feast day, 4 October, for people to bring their pets and other animals to church for a blessing.
Saint Francis' feast day is observed on 4 October. A secondary feast in honor of the stigmata received by Saint Francis, celebrated on 17 September, was inserted in the General Roman Calendar in 1585 (later than the Tridentine Calendar) and suppressed in 1604, but was restored in 1615. In the New Roman Missal of 1969, it was removed again from the General Calendar, as something of a duplication of the main feast on 4 October, and left to the calendars of certain localities and of the Franciscan Order. Wherever the traditional Roman Missal is used, however, the feast of the Stigmata remains in the General Calendar.
Saint Francis is honored in the Church of England, the Anglican Church of Canada, the Episcopal Church USA, the Old Catholic Churches, the Evangelical Lutheran Church in America, and other churches and religious communities on 4 October.
On 13 March 2013, upon his election as Pope, Archbishop and Cardinal Jorge Mario Bergoglio of Argentina chose Francis as his papal name in honor of Saint Francis of Assisi, becoming Pope Francis.
At his first audience on 16 March 2013, Pope Francis told journalists that he had chosen the name in honor of Saint Francis of Assisi, and had done so because he was especially concerned for the well-being of the poor. He explained that, as it was becoming clear during the conclave voting that he would be elected the new bishop of Rome, the Brazilian Cardinal Cláudio Hummes had embraced him and whispered, "Don't forget the poor", which had made Bergoglio think of the saint. Bergoglio had previously expressed his admiration for St. Francis, explaining that "He brought to Christianity an idea of poverty against the luxury, pride, vanity of the civil and ecclesiastical powers of the time. He changed history." Bergoglio is the first pope to have taken the name Francis.
On 18 June 1939, Pope Pius XII named Francis a joint Patron Saint of Italy along with Saint Catherine of Siena with the apostolic letter "Licet Commissa". Pope Pius also mentioned the two saints in the laudative discourse he pronounced on 5 May 1949, in the Church of Santa Maria Sopra Minerva.
St. Francis is the patron of animals, merchants, and ecology. He is also considered the patron saint against dying alone; patron saint against fire; patron saint of the Franciscan Order and Catholic Action; patron saint of families, peace, and needleworkers. He is the patron saint of many dioceses and other locations around the world, including: Italy; San Pawl il-Bahar, Malta; Freising, Germany; Lancaster, England; Kottapuram, India; San Francisco de Malabon, Philippines (General Trias City); San Francisco, California; Santa Fe, New Mexico; Colorado; Salina, Kansas; Metuchen, New Jersey; and Quibdó, Colombia.
Emerging since the 19th century, there are several Protestant adherents and groups, sometimes organized as religious orders, which strive to adhere to the teachings and spiritual disciplines of Saint Francis.
The 20th-century High Church Movement gave birth to Franciscan-inspired orders among the revival of religious orders in Protestant Christianity.
One of the results of the Oxford Movement in the Anglican Church during the 19th century was the re-establishment of religious orders, including some of Franciscan inspiration. The principal Anglican communities in the Franciscan tradition are the Community of St. Francis (women, founded 1905), the Poor Clares of Reparation (P.C.R.), the Society of Saint Francis (men, founded 1934), and the Community of St. Clare (women, enclosed).
One U.S.-founded order within the Anglican communion is The Little Sisters of St. Clare, an order of Clares founded in Seattle (Diocese of Olympia).
There are also some small Franciscan communities within European Protestantism and the Old Catholic Church. There are some Franciscan orders in Lutheran Churches, including the Order of Lutheran Franciscans, the Evangelical Sisterhood of Mary, and the Evangelische Kanaan Franziskus-Bruderschaft (Kanaan Franciscan Brothers). In addition, there are associations of Franciscan inspiration not connected with a mainstream Christian tradition and describing themselves as ecumenical or dispersed.
The Anglican church retained the Catholic tradition of blessing animals on or near Francis' feast day of 4 October, and more recently Lutheran and other Protestant churches have adopted the practice.
St Francis' feast is celebrated at New Skete, an Orthodox Christian monastic community in Cambridge, New York.
Outside of Christianity, other individuals and movements are influenced by the example and teachings of Saint Francis. These include the popular philosopher Eckhart Tolle, who has made videos on the spirituality of Saint Francis.
The interfaith spiritual community of Skanda Vale also takes inspiration from the example of Saint Francis, and models itself as an interfaith Franciscan order.
In 2019, the Umbria tourist board was continuing the process of refurbishing the route from Florence to Rome that Francis is believed to have used. Called the Via di Francesco or Cammino di Francesco, the 550 kilometer St Francis Way "pilgrimage route" is intended for travel on foot or by bicycle.
For a complete list, see "The Franciscan Experience".
Saint Francis is considered the first Italian poet by literary critics. He believed commoners should be able to pray to God in their own language, and he wrote often in the dialect of Umbria instead of Latin. His writings are considered to have great literary and religious value.
The anonymous 20th-century prayer "Make Me an Instrument of Your Peace" is widely but erroneously attributed to Saint Francis.
The Franciscan Order promoted devotion to the life of Saint Francis from his canonization onwards. The order commissioned many works for Franciscan churches, either showing Saint Francis with sacred figures, or episodes from his life. There are large early fresco cycles in the Basilica of San Francesco d'Assisi, parts of which are shown above.
General Dynamics F-16 Fighting Falcon
The General Dynamics F-16 Fighting Falcon is a single-engine supersonic multirole fighter aircraft originally developed by General Dynamics for the United States Air Force (USAF). Designed as an air superiority day fighter, it evolved into a successful all-weather multirole aircraft. Over 4,600 aircraft have been built since production was approved in 1976. Although no longer being purchased by the U.S. Air Force, improved versions are being built for export customers. In 1993, General Dynamics sold its aircraft manufacturing business to the Lockheed Corporation, which in turn became part of Lockheed Martin after a 1995 merger with Martin Marietta.
The Fighting Falcon's key features include a frameless bubble canopy for better visibility, side-mounted control stick to ease control while maneuvering, an ejection seat reclined 30 degrees from vertical to reduce the effect of g-forces on the pilot, and the first use of a relaxed static stability/fly-by-wire flight control system which helps to make it an agile aircraft. The F-16 has an internal M61 Vulcan cannon and 11 locations for mounting weapons and other mission equipment. The F-16's official name is "Fighting Falcon", but "Viper" is commonly used by its pilots and crews, due to a perceived resemblance to a viper snake as well as the Colonial Viper starfighter on "Battlestar Galactica" which aired at the time the F-16 entered service.
In addition to active duty in the U.S. Air Force, Air Force Reserve Command, and Air National Guard units, the aircraft is also used by the USAF aerial demonstration team, the U.S. Air Force Thunderbirds, and as an adversary/aggressor aircraft by the United States Navy. The F-16 has also been procured to serve in the air forces of 25 other nations. As of 2015, it is the world's most numerous fixed-wing aircraft in military service.
Experiences in the Vietnam War revealed the need for air superiority fighters and better air-to-air training for fighter pilots. Based on his experiences in the Korean War and as a fighter tactics instructor in the early 1960s, Colonel John Boyd with mathematician Thomas Christie developed the energy–maneuverability theory to model a fighter aircraft's performance in combat. Boyd's work called for a small, lightweight aircraft that could maneuver with the minimum possible energy loss and which also incorporated an increased thrust-to-weight ratio. In the late 1960s, Boyd gathered a group of like-minded innovators who became known as the Fighter Mafia, and in 1969, they secured Department of Defense funding for General Dynamics and Northrop to study design concepts based on the theory.
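At the core of Boyd and Christie's energy–maneuverability theory is specific excess power, conventionally written P_s = V(T − D)/W: velocity times the difference between thrust and drag, divided by weight. A positive value means the aircraft can climb or accelerate during a maneuver; a negative value means it is bleeding energy. The sketch below illustrates the calculation with purely illustrative numbers, not measured data for any real fighter:

```python
def specific_excess_power(thrust_n, drag_n, velocity_ms, weight_n):
    """Specific excess power P_s = V * (T - D) / W, in metres per second.

    Positive P_s: the aircraft can convert surplus power into climb
    or acceleration; negative P_s: the maneuver is draining energy.
    """
    return velocity_ms * (thrust_n - drag_n) / weight_n

# Illustrative inputs only (not actual F-16 performance figures):
ps = specific_excess_power(thrust_n=100_000, drag_n=60_000,
                           velocity_ms=250, weight_n=110_000)
print(round(ps, 1))  # prints 90.9
```

Plotting P_s contours across speed and altitude for two aircraft is how E–M theory compares their maneuvering envelopes, which is what drove the call for a small, lightweight, high thrust-to-weight design.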
Air Force F-X proponents remained hostile to the concept because they perceived it as a threat to the F-15 program. However, the Air Force's leadership understood that its budget would not allow it to purchase enough F-15 aircraft to satisfy all of its missions. The Advanced Day Fighter concept, renamed "F-XX", gained civilian political support under the reform-minded Deputy Secretary of Defense David Packard, who favored the idea of competitive prototyping. As a result, in May 1971, the Air Force Prototype Study Group was established, with Boyd a key member, and two of its six proposals would be funded, one being the Lightweight Fighter (LWF). The Request for Proposals issued on 6 January 1972 called for a lightweight air-to-air day fighter with a good turn rate, acceleration, and range, optimized for combat at speeds of Mach 0.6–1.6 and at the altitudes where USAF studies predicted most future air combat would occur. The anticipated average flyaway cost of a production version was $3 million. This production plan, though, was only notional, as the USAF had no firm plans to procure the winner.
Five companies responded, and in 1972, the Air Staff selected General Dynamics' Model 401 and Northrop's P-600 for the follow-on prototype development and testing phase. GD and Northrop were awarded contracts worth $37.9 million and $39.8 million to produce the YF-16 and YF-17, respectively, with first flights of both prototypes planned for early 1974. To overcome resistance in the Air Force hierarchy, the "Fighter Mafia" and other LWF proponents successfully advocated the idea of complementary fighters in a high-cost/low-cost force mix. The "high/low mix" would allow the USAF to be able to afford sufficient fighters for its overall fighter force structure requirements. The mix gained broad acceptance by the time of the prototypes' flyoff, defining the relationship of the LWF and the F-15.
The YF-16 was developed by a team of General Dynamics engineers led by Robert H. Widmer. The first YF-16 was rolled out on 13 December 1973. Its 90-minute maiden flight was made at the Air Force Flight Test Center (AFFTC) at Edwards AFB, California, on 2 February 1974. Its actual first flight occurred accidentally during a high-speed taxi test on 20 January 1974. While gathering speed, a roll-control oscillation caused a fin of the port-side wingtip-mounted missile and then the starboard stabilator to scrape the ground, and the aircraft then began to veer off the runway. The test pilot, Phil Oestricher, decided to lift off to avoid a potential crash, safely landing six minutes later. The slight damage was quickly repaired and the official first flight occurred on time. The YF-16's first supersonic flight was accomplished on 5 February 1974, and the second YF-16 prototype first flew on 9 May 1974. This was followed by the first flights of Northrop's YF-17 prototypes on 9 June and 21 August 1974, respectively. During the flyoff, the YF-16s completed 330 sorties for a total of 417 flight hours; the YF-17s flew 288 sorties, covering 345 hours.
Increased interest turned the LWF into a serious acquisition program. North Atlantic Treaty Organization (NATO) allies Belgium, Denmark, the Netherlands, and Norway were seeking to replace their F-104G Starfighter fighter-bombers. In early 1974, they reached an agreement with the U.S. that if the USAF ordered the LWF winner, they would consider ordering it as well. The USAF also needed to replace its F-105 Thunderchief and F-4 Phantom II fighter-bombers. The U.S. Congress sought greater commonality in fighter procurements by the Air Force and Navy, and in August 1974 redirected Navy funds to a new Navy Air Combat Fighter (NACF) program that would be a navalized fighter-bomber variant of the LWF. The four NATO allies had formed the "Multinational Fighter Program Group" (MFPG) and pressed for a U.S. decision by December 1974; thus, the USAF accelerated testing.
To reflect this serious intent to procure a new fighter-bomber, the LWF program was rolled into a new Air Combat Fighter (ACF) competition in an announcement by U.S. Secretary of Defense James R. Schlesinger in April 1974. The ACF would not be a pure fighter, but multi-role, and Schlesinger made it clear that any ACF order would be in addition to the F-15, which extinguished opposition to the LWF. ACF also raised the stakes for GD and Northrop because it brought in competitors intent on securing what was touted at the time as "the arms deal of the century". These were Dassault-Breguet's proposed Mirage F1M-53, the Anglo-French SEPECAT Jaguar, and the proposed Saab 37E "Eurofighter". Northrop offered the P-530 Cobra, which was similar to the YF-17. The Jaguar and Cobra were dropped by the MFPG early on, leaving two European and the two U.S. candidates. On 11 September 1974, the U.S. Air Force confirmed plans to order the winning ACF design to equip five tactical fighter wings. Though computer modeling predicted a close contest, the YF-16 proved significantly quicker going from one maneuver to the next, and was the unanimous choice of those pilots that flew both aircraft.
On 13 January 1975, Secretary of the Air Force John L. McLucas announced the YF-16 as the winner of the ACF competition. The chief reasons given by the Secretary were the YF-16's lower operating costs, greater range, and maneuver performance that was "significantly better" than that of the YF-17, especially at supersonic speeds. Another advantage of the YF-16 – unlike the YF-17 – was its use of the Pratt & Whitney F100 turbofan engine, the same powerplant used by the F-15; such commonality would lower the cost of engines for both programs. Secretary McLucas announced that the USAF planned to order at least 650, possibly up to 1,400 production F-16s. In the Navy Air Combat Fighter (NACF) competition, on 2 May 1975 the Navy selected the YF-17 as the basis for what would become the McDonnell Douglas F/A-18 Hornet.
The U.S. Air Force initially ordered 15 "Full-Scale Development" (FSD) aircraft (11 single-seat and four two-seat models) for its flight test program, but the order was reduced to eight (six F-16A single-seaters and two F-16B two-seaters). The YF-16 design was altered for the production F-16. The fuselage was lengthened, a larger nose radome was fitted for the AN/APG-66 radar, wing area was increased, the tailfin height was decreased, the ventral fins were enlarged, two more stores stations were added, and a single door replaced the original nosewheel double doors. These modifications increased the F-16's weight by 25% over that of the YF-16.
The FSD F-16s were manufactured by General Dynamics in Fort Worth, Texas at United States Air Force Plant 4 in late 1975; the first F-16A rolled out on 20 October 1976 and first flew on 8 December. The initial two-seat model achieved its first flight on 8 August 1977. The initial production-standard F-16A flew for the first time on 7 August 1978 and its delivery was accepted by the USAF on 6 January 1979. The F-16 was given its formal nickname of "Fighting Falcon" on 21 July 1980, entering USAF operational service with the 34th Tactical Fighter Squadron, 388th Tactical Fighter Wing at Hill AFB in Utah on 1 October 1980.
On 7 June 1975, the four European partners, now known as the European Participation Group, signed up for 348 aircraft at the Paris Air Show. This was split among the European Participation Air Forces (EPAF) as 116 for Belgium, 58 for Denmark, 102 for the Netherlands, and 72 for Norway. Two European production lines, one in the Netherlands at Fokker's Schiphol-Oost facility and the other at SABCA's Gosselies plant in Belgium, would produce 184 and 164 units respectively. Norway's Kongsberg Vaapenfabrikk and Denmark's Terma A/S also manufactured parts and subassemblies for EPAF aircraft. European co-production was officially launched on 1 July 1977 at the Fokker factory. Beginning in November 1977, Fokker-produced components were sent to Fort Worth for fuselage assembly, then shipped back to Europe for final assembly of EPAF aircraft at the Belgian plant on 15 February 1978; deliveries to the Belgian Air Force began in January 1979. The first Royal Netherlands Air Force aircraft was delivered in June 1979. In 1980, the first aircraft were delivered to the Royal Norwegian Air Force by SABCA and to the Royal Danish Air Force by Fokker.
During the late 1980s and 1990s, Turkish Aerospace Industries (TAI) produced 232 Block 30/40/50 F-16s on a production line in Ankara under license for the Turkish Air Force. TAI also produced 46 Block 40s for Egypt in the mid-1990s and 30 Block 50s from 2010. Korean Aerospace Industries opened a production line for the KF-16 program, producing 140 Block 52s from the mid-1990s to the mid-2000s. If India had selected the F-16IN for its Medium Multi-Role Combat Aircraft procurement, a sixth F-16 production line would have been built in India. In May 2013, Lockheed Martin stated there were enough orders to keep producing the F-16 until 2017.
One change made during production was augmented pitch control to avoid deep stall conditions at high angles of attack. The stall issue had been raised during development, but had originally been discounted. Model tests of the YF-16 conducted by the Langley Research Center revealed a potential problem, but no other laboratory was able to duplicate it. YF-16 flight tests were not sufficient to expose the issue; later flight testing on the FSD aircraft demonstrated it was a real concern. In response, the area of the horizontal stabilizer was increased by 25% on the Block 15 aircraft in 1981 and later retrofitted to earlier aircraft. In addition, a manual override switch to disable the horizontal stabilizer flight limiter was prominently placed on the control console, allowing the pilot to regain control of the horizontal stabilizers (which the flight limiters otherwise lock in place) and recover. Besides reducing the risk of deep stalls, the larger horizontal tail also improved stability and permitted faster takeoff rotation.
In the 1980s, the Multinational Staged Improvement Program (MSIP) was conducted to evolve the F-16's capabilities, mitigate risks during technology development, and ensure the aircraft's worth. The program upgraded the F-16 in three stages. The MSIP process permitted the quick introduction of new capabilities, at lower costs and with reduced risks compared to traditional independent upgrade programs. In 2012, the USAF had allocated $2.8 billion to upgrade 350 F-16s while waiting for the F-35 to enter service. One key upgrade has been an auto-GCAS (Ground collision avoidance system) to reduce instances of controlled flight into terrain. Onboard power and cooling capacities limit the scope of upgrades, which often involve the addition of more power-hungry avionics.
Lockheed won many contracts to upgrade foreign operators' F-16s. BAE Systems also offers various F-16 upgrades, receiving orders from South Korea, Oman, Turkey, and the US Air National Guard; BAE lost the South Korean contract due to a price breach in November 2014. In 2012, the USAF assigned the total upgrade contract to Lockheed Martin. Upgrades include Raytheon's Center Display Unit, which replaces several analog flight instruments with a single digital display.
In 2013, sequestration budget cuts cast doubt on the USAF's ability to complete the Combat Avionics Programmed Extension Suite (CAPES), which was tied to secondary programs such as Taiwan's F-16 upgrade. ACC's General Mike Hostage stated that if he only had money for the SLEP (service life extension program) or CAPES, he would fund the SLEP to keep the aircraft flying. Lockheed Martin responded to talk of CAPES cancellation with a fixed-price upgrade package for foreign users. CAPES was not included in the Pentagon's 2015 budget request. The USAF said that the upgrade package would still be offered to the Republic of China Air Force, and Lockheed said that some common elements with the F-35 would keep the radar's unit costs down. In 2014, the USAF issued an RFI for a SLEP of 300 F-16C/Ds.
To make more room for assembly of its newer F-35 Lightning II fighter aircraft, Lockheed Martin moved the F-16 production from Fort Worth, Texas to its plant in Greenville, South Carolina. Lockheed delivered the last F-16 from Fort Worth to the Iraqi Air Force on 14 November 2017, ending forty years of F-16 production there. The company is hoping to finish the Greenville move and restart production in 2019, though engineering and modernization work will remain in Fort Worth. A gap in orders made it possible to stop production during the move; after completing orders for the last Iraqi purchase, the company was negotiating an F-16 sale to Bahrain that would be produced in Greenville. This contract was signed in June 2018.
The F-16 is a single-engine, highly maneuverable, supersonic, multi-role tactical fighter aircraft. It is much smaller and lighter than its predecessors, but uses advanced aerodynamics and avionics, including the first use of a relaxed static stability/fly-by-wire (RSS/FBW) flight control system, to achieve enhanced maneuver performance. Highly agile, the F-16 was the first fighter aircraft purpose-built to pull 9-"g" maneuvers and can reach a maximum speed of over Mach 2. Innovations include a frameless bubble canopy for better visibility, a side-mounted control stick, and a reclined seat to reduce g-force effects on the pilot. It is armed with an internal M61 Vulcan cannon in the left wing root and has multiple locations for mounting various missiles, bombs and pods. It has a thrust-to-weight ratio greater than one, providing power to climb and vertical acceleration.
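A thrust-to-weight ratio greater than one means the engine produces more thrust than the aircraft weighs, so the aircraft can accelerate while pointed straight up. A minimal sketch of the arithmetic, using illustrative values (roughly 127 kN of afterburning thrust and a 12,000 kg combat mass are assumptions for the example, not official F-16 specifications):

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_to_weight(thrust_n, mass_kg):
    """Ratio of engine thrust to aircraft weight; > 1 permits vertical acceleration."""
    return thrust_n / (mass_kg * G0)

# Illustrative values only:
ratio = thrust_to_weight(127_000, 12_000)
print(ratio > 1.0)  # prints True: more thrust than weight
```

Because the ratio depends on mass, the same aircraft can be above or below 1.0 depending on fuel and weapons load, which is why such figures are usually quoted for a specific loadout.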
The F-16 was designed to be relatively inexpensive to build and simpler to maintain than earlier-generation fighters. The airframe is built with about 80% aviation-grade aluminum alloys, 8% steel, 3% composites, and 1.5% titanium. The leading-edge flaps, stabilators, and ventral fins make use of bonded aluminum honeycomb structures and graphite epoxy lamination coatings. The number of lubrication points, fuel line connections, and replaceable modules is significantly lower than preceding fighters; 80% of the access panels can be accessed without stands. The air intake was placed so it was rearward of the nose but forward enough to minimize air flow losses and reduce aerodynamic drag.
Although the LWF program called for a structural life of 4,000 flight hours, capable of achieving 7.33 "g" with 80% internal fuel; GD's engineers decided to design the F-16's airframe life for 8,000 hours and for 9-"g" maneuvers on full internal fuel. This proved advantageous when the aircraft's mission changed from solely air-to-air combat to multi-role operations. Changes in operational use and additional systems have increased weight, necessitating multiple structural strengthening programs.
The F-16 has a cropped-delta wing incorporating wing-fuselage blending and forebody vortex-control strakes; a fixed-geometry, underslung air intake (with splitter plate) to the single turbofan jet engine; a conventional tri-plane empennage arrangement with all-moving horizontal "stabilator" tailplanes; a pair of ventral fins beneath the fuselage aft of the wing's trailing edge; and a tricycle landing gear configuration with the aft-retracting, steerable nose gear deploying a short distance behind the inlet lip. There is a boom-style aerial refueling receptacle located behind the single-piece "bubble" canopy of the cockpit. Split-flap speedbrakes are located at the aft end of the wing-body fairing, and a tailhook is mounted underneath the fuselage. A fairing beneath the rudder often houses ECM equipment or a drag chute. Later F-16 models feature a long dorsal fairing along the fuselage's "spine", housing additional equipment or fuel.
Aerodynamic studies in the 1960s demonstrated that the "vortex lift" phenomenon could be harnessed by highly swept wing configurations to reach higher angles of attack, using leading edge vortex flow off a slender lifting surface. As the F-16 was being optimized for high combat agility, GD's designers chose a slender cropped-delta wing with a leading edge sweep of 40° and a straight trailing edge. To improve maneuverability, a variable-camber wing with a NACA 64A-204 airfoil was selected; the camber is adjusted by leading-edge and trailing edge flaperons linked to a digital flight control system (FCS) regulating the flight envelope. The F-16 has a moderate wing loading, reduced by fuselage lift. The vortex lift effect is increased by leading edge extensions, known as strakes. Strakes act as additional short-span, triangular wings running from the wing root (the juncture with the fuselage) to a point further forward on the fuselage. Blended into the fuselage and along the wing root, the strake generates a high-speed vortex that remains attached to the top of the wing as the angle of attack increases, generating additional lift and allowing greater angles of attack without stalling. Strakes allow a smaller, lower-aspect-ratio wing, which increases roll rates and directional stability while decreasing weight. Deeper wingroots also increase structural strength and internal fuel volume.
Early F-16s could be armed with up to six AIM-9 Sidewinder heat-seeking short-range air-to-air missiles (AAM) by employing rail launchers on each wingtip, as well as radar-guided AIM-7 Sparrow medium-range AAMs in a weapons mix. More recent versions support the AIM-120 AMRAAM. The aircraft can carry various other AAMs, a wide variety of air-to-ground missiles, rockets or bombs; electronic countermeasures (ECM), navigation, targeting or weapons pods; and fuel tanks on nine hardpoints: six under the wings, two on the wingtips, and one under the fuselage. Two other locations under the fuselage are available for sensor or radar pods. The F-16 carries a 20 mm (0.787 in) M61A1 Vulcan cannon for close-range aerial combat and strafing; the cannon is mounted inside the fuselage to the left of the cockpit.
The F-16 is the first production fighter aircraft intentionally designed to be slightly aerodynamically unstable, also known as "relaxed static stability" (RSS), to improve maneuverability. Most aircraft are designed with positive static stability, which induces aircraft to return to straight and level flight attitude if the pilot releases the controls; this reduces maneuverability as the inherent stability has to be overcome. Aircraft with "negative" stability are designed to deviate from controlled flight and thus be more maneuverable. At supersonic speeds the F-16 gains stability (eventually positive) due to aerodynamic changes.
To counter the tendency to depart from controlled flight, and to avoid the need for constant trim inputs by the pilot, the F-16 has a quadruplex (four-channel) fly-by-wire (FBW) flight control system (FLCS). The flight control computer (FLCC) accepts pilot input from the stick and rudder controls and manipulates the control surfaces to produce the desired result without inducing control loss. The FLCC conducts thousands of measurements per second on the aircraft's flight attitude to automatically counter deviations from the pilot-set flight path, leading to a common aphorism among pilots: "You don't fly an F-16; it flies you."
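The interplay between relaxed static stability and a fast corrective control loop can be illustrated with a toy one-state model (purely illustrative; the real FLCS is vastly more complex, and the gains here are invented): an open-loop-unstable state diverges on its own, but a simple proportional feedback law keeps it bounded.

```python
# Toy illustration of stabilizing an open-loop-unstable plant, in the
# spirit of relaxed static stability. All values are hypothetical; this
# is NOT the actual F-16 flight control law.

def simulate(a, k, steps=100, x0=0.1):
    """One-state discrete plant x_next = a*x + u with feedback u = -k*x.

    a > 1 models an aerodynamically unstable mode; the feedback gain k
    plays the role of the FBW computer's corrective commands.
    """
    x = x0
    for _ in range(steps):
        u = -k * x          # corrective command, applied every step
        x = a * x + u       # closed-loop dynamics: x_next = (a - k) * x
    return abs(x)

open_loop = simulate(a=1.05, k=0.0)    # no feedback: state diverges
closed_loop = simulate(a=1.05, k=0.5)  # feedback: state decays to ~0

assert open_loop > 1.0
assert closed_loop < 1e-3
```

The point of the sketch is that stability is moved from the airframe into the control loop: the plant alone is divergent, but the combined system is stable, at the cost of total dependence on the computer running the loop.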
The FLCC further incorporates limiters governing movement in the three main axes based on attitude, airspeed and angle of attack (AOA); these prevent control surfaces from inducing instability such as slips or skids, or a high AOA inducing a stall. The limiters also prevent maneuvers that would exert more than a 9 "g" load. Flight testing has revealed that "assaulting" multiple limiters at high AOA and low speed can result in an AOA far exceeding the 25° limit, colloquially referred to as "departing"; this causes a deep stall, a near-freefall at 50° to 60° AOA, either upright or inverted. At such a high AOA, the aircraft's attitude is stable but the control surfaces are ineffective, and the pitch limiter locks the stabilators at an extreme pitch-up or pitch-down in an attempt to recover. The limiter can be overridden so that the pilot can "rock" the nose via pitch control to recover.
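The limiting behavior described above amounts to clamping the pilot's commands against envelope boundaries before they reach the control surfaces. A minimal sketch, using the published 9 g and 25° AOA figures but otherwise hypothetical values and logic (the real limiter schedules vary with airspeed and attitude):

```python
# Hedged sketch of FLCS-style command limiting. The 9 g load limit and
# 25° AOA limit are the published F-16 figures; the negative-g floor and
# the clamping logic itself are illustrative assumptions.

G_LIMIT = 9.0       # maximum positive load factor, g
NEG_G_FLOOR = -3.0  # assumed negative-g floor (hypothetical value)
AOA_LIMIT = 25.0    # maximum angle of attack, degrees

def limit_command(requested_g, current_aoa):
    """Clamp the pilot's pitch request so neither limit is exceeded."""
    g = max(min(requested_g, G_LIMIT), NEG_G_FLOOR)
    if current_aoa >= AOA_LIMIT:
        # Near the stall boundary: refuse further pitch-up demand.
        g = min(g, 1.0)
    return g

assert limit_command(12.0, 10.0) == 9.0   # over-g request clamped to 9 g
assert limit_command(5.0, 26.0) == 1.0    # past AOA limit: pitch-up refused
assert limit_command(4.0, 10.0) == 4.0    # within limits: passed through
```

The deep-stall scenario in the text corresponds to states the clamp cannot prevent once multiple limits are "assaulted" simultaneously, which is why a manual override to rock the nose exists.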
Unlike the YF-17, which had hydromechanical controls serving as a backup to the FBW, General Dynamics took the innovative step of eliminating mechanical linkages between the control stick and rudder pedals, and the flight control surfaces. The F-16 is entirely reliant on its electrical systems to relay flight commands, instead of traditional mechanically-linked controls, leading to the early moniker of "the electric jet". The quadruplex design permits "graceful degradation" in flight control response in that the loss of one channel renders the FLCS a "triplex" system. The FLCC began as an analog system on the A/B variants, but has been supplanted by a digital computer system beginning with the F-16C/D Block 40. The F-16's controls suffered from a sensitivity to static electricity or electrostatic discharge (ESD). Up to 70–80% of the C/D models' electronics were vulnerable to ESD.
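The "graceful degradation" of a quadruplex system can be sketched as redundant channels cross-checked by a voter: a channel that disagrees grossly with the consensus is masked out, and the remaining three continue to fly the aircraft. The voting scheme below (median select with a fixed disagreement threshold) is an illustrative assumption, not the actual FLCS algorithm.

```python
# Illustrative sketch of quadruplex redundancy voting. The median-select
# scheme and the 1.0 disagreement threshold are assumptions for the
# example, not the real F-16 FLCS design.

def voted_output(channels):
    """Return (consensus command, surviving channel count).

    A channel deviating too far from the median is treated as failed
    and masked; the system degrades from quadruplex toward triplex.
    """
    ordered = sorted(channels)
    mid = len(ordered) // 2
    median = (ordered[mid] if len(ordered) % 2
              else (ordered[mid - 1] + ordered[mid]) / 2)
    survivors = [c for c in channels if abs(c - median) < 1.0]
    return sum(survivors) / len(survivors), len(survivors)

cmd, n = voted_output([2.0, 2.1, 1.9, 2.0])   # all four channels agree
assert n == 4

cmd, n = voted_output([2.0, 2.1, 1.9, 9.9])   # one channel fails hard
assert n == 3                                  # degraded to "triplex"
assert abs(cmd - 2.0) < 0.2                    # consensus is unaffected
```

The design choice illustrated here is why a single channel loss is survivable in an aircraft with no mechanical backup: correctness comes from agreement among the remaining channels, not from any one computer.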
A key feature of the F-16's cockpit is the exceptional field of view. The single-piece, bird-proof polycarbonate bubble canopy provides 360° all-round visibility, with a 40° look-down angle over the side of the aircraft, and 15° down over the nose (compared to the common 12–13° of preceding aircraft); the pilot's seat is elevated for this purpose. Furthermore, the F-16's canopy lacks the forward bow frame found on many fighters, which is an obstruction to a pilot's forward vision. The F-16's ACES II zero/zero ejection seat is reclined at an unusual tilt-back angle of 30°; most fighters have a tilted seat at 13–15°. The tilted seat can accommodate taller pilots and increases G-force tolerance; however it has been associated with reports of neck ache, possibly caused by incorrect head-rest usage. Subsequent U.S. fighters have adopted more modest tilt-back angles of 20°. Due to the seat angle and the canopy's thickness, the ejection seat lacks canopy-breakers for emergency egress; instead the entire canopy is jettisoned prior to the seat's rocket firing.
The pilot flies primarily by means of an armrest-mounted side-stick controller (instead of a traditional center-mounted stick) and an engine throttle; conventional rudder pedals are also employed. To enhance the pilot's degree of control of the aircraft during high-"g" combat maneuvers, various switches and function controls were moved to centralized "hands on throttle-and-stick (HOTAS)" controls upon both the controllers and the throttle. Hand pressure on the side-stick controller is transmitted by electrical signals via the FBW system to adjust various flight control surfaces to maneuver the F-16. Originally the side-stick controller was non-moving, but this proved uncomfortable and difficult for pilots to adjust to, sometimes resulting in a tendency to "over-rotate" during takeoffs, so the control stick was given a small amount of "play". Since introduction on the F-16, HOTAS controls have become a standard feature on modern fighters.
The F-16 has a head-up display (HUD), which projects visual flight and combat information in front of the pilot without obstructing the view; being able to keep their head "out of the cockpit" improves a pilot's situation awareness. Further flight and systems information are displayed on multi-function displays (MFD). The left-hand MFD is the primary flight display (PFD), typically showing radar and moving maps; the right-hand MFD is the system display (SD), presenting information about the engine, landing gear, slat and flap settings, and fuel and weapons status. Initially, the F-16A/B had monochrome cathode ray tube (CRT) displays; these were replaced by color liquid-crystal displays on the Block 50/52. The MLU introduced compatibility with night-vision goggles (NVG). The Boeing Joint Helmet Mounted Cueing System (JHMCS) is available from Block 40 onwards, for targeting based on where the pilot's head faces, unrestricted by the HUD, using high-off-boresight missiles like the AIM-9X.
The F-16A/B was originally equipped with the Westinghouse AN/APG-66 fire-control radar. Its slotted planar array antenna was designed to be compact to fit into the F-16's relatively small nose. In uplook mode, the APG-66 uses a low pulse-repetition frequency (PRF) for medium- and high-altitude target detection in a low-clutter environment, and in look-down/shoot-down mode it employs a medium PRF for heavy-clutter environments. It has four operating frequencies within the X band, and provides four air-to-air and seven air-to-ground operating modes for combat, even at night or in bad weather. The Block 15's APG-66(V)2 model added more powerful signal processing, higher output power, improved reliability and increased range in cluttered or jamming environments. The Mid-Life Update (MLU) program introduced a new model, the APG-66(V)2A, which features higher speed and more memory.
The AN/APG-68, an evolution of the APG-66, was introduced with the F-16C/D Block 25. The APG-68 has greater range and resolution, as well as 25 operating modes, including ground-mapping, Doppler beam-sharpening, ground moving target indication, sea target, and track while scan (TWS) for up to 10 targets. The Block 40/42's APG-68(V)1 model added full compatibility with Lockheed Martin Low-Altitude Navigation and Targeting Infra-Red for Night (LANTIRN) pods, and a high-PRF pulse-Doppler track mode to provide Interrupted Continuous Wave guidance for semi-active radar-homing (SARH) missiles like the AIM-7 Sparrow. Block 50/52 F-16s initially used the more reliable APG-68(V)5, which has a programmable signal processor employing Very-High-Speed Integrated Circuit (VHSIC) technology. The Advanced Block 50/52 (or 50+/52+) are equipped with the APG-68(V)9 radar, with a 30% greater air-to-air detection range and a synthetic aperture radar (SAR) mode for high-resolution mapping and target detection-recognition. In August 2004, Northrop Grumman was contracted to upgrade the APG-68 radars of Block 40/42/50/52 aircraft to the (V)10 standard, providing all-weather autonomous detection and targeting for Global Positioning System (GPS)-aided precision weapons, SAR mapping and terrain-following (TF) radar modes, as well as interleaving of all modes.
The F-16E/F is outfitted with Northrop Grumman's AN/APG-80 active electronically scanned array (AESA) radar. Northrop Grumman developed the latest AESA radar upgrade for the F-16 (selected for USAF and Republic of China Air Force F-16 upgrades), named the APG-83 Scalable Agile Beam Radar (SABR). In July 2007, Raytheon announced that it was developing a Next Generation Radar (RANGR) based on its earlier AN/APG-79 AESA radar as a competitor to Northrop Grumman's AN/APG-68 and AN/APG-80 for the F-16. On February 28, 2020, Northrop Grumman received an order from the USAF to equip with the APG-83 the F-16s whose service lives are being extended to at least 2048 under the service-life extension program (SLEP).
The initial powerplant selected for the single-engined F-16 was the Pratt & Whitney F100-PW-200 afterburning turbofan, a modified version of the F-15's F100-PW-100, rated at 23,830 lbf (106.0 kN) thrust. During testing, the engine was found to be prone to compressor stalls and "rollbacks", wherein the engine's thrust would spontaneously reduce to idle. Until resolved, the Air Force ordered F-16s to be operated within "dead-stick landing" distance of their bases. It was the standard F-16 engine through the Block 25, except for the newly built Block 15s with the Operational Capability Upgrade (OCU). The OCU introduced the 23,770 lbf (105.7 kN) F100-PW-220, later installed on Block 32 and 42 aircraft: the main advance being a Digital Electronic Engine Control (DEEC) unit, which improved reliability and reduced stall occurrence. Beginning production in 1988, the "-220" also supplanted the F-15's "-100", for commonality. Many of the "-220" engines on Block 25 and later aircraft were upgraded from 1997 onwards to the "-220E" standard, which enhanced reliability and maintainability; unscheduled engine removals were reduced by 35%.
The F100-PW-220/220E was the result of the USAF's Alternate Fighter Engine (AFE) program (colloquially known as "the Great Engine War"), which also saw the entry of General Electric as an F-16 engine provider. Its F110-GE-100 turbofan was initially limited by the original inlet to a thrust of 25,735 lbf (114.5 kN); the Modular Common Inlet Duct allowed the F110 to achieve its maximum thrust of 28,984 lbf (128.9 kN). (To distinguish between aircraft equipped with these two engines and inlets, from the Block 30 series on, blocks ending in "0" (e.g., Block 30) are powered by GE, and blocks ending in "2" (e.g., Block 32) are fitted with Pratt & Whitney engines.)
The Increased Performance Engine (IPE) program led to the 29,588 lbf (131.6 kN) F110-GE-129 on the Block 50 and 29,160 lbf (129.4 kN) F100-PW-229 on the Block 52. F-16s began flying with these IPE engines in the early 1990s. Altogether, of the 1,446 F-16C/Ds ordered by the USAF, 556 were fitted with F100-series engines and 890 with F110s. The United Arab Emirates’ Block 60 is powered by the General Electric F110-GE-132 turbofan with a maximum thrust of 32,500 lbf (144.6 kN), the highest thrust engine developed for the F-16.
F-16s have participated in numerous conflicts, most of them in the Middle East.
The F-16 is used by active duty USAF, Air Force Reserve, and Air National Guard units; by the USAF's aerial demonstration team, the Thunderbirds; and as an adversary-aggressor aircraft by the United States Navy at the Naval Strike and Air Warfare Center.
The U.S. Air Force, including the Air Force Reserve and the Air National Guard, flew the F-16 in combat during Operation Desert Storm in 1991 and in the Balkans later in the 1990s. F-16s also patrolled the no-fly zones in Iraq during Operations Northern Watch and Southern Watch and served during the wars in Afghanistan (Operation Enduring Freedom) and Iraq (Operation Iraqi Freedom) from 2001 and 2003 respectively. In 2011, Air Force F-16s took part in the intervention in Libya.
The F-16 had been scheduled to remain in service with the U.S. Air Force until 2025. Its replacement was planned to be the F-35A variant of the Lockheed Martin F-35 Lightning II, which is expected to gradually begin replacing several multi-role aircraft among the program's member nations. However, due to delays in the F-35 program, all USAF F-16s will receive service life extension upgrades.
The F-16's first air-to-air combat success was achieved by the Israeli Air Force (IAF) over the Bekaa Valley on 28 April 1981, against a Syrian Mi-8 helicopter, which was downed with cannon fire. On 7 June 1981, eight Israeli F-16s, escorted by six F-15s, executed Operation Opera, their first employment in a significant air-to-ground operation. This raid severely damaged Osirak, an Iraqi nuclear reactor under construction near Baghdad, to prevent the regime of Saddam Hussein from using the reactor for the creation of nuclear weapons.
The following year, during the 1982 Lebanon War Israeli F-16s engaged Syrian aircraft in one of the largest air battles involving jet aircraft, which began on 9 June and continued for two more days. Israeli Air Force F-16s were credited with 44 air-to-air kills during the conflict.
In January 2000, Israel completed a purchase of 102 new F-16I aircraft in a deal totaling $4.5 billion. F-16s were also used in their ground-attack role for strikes against targets in Lebanon. IAF F-16s participated in the 2006 Lebanon War and the 2008–09 Gaza War. During and after the 2006 Lebanon war, IAF F-16s shot down Iranian-made UAVs launched by Hezbollah, using Rafael Python 5 air-to-air missiles.
On 10 February 2018, an Israeli Air Force F-16I was shot down in northern Israel when it was hit by a relatively old model S-200 (NATO name SA-5 Gammon) surface-to-air missile of the Syrian Air Defense Force. The pilot and navigator ejected safely in Israeli territory. The F-16I was part of a bombing mission against Syrian and Iranian targets around Damascus after an Iranian drone entered Israeli air space and was shot down. On 27 February 2018, an Israel Air Force investigation concluded that the loss was due to pilot error, as the aircrew had not adequately defended themselves against the missile.
During the Soviet–Afghan War, between May 1986 and January 1989, Pakistan Air Force F-16s shot down at least eight intruders from Afghanistan. The first three of these (two Afghan Su-22s and one An-26) were shot down by two pilots. Pakistani pilots also downed five other intruders (two Su-22s, two MiG-23s, and one Su-25). Most of these kills were by AIM-9 Sidewinder missiles, but at least one, an Su-22, was destroyed by cannon fire. Flight Lieutenant Khalid Mahmoud is credited with three of these kills. One F-16 was lost in these battles during an encounter between two F-16s and four Soviet Air Force MiG-23s on 29 April 1987; the pilot ejected safely. The downed F-16 was likely hit accidentally by a Sidewinder fired by the other F-16.
On 7 June 2002, a Pakistan Air Force F-16B Block 15 (S. No. 82-605), flown by Sqn. Leader Zulfiqar, shot down an Indian Air Force unmanned aerial vehicle, an Israeli-made Searcher II, using an AIM-9L Sidewinder missile, during a night interception near Lahore, thus achieving a rare air-to-air kill of a drone at night.
The Pakistan Air Force has used its F-16s in various foreign and internal military exercises, such as the "Indus Vipers" exercise in 2008 conducted jointly with Turkey.
From May 2009 onward, the PAF F-16 fleet flew more than 5,500 sorties in support of the Pakistan Army's operations against the Taliban insurgency in the FATA region of North-West Pakistan. More than 80% of the dropped munitions were laser-guided bombs.
On 27 February 2019, two Pakistan Air Force F-16s reportedly shot down an Indian Air Force MiG-21 Bison over Kashmir using AIM-120C AMRAAM missiles. The two aircraft, both from No. 11 'Arrows' Squadron, were an F-16AM Block 15 MLU (S. No. 92731) flown by Wg. Cdr. Nauman Ali Khan, Officer Commanding (OC) of No. 29 'Aggressor' Squadron, and an F-16BM Block 15 MLU (S. No. 92606) flown by Sqn. Ldr. Hassan Mehmood Siddiqui. The MiG-21, from No. 51 Squadron, was flown by that squadron's OC, Wg. Cdr. Abhinandan Varthaman. Pakistani media also stated that an IAF Su-30MKI was shot down, but failed to provide credible proof; the only confirmed loss from the engagement was the MiG-21.
India claimed that the Indian MiG-21, minutes before being downed, had itself shot down a Pakistani F-16; the alleged wreckage fell in Pakistan-administered Kashmir, making the claim difficult to verify. Pakistan denied the use or loss of any F-16 during the engagement. On 28 February 2019, India displayed debris of an AMRAAM missile to show that F-16s had been used in the mission. On 8 April 2019, the IAF reiterated that it shot down a PAF F-16 in the February engagement and presented radar images in support of its claim.
The Turkish Air Force acquired its first F-16s in 1987. Turkish F-16s participated in operations over Bosnia-Herzegovina and Kosovo from 1993 in support of United Nations resolutions.
On 18 June 1992, a Greek Mirage F-1 crashed during a dogfight with a Turkish F-16. On 8 February 1995, a Turkish F-16 crashed into the Aegean after being intercepted by Greek Mirage F1 fighters.
On 8 October 1996, seven months after the escalation over Imia, a Greek Mirage 2000 reportedly fired an R.550 Magic II missile and shot down a Turkish F-16D over the Aegean Sea. The Turkish pilot died, while the co-pilot ejected and was rescued by Greek forces. In August 2012, after the downing of an RF-4E on the Syrian coast, Turkish Defence Minister İsmet Yılmaz confirmed that the Turkish F-16D had been shot down by a Greek Mirage 2000 with an R.550 Magic II in 1996 after violating Greek airspace near Chios island. Greece denies that the F-16 was shot down. Both Mirage 2000 pilots reported that the F-16 caught fire and that they saw one parachute.
On 23 May 2006, two Greek F-16s intercepted a Turkish RF-4 reconnaissance aircraft and two F-16 escorts off the coast of the Greek island of Karpathos, within the Athens FIR. A mock dogfight ensued between the two sides, resulting in a midair collision between a Turkish F-16 and a Greek F-16. The Turkish pilot ejected safely, but the Greek pilot died due to damage caused by the collision. Five days before the incident, a Turkish F-16 pilot had performed dangerous maneuvers while being intercepted by Greek F-16s, reportedly attempting to hit a Greek fighter.
Turkey used its F-16s extensively in its conflict with separatist Kurds in southeastern Turkey and in Iraq. Turkey launched its first cross-border raid, involving 50 fighters, on 16 December 2007 as a prelude to Operation Sun, the 2008 Turkish incursion into northern Iraq. This was the first time Turkey had mounted a night-bombing operation on such a massive scale, and it was also the largest operation conducted by the Turkish Air Force.
During the Syrian Civil War, Turkish F-16s were tasked with airspace protection on the Syrian border. After the RF-4 downing in June 2012, Turkey changed its rules of engagement against Syrian aircraft, resulting in scrambles and downings of Syrian combat aircraft. On 16 September 2013, a Turkish Air Force F-16 shot down a Syrian Arab Air Force Mil Mi-17 helicopter in Latakia province near the Turkish border. On 23 March 2014, a Turkish Air Force F-16 shot down a Syrian Arab Air Force Mikoyan-Gurevich MiG-23 when it allegedly entered Turkish airspace during a ground attack mission against Al Qaeda-linked insurgents. On 16 May 2015, two Turkish Air Force F-16s shot down a Syrian Mohajer-4 UAV with two AIM-9 missiles after it intruded into Turkish airspace for five minutes. A Turkish Air Force F-16 shot down a Russian Air Force Sukhoi Su-24 on the Turkey–Syria border on 24 November 2015.
On 1 March 2020, two Syrian Sukhoi Su-24s were shot down by Turkish Air Force F-16s using air-to-air missiles over Syria's Idlib province; all four crew members ejected safely. On 3 March 2020, a Syrian Arab Air Force L-39 combat trainer was shot down by a Turkish F-16 over Idlib province; the pilot died.
On 16 February 2015, Egyptian F-16s struck jihadi weapons caches and training camps in Libya in retaliation for the murder of 21 Egyptian Coptic Christian construction workers by masked militants affiliated with the Islamic State (ISIS). The air strikes killed 64 ISIS fighters, including three leaders in Derna and Sirte on the coast.
The Royal Netherlands Air Force, Belgian Air Force, Royal Danish Air Force, Royal Norwegian Air Force, and Venezuelan Air Force have flown the F-16 on combat missions.
A Yugoslavian MiG-29 was shot down by a Dutch F-16AM during the Kosovo War in 1999. Belgian and Danish F-16s also participated in joint operations over Kosovo during the war. Dutch, Belgian, Danish, and Norwegian F-16s were deployed during the 2011 intervention in Libya and in Afghanistan. In Libya, Norwegian F-16s dropped almost 550 bombs and flew 596 missions, some 17% of the total strike missions, including the bombing of Muammar Gaddafi's headquarters.
The Royal Moroccan Air Force and the Royal Bahraini Air Force each lost a single F-16C to Houthi anti-aircraft fire during the Saudi Arabian-led intervention in Yemen, on 11 May 2015 and 30 December 2015 respectively.
In late March 2018, Croatia announced its intention to purchase 12 used Israeli F-16C/D "Barak"/"Brakeet" jets, pending U.S. approval. Acquiring these F-16s would allow Croatia to retire its aging MiG-21s.
On 11 July 2018, Slovakia's government approved the purchase of 14 Block 70/72 F-16s to replace its aging fleet of Soviet-made MiG-29s. A contract was signed on 12 December 2018 in Bratislava.
F-16 models are identified by increasing block numbers, which denote upgrades. The blocks cover both single- and two-seat versions. A variety of software, hardware, systems, weapons compatibility, and structural enhancements have been instituted over the years to gradually upgrade production models and retrofit delivered aircraft.
While many F-16s were produced according to these block designs, there have been many other variants with significant changes, usually due to modification programs. Other changes have resulted in role-specialization, such as the close air support and reconnaissance variants. Several models were also developed to test new technology. The F-16 design also inspired the design of other aircraft, which are considered derivatives. Older F-16s are being converted into QF-16 drone targets.
By July 2010 there had been 4,500 F-16s delivered.
The F-16 has been involved in over 670 hull-loss accidents as of January 2020.
First Council of Constantinople
The First Council of Constantinople, commonly known as the Second Ecumenical Council, was a council of Christian bishops convened in Constantinople in AD 381 by the Roman Emperor Theodosius I. This second ecumenical council, an effort to attain consensus in the church through an assembly representing all of Christendom except for the Western Church, confirmed the Nicene Creed, expanding the doctrine thereof to produce the Niceno-Constantinopolitan Creed, and dealt with sundry other matters. It met from May to July 381 in the Church of Hagia Irene and was affirmed as ecumenical in 451 at the Council of Chalcedon.
When Theodosius ascended to the imperial throne in 380, he began a campaign to bring the Eastern Church back to Nicene Christianity. Theodosius wanted to further unify the entire empire behind the orthodox position and decided to convene a church council to resolve matters of faith and discipline. Gregory Nazianzus was of similar mind, wishing to unify Christianity. In the spring of 381 they convened the Second Ecumenical Council in Constantinople.
The Council of Nicaea in 325 had not ended the Arian controversy which it had been called to clarify. Arius and his sympathizers, e.g. Eusebius of Nicomedia, were admitted back into the church after ostensibly accepting the Nicene creed. Athanasius, bishop of Alexandria, the most vocal opponent of Arianism, was ultimately exiled through the machinations of Eusebius of Nicomedia. After the death of Constantine I in 337 and the accession of his Arian-leaning son Constantius II, open discussion of replacing the Nicene creed itself began. Up until about 360, theological debates mainly dealt with the divinity of the Son, the second person of the Trinity. However, because the Council of Nicaea had not clarified the divinity of the Holy Spirit, the third person of the Trinity, it became a topic of debate. The Macedonians, whose position was also known as Pneumatomachianism, denied the divinity of the Holy Spirit.
Nicene Christianity also had its defenders: apart from Athanasius, the Cappadocian Fathers' Trinitarian discourse was influential in the council at Constantinople. Apollinaris of Laodicea, another pro-Nicene theologian, proved controversial. Possibly in an over-reaction to Arianism and its teaching that Christ was not God, he taught that Christ consisted of a human body and a divine mind, rejecting the belief that Christ had a complete human nature, including a human mind. He was charged with confounding the persons of the Godhead, and with giving in to the heretical ways of Sabellius. Basil of Caesarea accused him of abandoning the literal sense of the scripture, and taking up wholly with the allegorical sense. His views were condemned in a Synod at Alexandria, under Athanasius of Alexandria, in 362, and later subdivided into several different heresies, the main ones of which were the Polemians and the Antidicomarianites.
Theodosius' strong commitment to Nicene Christianity involved a calculated risk because Constantinople, the imperial capital of the Eastern Empire, was solidly Arian. To complicate matters, the two leading factions of Nicene Christianity in the East, the Alexandrians and the supporters of Meletius in Antioch, were "bitterly divided ... almost to the point of complete animosity".
The bishops of Alexandria and Rome had worked over a number of years to keep the see of Constantinople from stabilizing. Thus, when Gregory was selected as a candidate for the bishopric of Constantinople, both Alexandria and Rome opposed him because of his Antiochene background.
The incumbent bishop of Constantinople was Demophilus, a Homoian Arian. On his accession to the imperial throne, Theodosius offered to confirm Demophilus as bishop of the imperial city on the condition of accepting the Nicene Creed; however, Demophilus refused to abandon his Arian beliefs, and was immediately ordered to give up his churches and leave Constantinople. After forty years under the control of Arian bishops, the churches of Constantinople were now restored to those who subscribed to the Nicene Creed; Arians were also ejected from the churches of other cities in the Eastern Roman Empire thus re-establishing Christian orthodoxy in the East.
There ensued a contest to control the newly recovered see. A group led by Maximus the Cynic gained the support of Patriarch Peter of Alexandria by playing on his jealousy of the newly created see of Constantinople. They conceived a plan to install a cleric subservient to Peter as bishop of Constantinople so that Alexandria would retain the leadership of the Eastern Churches. Many commentators characterize Maximus as having been proud, arrogant and ambitious. However, it is not clear the extent to which Maximus sought this position due to his own ambition or if he was merely a pawn in the power struggle. In any event, the plot was set into motion when, on a night when Gregory was confined by illness, the conspirators burst into the cathedral and commenced the consecration of Maximus as bishop of Constantinople. They had seated Maximus on the archiepiscopal throne and had just begun shearing away his long curls when the day dawned. The news of what was transpiring quickly spread and everybody rushed to the church. The magistrates appeared with their officers; Maximus and his consecrators were driven from the cathedral, and ultimately completed the tonsure in the tenement of a flute-player.
The news of the brazen attempt to usurp the episcopal throne aroused the anger of the local populace among whom Gregory was popular. Maximus withdrew to Thessalonica to lay his cause before the emperor but met with a cold reception there. Theodosius committed the matter to Ascholius, the much respected bishop of Thessalonica, charging him to seek the counsel of Pope Damasus I.
Damasus' response repudiated Maximus summarily and advised Theodosius to summon a Council of Bishops for the purpose of settling various Church issues such as the schism in Antioch and the consecration of a proper bishop for the see of Constantinople. Damasus condemned the translation of bishops from one see to another and urged Theodosius to "take care that a bishop who is above reproach is chosen for that see."
Thirty-six Pneumatomachians arrived but were denied admission to the council when they refused to accept the Nicene creed.
Since Peter, the bishop of Alexandria, was not present, the presidency over the Council was given to Meletius as bishop of Antioch. The first order of business before the Council was to declare the clandestine consecration of Maximus invalid, and to confirm Theodosius' installation of Gregory Nazianzus as Bishop of Constantinople. When Meletius died shortly after the opening of the council, Gregory was selected to lead the Council.
The Egyptian and Macedonian bishops who had supported Maximus's ordination arrived late for the Council. Once there, they refused to recognise Gregory's position as head of the church of Constantinople, arguing that his transfer from the See of Sasima was canonically illegitimate because one of the canons of the Council of Nicaea had forbidden bishops to transfer from their sees.
McGuckin describes Gregory as physically exhausted and worried that he was losing the confidence of the bishops and the emperor. Ayres goes further and asserts that Gregory quickly made himself unpopular among the bishops by supporting the losing candidate for the bishopric of Antioch and vehemently opposing any compromise with the Homoiousians.
Rather than press his case and risk further division, Gregory decided to resign his office: "Let me be as the Prophet Jonah! I was responsible for the storm, but I would sacrifice myself for the salvation of the ship. Seize me and throw me... I was not happy when I ascended the throne, and gladly would I descend it." He shocked the Council with his surprise resignation and then delivered a dramatic speech to Theodosius asking to be released from his offices. The emperor, moved by his words, applauded, commended his labor, and granted his resignation. The Council asked him to appear once more for a farewell ritual and celebratory orations. Gregory used this occasion to deliver a final address (Or. 42) and then departed.
Nectarius, an unbaptized civil official, was chosen to succeed Gregory as president of the council.
Seven canons, four of these doctrinal canons and three disciplinary canons, are attributed to the Council and accepted by both the Eastern Orthodox Church and the Oriental Orthodox Churches; the Roman Catholic Church accepts only the first four because only the first four appear in the oldest copies and there is evidence that the last three were later additions.
The first canon is an important dogmatic condemnation of all shades of Arianism, and also of Macedonianism and Apollinarianism.
The second canon renewed the Nicene legislation imposing upon the bishops the observance of diocesan and patriarchal limits.
The third canon reads: "The Bishop of Constantinople, however, shall have the prerogative of honour after the Bishop of Rome, because Constantinople is New Rome."
The fourth canon decreed the consecration of Maximus as Bishop of Constantinople to be invalid, declaring "that [Maximus] neither was nor is a bishop, nor are they who have been ordained by him in any rank of the clergy". This canon was directed not only against Maximus, but also against the Egyptian bishops who had conspired to consecrate him clandestinely at Constantinople, and against any subordinate ecclesiastics that he might have ordained in Egypt.
The fifth canon might actually have been passed the next year, 382, and is in regard to a "Tome" of the Western bishops, perhaps that of Pope Damasus I.
The sixth canon might belong to the year 382 as well and was subsequently passed at the Quinisext Council as canon 95. It limits the ability to accuse bishops of wrongdoing.
The seventh canon regards procedures for receiving certain heretics into the church.
The third canon was a first step in the rising importance of the new imperial capital, just fifty years old, and was notable in that it demoted the patriarchs of Antioch and Alexandria. Jerusalem, as the site of the first Church, retained its place of honor.
Baronius asserted that the third canon was not authentic, not in fact decreed by the council. Some medieval Greeks maintained that it declared not the supremacy of the Bishop of Rome, but his primacy as "the first among equals", similar to how they today view the Bishop of Constantinople. Throughout the next several centuries, the Western Church asserted that the Bishop of Rome had supreme authority, and by the time of the Great Schism the Roman Catholic Church based its claim to supremacy on the succession of St. Peter. When the First Council of Constantinople was approved, Rome protested the diminished honor to be afforded the bishops of Antioch and Alexandria. The status of these Eastern patriarchs would be brought up again by the papal legates at the Council of Chalcedon. Pope Leo the Great declared that this canon had never been submitted to Rome and that the patriarchs' lessened honor was a violation of the order established at the Council of Nicaea. At the Fourth Council of Constantinople (869), the Roman legates asserted the precedence of the bishop of Rome's honor over that of the bishop of Constantinople.
After the Great Schism of 1054, in 1215 the Fourth Lateran Council declared, in its fifth canon, that the Roman Church "by the will of God holds over all others pre-eminence of ordinary power as the mother and mistress of all the faithful". Roman supremacy over the whole world was formally claimed by the new Latin patriarch. The Roman "correctores" of Gratian inserted the words: "canon hic ex iis est quos apostolica Romana sedes a principio et longo post tempore non recipit" ("this canon is one of those that the Apostolic See of Rome has not accepted from the beginning and ever since").
It has been asserted by many that a synod was held by Pope Damasus I in the following year (382) which opposed the disciplinary canons of the Council of Constantinople, especially the third canon which placed Constantinople above Alexandria and Antioch. The synod protested against this raising of the bishop of the new imperial capital, just fifty years old, to a status higher than that of the bishops of Alexandria and Antioch, and stated that the primacy of the Roman see had not been established by a gathering of bishops but rather by Christ himself. Thomas Shahan says that, according to Photius too, Pope Damasus approved the council, but he adds that, if any part of the council were approved by this pope, it could have been only its revision of the Nicene Creed, as was the case also when Gregory the Great recognized it as one of the four general councils, but only in its dogmatic utterances.
Traditionally, the Niceno-Constantinopolitan Creed has been associated with the Council of Constantinople (381). It is roughly equivalent to the Nicene Creed plus two additional articles: an article on the Holy Spirit—describing Him as "the Lord, the Giver of Life, Who proceeds from the Father, Who with the Father and the Son is worshipped and glorified, and Who spoke through the prophets"—and an article about the Church, baptism, and the resurrection of the dead. (For the full text of both creeds, see Comparison between Creed of 325 and Creed of 381.)
However, scholars are not agreed on the connection between the Council of Constantinople and the Niceno–Constantinopolitan Creed. Some modern scholars believe that this creed, or something close to it, was stated by the bishops at Constantinople, but not promulgated as an official act of the council. Scholars also dispute whether this creed was simply an expansion of the Creed of Nicaea, or whether it was an expansion of another traditional creed similar but not identical to the one from Nicaea. In 451, the Council of Chalcedon referred to this creed as "the creed ... of the 150 saintly fathers assembled in Constantinople", indicating that this creed was associated with Constantinople (381) no later than 451.
This council condemned Arianism, which began to die out after further condemnations at a council of Aquileia by Ambrose of Milan in 381. With Trinitarian doctrine now developed and broadly settled into an orthodox and biblical understanding, the focus of discussion shifted to Christology, which would be the topic of the Council of Ephesus of 431 and the Council of Chalcedon of 451.
David Eastman cites the First Council of Constantinople as another example of the waning influence of Rome over the East. He notes that all three of the presiding bishops came from the East. Damasus had considered both Meletius and Gregory to be illegitimate bishops of their respective sees and yet, as Eastman and others point out, the Eastern bishops paid no heed to his opinions in this regard.
The First Council of Constantinople (381) was the first appearance of the term 'New Rome' in connection to Constantinople. The term was employed as the grounds for giving the relatively young church of Constantinople precedence over Alexandria and Antioch ('because it is the New Rome').
The 150 individuals at the council are commemorated in the Calendar of saints of the Armenian Apostolic Church on February 17.
The Eastern Orthodox Church in some places (e.g. Russia) has a feast day for the Fathers of the First Six Ecumenical Councils on the Sunday nearest to July 13 and on May 22. | https://en.wikipedia.org/wiki?curid=11643 |
Friedrich Hayek
Friedrich August von Hayek ( , ; 8 May 1899 – 23 March 1992), often referred to by his initials F. A. Hayek, was an Austrian-British economist and philosopher who is best known for his defence of classical liberalism. Hayek shared the 1974 Nobel Memorial Prize in Economic Sciences with Gunnar Myrdal for his "pioneering work in the theory of money and economic fluctuations and [...] penetrating analysis of the interdependence of economic, social and institutional phenomena". His account of how changing prices communicate information that helps individuals co-ordinate their plans is widely regarded as an important achievement in economics, leading to his Nobel Prize.
Hayek served in World War I during his teenage years and said that this experience in the war and his desire to help avoid the mistakes that had led to the war drew him into economics. At the University of Vienna, he studied economics, eventually receiving his doctoral degrees in law in 1921 and in political science in 1923. He subsequently lived and worked in Austria, Great Britain, the United States, and Germany; he became a British subject in 1938. Hayek's academic life was mostly spent at the London School of Economics, the University of Chicago, and the University of Freiburg. Although he is widely considered as a leader of the Austrian School of Economics, he also had close connections with the Chicago School of Economics. Hayek was also a major social theorist and political philosopher of the 20th century. His most notable work, "The Road to Serfdom", has sold over 2 million copies (as of 2010).
Hayek was appointed a Companion of Honour in 1984 for "services to the study of economics". He was the first recipient of the Hanns Martin Schleyer Prize in 1984. He also received the Presidential Medal of Freedom in 1991 from President George H. W. Bush. In 2011, his article "The Use of Knowledge in Society" was selected as one of the top 20 articles published in "The American Economic Review" during its first 100 years.
Friedrich August von Hayek was born in Vienna to August von Hayek and Felicitas Hayek ("née" von Juraschek). His father, from whom he received his middle name, was born in 1871, also in Vienna. He was a medical doctor employed by the municipal ministry of health with a passion for botany, about which he wrote a number of monographs. August von Hayek was also a part-time botany lecturer at the University of Vienna. His mother was born in 1875 to a wealthy, conservative, land-owning family. As her mother died several years prior to Hayek's birth, Felicitas received a significant inheritance, which provided as much as half of her and her husband's income during the early years of their marriage. Hayek was the oldest of three brothers; Heinrich (1900–1969) and Erich (1904–1986) were one-and-a-half and five years younger than he was.
His father's career as a university lecturer influenced Hayek's goals later in life. Both of his grandfathers, who lived long enough for Hayek to know them, were scholars. His maternal grandfather, Franz von Juraschek, was a leading economist in Austria-Hungary and a close friend of Eugen Böhm von Bawerk, one of the founders of the Austrian School of Economics. Hayek's paternal grandfather, Gustav Edler von Hayek, taught natural sciences at the Imperial "Realobergymnasium" (secondary school) in Vienna. He wrote works in the field of biological systematics, some of which are relatively well known.
On his mother's side, Hayek was second cousin to the philosopher Ludwig Wittgenstein. His mother often played with Wittgenstein's sisters and had known him well. As a result of their family relationship, Hayek became one of the first to read Wittgenstein's "Tractatus Logico-Philosophicus" when the book was published in its original German edition in 1921. Although he met Wittgenstein on only a few occasions, Hayek said that Wittgenstein's philosophy and methods of analysis had a profound influence on his own life and thought. In his later years, Hayek recalled a discussion of philosophy with Wittgenstein when both were officers during World War I. After Wittgenstein's death, Hayek had intended to write a biography of Wittgenstein and worked on collecting family materials and later assisted biographers of Wittgenstein. He was related to Wittgenstein on the non-Jewish side of the Wittgenstein family. Since his youth, Hayek frequently socialized with Jewish intellectuals and he mentions that people often speculated whether he was also of Jewish ancestry. That made him curious, so he spent some time researching his ancestors and found out that he has no Jewish ancestors within five generations. The surname Hayek uses the German spelling of the Czech surname Hájek.
Hayek displayed an intellectual and academic bent from a very young age. He read fluently and frequently before going to school. At his father's suggestion, as a teenager he read the genetic and evolutionary works of Hugo de Vries and August Weismann and the philosophical works of Ludwig Feuerbach. In school, Hayek was much taken by one instructor's lectures on Aristotle's ethics. In his unpublished autobiographical notes, Hayek recalled a division between himself and his younger brothers: though they were only a few years younger, he believed they were somehow of a different generation. He preferred to associate with adults.
In 1917, Hayek joined an artillery regiment in the Austro-Hungarian Army and fought on the Italian front. Much of Hayek's combat experience was spent as a spotter in an aeroplane. Hayek suffered damage to his hearing in his left ear during the war and was decorated for bravery. During this time, Hayek also survived the 1918 flu pandemic.
Hayek then decided to pursue an academic career, determined to help avoid the mistakes that had led to the war. Hayek said of his experience: "The decisive influence was really World War I. It's bound to draw your attention to the problems of political organization". He vowed to work for a better world.
At the University of Vienna, Hayek earned doctorates in law and political science in 1921 and 1923 respectively and also studied philosophy, psychology and economics. For a short time, when the University of Vienna closed he studied in Constantin von Monakow's Institute of Brain Anatomy, where Hayek spent much of his time staining brain cells. Hayek's time in Monakow's lab and his deep interest in the work of Ernst Mach inspired his first intellectual project, eventually published as "The Sensory Order" (1952). It located connective learning at the physical and neurological levels, rejecting the "sense data" associationism of the empiricists and logical positivists. Hayek presented his work to the private seminar he had created with Herbert Furth called the Geistkreis.
During Hayek's years at the University of Vienna, Carl Menger's work on the explanatory strategy of social science and Friedrich von Wieser's commanding presence in the classroom left a lasting influence on him. Upon the completion of his examinations, Hayek was hired by Ludwig von Mises on the recommendation of Wieser as a specialist for the Austrian government working on the legal and economic details of the Treaty of Saint Germain. Between 1923 and 1924, Hayek worked as a research assistant to Professor Jeremiah Jenks of New York University, compiling macroeconomic data on the American economy and the operations of the Federal Reserve.
Initially sympathetic to Wieser's democratic socialism, Hayek's economic thinking shifted away from socialism and toward the classical liberalism of Carl Menger after reading von Mises' book "Socialism". It was sometime after reading "Socialism" that Hayek began attending von Mises' private seminars, joining several of his university friends, including Fritz Machlup, Alfred Schutz, Felix Kaufmann and Gottfried Haberler, who were also participating in Hayek's own more general and private seminar. It was during this time that he also encountered and befriended noted political philosopher Eric Voegelin, with whom he retained a long-standing relationship.
With the help of Mises, in the late 1920s he founded and served as director of the Austrian Institute for Business Cycle Research before joining the faculty of the London School of Economics (LSE) in 1931 at the behest of Lionel Robbins. Upon his arrival in London, Hayek was quickly recognised as one of the leading economic theorists in the world and his development of the economics of processes in time and the co-ordination function of prices inspired the ground-breaking work of John Hicks, Abba P. Lerner and many others in the development of modern microeconomics.
In 1932, Hayek argued in an exchange of letters with John Maynard Keynes in "The Times", co-signed with Lionel Robbins and others, that private investment in the public markets was a better road to wealth and economic co-ordination in Britain than government spending programs. The nearly decade-long deflationary depression in Britain, dating from Winston Churchill's decision in 1925 to return Britain to the gold standard at the old pre-war and pre-inflationary par, was the public policy backdrop for Hayek's dissenting engagement with Keynes over British monetary and fiscal policy. Well beyond that single public exchange, Hayek and Keynes disagreed on many essential economic matters, from the economics of extending the length of production to the economics of labour inputs; their disagreements were both practical and fundamental in nature. Keynes called Hayek's book "Prices and Production" "one of the most frightful muddles I have ever read", famously adding: "It is an extraordinary example of how, starting with a mistake, a remorseless logician can end in Bedlam". Hayek's last book, "The Fatal Conceit" (1988), may actually have been written entirely by its editor W. W. Bartley III and not by Hayek. | https://en.wikipedia.org/wiki?curid=11646
Fred Brooks
Frederick Phillips "Fred" Brooks Jr. (born April 19, 1931) is an American computer architect, software engineer, and computer scientist, best known for managing the development of IBM's System/360 family of computers and the OS/360 software support package, then later writing candidly about the process in his seminal book "The Mythical Man-Month". Brooks has received many awards, including the National Medal of Technology in 1985 and the Turing Award in 1999.
Born in Durham, North Carolina, he attended Duke University, graduating in 1953 with a Bachelor of Science degree in physics, and he received a Ph.D. in applied mathematics (computer science) from Harvard University in 1956, supervised by Howard Aiken.
Brooks served as the graduate teaching assistant for Ken Iverson at Harvard's graduate program in "automatic data processing", the first such program in the world.
Brooks joined IBM in 1956, working in Poughkeepsie, New York, and Yorktown, New York. He worked on the architecture of the IBM 7030 Stretch, a $10 million scientific supercomputer of which nine were sold, and the IBM 7950 Harvest computer for the National Security Agency. Subsequently, he became manager for the development of the IBM System/360 family of computers and the OS/360 software package. During this time he coined the term "computer architecture".
In 1964, Brooks accepted an invitation to come to the University of North Carolina at Chapel Hill and founded the University's computer science department. He chaired it for 20 years and remained engaged in active research there, primarily in virtual environments and scientific visualization.
A few years after leaving IBM he wrote "The Mythical Man-Month". The seed for the book was planted by IBM's then-CEO Thomas Watson Jr., who asked in Brooks's exit interview why it was so much harder to manage software projects than hardware projects. In this book Brooks made the now-famous statement: "Adding manpower to a late software project makes it later." This has since come to be known as "Brooks's law". In addition to "The Mythical Man-Month", Brooks is also known for the paper "No Silver Bullet – Essence and Accident in Software Engineering".
In 2004 in a talk at the Computer History Museum and also in a 2010 interview in "Wired" magazine, Brooks was asked "What do you consider your greatest technological achievement?" Brooks responded, "The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere."
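The arithmetic behind that decision can be sketched quickly. The following is an illustrative estimate of character-set sizes (my own figures, not from the interview): a 6-bit byte offers only 64 code points, which is too few once lowercase letters join uppercase, digits, and punctuation.

```python
# Back-of-the-envelope arithmetic (illustrative, not from the source):
# why moving from a 6-bit to an 8-bit byte made lowercase letters feasible.
uppercase = 26
lowercase = 26
digits = 10
letters_and_digits = uppercase + lowercase + digits  # 62 characters

six_bit_codes = 2 ** 6    # 64 code points
eight_bit_codes = 2 ** 8  # 256 code points

# Letters and digits alone barely fit in 6 bits...
assert letters_and_digits <= six_bit_codes
# ...but adding even ~32 punctuation and control characters overflows,
# which is why 6-bit character sets historically dropped lowercase:
assert letters_and_digits + 32 > six_bit_codes
# An 8-bit byte leaves ample room:
assert letters_and_digits + 32 <= eight_bit_codes
print(six_bit_codes, eight_bit_codes)  # 64 256
```

The 32 extra characters assumed for punctuation and controls are a rough placeholder; the point is only that the budget bursts 64 but sits comfortably inside 256.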
A "20th anniversary" edition of "The Mythical Man-Month" with four additional chapters was published in 1995.
As well as "The Mythical Man-Month", Brooks has authored or co-authored many books and peer reviewed papers including "Automatic Data Processing", "No Silver Bullet", "Computer Architecture", and "The Design of Design".
His contributions to human–computer interaction are described in Ben Shneiderman's HCI pioneers website.
Brooks has served on a number of US national boards and committees.
In January 2005 he gave the Turing Lecture on the subject of "Collaboration and Telecollaboration in Design". In 1994 he was inducted as a Fellow of the Association for Computing Machinery.
Brooks is an evangelical Christian who is active with InterVarsity Christian Fellowship.
Brooks named his eldest son after Kenneth E. Iverson. | https://en.wikipedia.org/wiki?curid=11652 |
Figured bass
Figured bass, also called thoroughbass, is a kind of musical notation in which numerals and symbols (often accidentals) indicate intervals, chords, and non-chord tones that a musician playing piano, harpsichord, organ, lute, or another instrument capable of playing chords should play in relation to the bass note above or below which these numbers and symbols appear. Figured bass is closely associated with basso continuo, a historically improvised accompaniment used in almost all genres of music in the Baroque period of Classical music (1600–1750), though rarely in modern music.
Other systems for denoting or representing chords include plain staff notation, used in classical music; Roman numerals, commonly used in harmonic analysis;
chord letters, sometimes used in modern musicology; the Nashville Number System; and various chord names and symbols used in jazz and popular music (e.g., C Major or simply C; D minor, Dm, or D-; G7, etc.).
Basso continuo parts, almost universal in the Baroque era (1600–1750), provided the harmonic structure of the music by supplying a bassline and a chord progression. The phrase is often shortened to "continuo", and the instrumentalists playing the continuo part are called the "continuo group".
The makeup of the continuo group is often left to the discretion of the performers (or, for a large performance, the conductor), and practice varied enormously within the Baroque period. At least one instrument capable of playing chords must be included, such as a piano, harpsichord, organ, lute, theorbo, guitar, regal, or harp. In addition, any number of instruments that play in the bass register may be included, such as cello, double bass, bass viol, or bassoon. The most common combination, at least in modern performances, is harpsichord and cello for instrumental works and secular vocal works, such as operas, and organ and cello for sacred music. A double bass may be added, particularly when accompanying a lower-pitched solo voice (e.g., a bass singer).
Typically performers match the instrument families used in the full ensemble: including bassoon when the work includes oboes or other winds, but restricting it to cello and/or double bass if only strings are involved. Harps, lutes, and other handheld instruments are more typical of early 17th-century music. Sometimes instruments are specified by the composer: in "L'Orfeo" (1607) Monteverdi calls for an exceptionally varied instrumentation, with multiple harpsichords and lutes with a bass violin in the pastoral scenes followed by lamenting to the accompaniment of "organo di legno" and "chitarrone", while Charon stands watch to the sound of a regal.
The keyboard (or other chord-playing instrument) player "realizes" (adds in an improvised fashion) a continuo part by playing, in addition to the notated bass line, notes above it to complete chords, either determined ahead of time or improvised in performance. The figured bass notation, described below, is a guide, but performers are also expected to use their musical judgment and the other instruments or voices (notably the lead melody and any accidentals that might be present in it) as a guide. Experienced players sometimes incorporate motives found in the other instrumental parts into their improvised chordal accompaniment. Modern editions of such music usually supply a realized keyboard part, fully written out in staff notation for a player, in place of improvisation. With the rise in historically informed performance, however, the number of performers who are able to improvise their parts from the figures, as Baroque players would have done, has increased.
Basso continuo, though an essential structural and identifying element of the Baroque period, rapidly declined in the classical period (up to around 1800). A late example is C. P. E. Bach's Concerto in D minor for flute, strings and basso continuo (1747). Examples of its use in the 19th century are rarer, but they do exist: masses by Anton Bruckner, Beethoven, and Franz Schubert, for example, have a basso continuo part that was intended for an organist.
A part notated with figured bass consists of a bass line notated with notes on a musical staff plus added numbers and accidentals (or in some cases (back)slashes added to a number) beneath the staff to indicate what intervals above the bass notes should be played, and therefore which inversions of which chords are to be played.
The phrase "tasto solo" indicates that only the bass line (without any upper chords) is to be played for a short period, usually until the next figure is encountered. This instructs the chord-playing instrumentalist not to play any improvised chords for a period. The reason "tasto solo" had to be specified was because it was an accepted convention that if no figures were present in a section of otherwise figured bass line, the chord-playing performer would either assume that it was a root-position triad, or deduce from the harmonic motion that another figure was implied. For example, if a continuo part in the key of C begins with a C bass note in the first measure, which descends to a B in the second measure, even if there were no figures, the chord-playing instrumentalist would deduce that this was most likely a first inversion dominant chord (spelled B–D–G, from bottom note of the chord to the top).
Composers were inconsistent in the usages described below. Especially in the 17th century, the numbers were omitted whenever the composer thought the chord was obvious. Early composers such as Claudio Monteverdi often specified the octave by the use of compound intervals such as 10, 11, and 15.
Contemporary figured bass abbreviations for triads and seventh chords are shown in the table to the right.
The numbers indicate the number of scale steps above the given bass line at which notes should be played. For example:
Here, the bass note is a C, and the numbers 4 and 6 indicate that notes a fourth and a sixth above it should be played, that is, an F and an A. In other words, the figures call for the second inversion of an F major chord, which can be realized as C, F and A, from the bass note upward.
In cases where the numbers 3 or 5 would normally be understood, these are usually left out. For example, a bass note with no figures at all implies 5 and 3 (a root-position triad), so a plain C bass carries the same meaning as a C figured with 5 and 3, and can be realized as C, E and G, although the performer may choose which octave to play the notes in and will often elaborate them in some way, such as by playing them as arpeggios rather than as block chords, or by adding improvised ornaments, depending on the tempo and texture of the music.
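The counting convention described above can be sketched in code. This is a simplification of my own (diatonic C major only, ignoring octaves, accidentals, and voicing), not a standard realization algorithm:

```python
# Sketch: realizing figured-bass numbers as pitches above a bass note,
# counting diatonic scale steps in C major. Octave placement, accidentals,
# and voicing are deliberately ignored.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def realize(bass, figures=(5, 3)):
    """Return the notes a figured bass implies above `bass`.

    An unfigured bass defaults to 5/3, i.e. a root-position triad.
    """
    start = C_MAJOR.index(bass)
    # Interval names count inclusively: a "fourth" above C is F,
    # three letter names up, hence the `fig - 1` offset.
    return [C_MAJOR[(start + fig - 1) % 7] for fig in sorted(figures)]

print(realize("C", (6, 4)))  # ['F', 'A'] -> with bass C, an F major 6/4 chord
print(realize("C"))          # ['E', 'G'] -> implied 5/3, C major root position
print(realize("B", (6, 3)))  # ['D', 'G'] -> first-inversion G major over B
```

The inclusive counting is why a figure of 4 maps to only three scale steps above the bass: a "fourth" spans four letter names including both endpoints.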
Sometimes, other numbers are omitted: a 2 on its own (or 4 and 2) indicates the full figuring 6, 4 and 2, for example. From the figured-bass writer's perspective, this bass note is obviously a third-inversion seventh chord, so the sixth interval is viewed as an interval that the player should automatically infer. In many cases entire figures can be left out, usually where the chord is obvious from the progression or the melody.
Sometimes the chord changes but the bass note itself is held. In these cases the figures for the new chord are written wherever in the bar they are meant to occur.
When the bass note changes but the notes in the chord above it are to be held, a line is drawn next to the figure or figures, for as long as the chord is to be held, to indicate this:
Note that when the bass moves, the chord intervals have effectively changed, in this case from 5 and 3 to 6 and 3, but no additional numbers are written.
When an accidental is shown on its own without a number, it applies to the note a third above the lowest note; most commonly, this is the third of the chord. Otherwise, if a number is shown, the accidental affects the interval named by that number. For example, a sharp standing alone beneath a bass note raises the third above it by a semitone, which is the widespread default meaning of an accidental without a number.
Sometimes the accidental is placed after the number rather than before it.
Alternatively, a cross placed next to a number indicates that the pitch of that note should be raised (augmented) by a semitone (so that if it is normally a flat it becomes a natural, and if it is normally a natural it becomes a sharp). A different way to indicate this is to draw a backslash through the number itself. A sharp before the number, a cross next to it, and a backslash through it therefore all indicate the same thing.
More rarely, a "forward" slash through a number indicates that a pitch is to be lowered (diminished) by a semitone:
When sharps or flats are used with key signatures, they may have a slightly different meaning, especially in 17th-century music. A sharp might be used to cancel a flat in the key signature, or vice versa, instead of a natural sign.
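The equivalent raising and lowering marks described above can be normalized mechanically. The following tokenizer is a hypothetical sketch of my own; the plain-text token spellings (for example `6\` for a backslash through the 6) are assumptions, not a standard encoding:

```python
# Sketch: normalizing figured-bass accidental marks to a semitone shift.
# Raising marks: a sharp, a cross, or a backslash through the number.
# Lowering marks: a flat, or a forward slash through the number.
def parse_figure(token):
    """Return (interval, semitone_shift) for one figured-bass token."""
    shift = 0
    if any(c in token for c in "#+\\"):
        shift = 1
    elif any(c in token for c in "b/"):
        shift = -1
    digits = "".join(ch for ch in token if ch.isdigit())
    # A bare accidental applies to the third above the bass by default.
    interval = int(digits) if digits else 3
    return interval, shift

print(parse_figure("#6"))   # (6, 1)   sharp before the number
print(parse_figure("6\\"))  # (6, 1)   backslash through the number
print(parse_figure("#"))    # (3, 1)   accidental alone applies to the third
print(parse_figure("6/"))   # (6, -1)  forward slash lowers the sixth
```

A real implementation would also resolve the shift against the key signature, since (as noted above) a sharp in 17th-century sources may merely cancel a flat rather than raise a natural.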
Improvised organ accompaniments for choral works were common by the late 16th century, and separate organ parts showing only a bass line date back to at least 1587. In the mid-16th century, some Italian church composers began to write polychoral works. These pieces, for two or more choirs, were created in recognition of particularly festive occasions, or else to take advantage of certain architectural properties of the buildings in which they were performed which created natural reverberation. With eight or more polyphonic voice parts to keep track of in performance, works in polychoral style required some sort of instrumental accompaniment. They were also known as "cori spezzati", since the choirs were structured in musically independent or interlocking parts, and may sometimes also have been placed in physically different locations.
The concept of allowing two or more concurrently performing choirs to be independent structurally would probably not have arisen had there not been an already existing practice of choral accompaniment in church. Financial and administrative records indicate that the presence of organs in churches dates back to the 15th century, although their precise use is not known. Many first-person accounts of church services from the 15th and 16th centuries imply organ accompaniment in some portions of the liturgy, as well as indicating that the unaccompanied practice of the "Cappella Sistina" was somewhat unusual. By early in the 16th century, it seems that accompaniment by organ, at least in smaller churches, was commonplace, and commentators of the time lamented on occasion the declining quality of church choirs. Even more tellingly, many manuscripts, especially from the middle of the century and later, feature written-out organ accompaniments. It is this last observation which leads directly to the foundations of continuo practice, by way of a closely related practice called "basso seguente" or "following bass".
Written-out accompaniments are found most often in early polychoral works (those composed, obviously, before the onset of concerted style and its explicit instrumental lines), and generally consist of a complete reduction (to what would later be called the "grand staff") of one choir's parts. In addition, for those parts of the music during which that choir rested, a single line was provided, consisting of the lowest note being sung at any given time, which could lie in any vocal part. Even in early concerted works by the Gabrielis (Andrea and Giovanni), Monteverdi and others, the lowest part, which modern performers colloquially call "continuo", is actually a "basso seguente", though slightly different: with separate instrumental parts, the lowest note of the moment in the instrumental parts is often lower than any note being sung.
The first known published instance of a basso seguente was a book of Introits and Alleluias by the Venetian Placido Falconio from 1575. What is known as "figured" continuo, which also features a bass line that because of its structural nature may differ from the lowest note in the upper parts, developed over the next quarter-century. The composer Lodovico Viadana is often credited with the first publication of such a continuo, in a 1602 collection of motets that according to his own account had been originally written in 1594. Viadana's continuo, however, did not include figures. The earliest extant part with sharp and flat signs above the staff is a motet by Giovanni Croce, also from 1594.
Following and figured basses developed concurrently in secular music; composers of madrigals (a secular song form) such as Emilio de' Cavalieri and Luzzasco Luzzaschi began in the late 16th century to write works explicitly for a soloist with accompaniment, following an already standing practice of performing multi-voice madrigals this way, and also responding to the rising fashion at certain aristocratic courts of featuring popular individual singers. This tendency toward solo-with-accompaniment texture in secular vocal music culminated in the genre of monody, just as in sacred vocal music it resulted in the sacred concerto for various forces, including few voices and even solo voices. The use of numerals to indicate accompanying sonorities in accompaniment parts began with the earliest operas, composed by Cavalieri and Giulio Caccini.
These new genres, just as the polychoral one probably was, were made possible by the existence of a semi- or fully independent bass line. In turn, the separate bass line, with figures added above it to indicate other chordal notes, shortly became "functional", as the sonorities became "harmonies" (see harmony and tonality), and music came to be seen in terms of a melody supported by chord progressions (homophony), rather than the interlocking, equally important lines of polyphony. The figured bass, therefore, was integral to the development of the Baroque style, by extension the "classical" style which built on the innovations of the Baroque era, and by further extension most subsequent musical styles.
As part of the new galant style in the mid-18th century, with its emphasis on lighter and more varied textures, and singable melodies, orchestral music gradually phased out the basso continuo, and solo-with-accompaniment textures increasingly featured fully written-out accompaniments. By the second half of the 18th century, figured bass was almost entirely eliminated, except in sacred choral music, where it lingered until well after 1800.
Many composers and theorists of the 16th, 17th, and 18th centuries wrote "how-to guides" for chord-playing musicians, to aid them in realizing figured bass notation, including Gregor Aichinger, Filippo Bonaffino, Friedrich Erhard Niedt, Jean-Philippe Rameau, Georg Philipp Telemann, C. P. E. Bach, and Michael Praetorius.
In the 20th and 21st centuries, figured bass is also sometimes used by classical musicians as a shorthand way of indicating chords when a composer is sketching out ideas for a new piece or when a music student is analyzing the harmony of a notated piece of music (e.g., a Bach chorale or a Chopin piano prelude). Figured bass is not generally used in modern musical compositions, except for neo-Baroque pieces. A form of figured bass is used in notation of accordion music; another simplified form is used to notate guitar chords. In the 2000s, outside of professional Baroque ensembles that specialize in the performance practice of the Baroque era, the most common use of figured bass notation is to indicate the inversion of a chord in a harmonic analysis or composer's sketch, often without the staff notation, using a letter note name followed by the figure. For instance, if a piano piece had a C major triad in the right hand (C–E–G), with a G as the bass note in the left hand, this would be a second inversion C major chord, which would be written G6/4. If this same C major triad had an E in the bass, it would be a first inversion chord, which would be written E6 or E6/3 (this is different from jazz notation, where C6 means the major sixth chord C–E–G–A, i.e., a C major chord with an added 6th degree). The symbols can also be used with Roman numerals in analyzing functional harmony, a usage called "figured Roman"; see chord symbol.
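The mapping from bass note to inversion figure described above is mechanical enough to sketch in code. The following minimal Python illustration is not from any standard music library; the function name and data layout are my own, and it assumes a simple triad whose tones are named by pitch class in root-position order:

```python
# Minimal sketch: label a triad's inversion in figured-bass style.
# Illustrative only; assumes simple triads with distinct pitch classes.

def inversion_figures(chord_tones, bass):
    """Return a letter-name-plus-figure label for a triad.

    chord_tones: the triad's pitch classes in root position, e.g. ["C", "E", "G"]
    bass: the pitch class sounding in the bass
    """
    # Position of the bass within the root-position triad determines the figure:
    # root in bass -> 5/3, third in bass -> 6/3, fifth in bass -> 6/4.
    figures = {0: "5/3", 1: "6/3", 2: "6/4"}
    position = chord_tones.index(bass)
    return f"{bass}{figures[position]}"

# C major triad (C–E–G) with G in the bass: second inversion.
print(inversion_figures(["C", "E", "G"], "G"))  # G6/4
# The same triad with E in the bass: first inversion.
print(inversion_figures(["C", "E", "G"], "E"))  # E6/3
```

As in the prose example, the second-inversion C major chord over G comes out as G6/4, and the first inversion over E as E6/3.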
Fashion
Fashion is a popular aesthetic expression at a particular time and place and in a specific context, especially in clothing, footwear, lifestyle, accessories, makeup, hairstyle, and body proportions. Whereas a trend often connotes a peculiar aesthetic expression, often lasting no longer than a season, fashion is a distinctive and industry-supported expression traditionally tied to the fashion season and collections. Style is an expression that lasts over many seasons and is often connected to cultural movements and social markers, symbols, class, and culture (e.g., Baroque, Rococo). According to sociologist Pierre Bourdieu, fashion connotes "the latest fashion, the latest difference."
Even though they are often used together, the term fashion differs from clothes and costume: the first describes the material and technical garment, whereas the second has been relegated to special senses like fancy-dress or masquerade wear. Fashion instead describes the social and temporal system that "activates" dress as a social signifier in a certain time and context. Philosopher Giorgio Agamben connects fashion to the current intensity of the qualitative moment, to the temporal aspect the Greeks called "kairos", whereas clothes belong to the quantitative, to what the Greeks called "chronos".
Exclusive brands aspire to the label "haute couture", but the term is technically limited to members of the "Chambre Syndicale de la Haute Couture" in Paris. Haute couture is more aspirational, inspired by art, culture, and movement, and is extremely exclusive in nature.
With increasing mass-production of consumer commodities at lower prices, and with global reach, sustainability has become an urgent issue among politicians, brands, and consumers.
Early Western travelers, traveling to India, Persia, Turkey, or China, would frequently remark on the absence of change in fashion in those countries. The Japanese "shōgun"'s secretary bragged (not completely accurately) to a Spanish visitor in 1609 that Japanese clothing had not changed in over a thousand years. However, there is considerable evidence in Ming China of rapidly changing fashions in Chinese clothing. Changes in costume often took place at times of economic or social change, as occurred in ancient Rome and the medieval Caliphate, followed by a long period without significant changes. In 8th-century Moorish Spain, the musician Ziryab introduced to Córdoba sophisticated clothing styles based on seasonal and daily fashions from his native Baghdad, modified by his own inspiration. Similar changes in fashion occurred in the 11th century in the Middle East following the arrival of the Turks, who introduced clothing styles from Central Asia and the Far East.
Additionally, there is a long history of fashion in West Africa. Cloth was used as a form of currency in trade with the Portuguese and Dutch as early as the 16th century. Locally produced cloth and cheaper European imports were assembled into new styles to accommodate the growing elite class of West Africans and resident gold and slave traders. There was an exceptionally strong tradition of cloth-weaving in Oyo and the areas inhabited by the Igbo people.
The beginning in Europe of continual and increasingly rapid change in clothing styles can be fairly reliably dated. Historians, including James Laver and Fernand Braudel, date the start of Western fashion in clothing to the middle of the 14th century, though they tend to rely heavily on contemporary imagery, and illuminated manuscripts were not common before the fourteenth century. The most dramatic early change in fashion was a sudden drastic shortening and tightening of the male over-garment from calf-length to barely covering the buttocks, sometimes accompanied by stuffing in the chest to make it look bigger. This created the distinctive Western outline of a tailored top worn over leggings or trousers.
The pace of change accelerated considerably in the following century, and women's and men's fashion, especially in the dressing and adorning of the hair, became equally complex. Art historians are, therefore, able to use fashion with confidence and precision to date images, often to within five years, particularly in the case of images from the 15th century. Initially, changes in fashion led to a fragmentation across the upper classes of Europe of what had previously been a very similar style of dressing and the subsequent development of distinctive national styles. These national styles remained very different until a counter-movement in the 17th to 18th centuries imposed similar styles once again, mostly originating from Ancien Régime France. Though the rich usually led fashion, the increasing affluence of early modern Europe led to the bourgeoisie and even peasants following trends at a distance, but still uncomfortably close for the elites – a factor that Fernand Braudel regards as one of the main motors of changing fashion.
In the 16th century, national differences were at their most pronounced. Ten 16th-century portraits of German or Italian gentlemen may show ten entirely different hats. Albrecht Dürer illustrated the differences in his actual (or composite) contrast of Nuremberg and Venetian fashions at the close of the 15th century. The "Spanish style" of the late 16th century began the move back to synchronicity among upper-class Europeans, and after a struggle in the mid-17th century, French styles decisively took over leadership, a process completed in the 18th century.
Though different textile colors and patterns changed from year to year, the cut of a gentleman's coat and the length of his waistcoat, or the pattern to which a lady's dress was cut, changed more slowly. Men's fashions were primarily derived from military models, and changes in a European male silhouette were galvanized in theaters of European war where gentleman officers had opportunities to make notes of different styles such as the "Steinkirk" cravat or necktie.
Though there had been distribution of dressed dolls from France since the 16th century and Abraham Bosse had produced engravings of fashion in the 1620s, the pace of change picked up in the 1780s with increased publication of French engravings illustrating the latest Paris styles. By 1800, all Western Europeans were dressing alike (or thought they were); local variation became first a sign of provincial culture and later a badge of the conservative peasant.
Although tailors and dressmakers were no doubt responsible for many innovations, and the textile industry indeed led many trends, the history of fashion design is generally understood to date from 1858, when the English-born Charles Frederick Worth opened the first authentic "haute couture" house in Paris. "Haute couture house" was the designation established by the French government for fashion houses that met the standards of the industry. These fashion houses have to adhere to standards such as keeping at least twenty employees engaged in making the clothes, showing two collections per year at fashion shows, and presenting a certain number of patterns to customers. Since then, the idea of the fashion designer as a celebrity in his or her own right has become increasingly dominant.
Although aspects of fashion can be feminine or masculine, some trends are androgynous. The idea of unisex dressing originated in the 1960s, when designers such as Pierre Cardin and Rudi Gernreich created garments, such as stretch jersey tunics or leggings, meant to be worn by both males and females. The impact of unisex expands more broadly to encompass various themes in fashion, including androgyny, mass-market retail, and conceptual clothing. The fashion trends of the 1970s, such as sheepskin jackets, flight jackets, duffel coats, and unstructured clothing, influenced men to attend social gatherings without a tuxedo jacket and to accessorize in new ways. Some men's styles blended sensuality and expressiveness despite the conservative trend; the growing gay-rights movement and an emphasis on youth allowed for a new freedom to experiment with style, and fabrics such as wool crepe, which had previously been associated with women's attire, were used by designers when creating male clothing.
The four major current fashion capitals are acknowledged to be Paris, Milan, New York City, and London, which are all headquarters to the most significant fashion companies and are renowned for their major influence on global fashion. Fashion weeks are held in these cities, where designers exhibit their new clothing collections to audiences. A succession of major designers such as Coco Chanel and Yves Saint-Laurent have kept Paris as the center most watched by the rest of the world, although "haute couture" is now subsidized by the sale of ready-to-wear collections and perfume using the same branding.
Modern Westerners have a vast number of choices available in the selection of their clothes. What a person chooses to wear can reflect his or her personality or interests. When people who have high cultural status start to wear new or different clothes, a fashion trend may start. People who like or respect these people become influenced by their style and begin wearing similarly styled clothes. Fashions may vary considerably within a society according to age, social class, generation, occupation, and geography and may also vary over time. If an older person dresses according to the fashion young people use, he or she may look ridiculous in the eyes of both young and older people. The terms "fashionista" and "fashion victim" refer to someone who slavishly follows current fashions.
One can regard the system of sporting various fashions as a fashion language incorporating various fashion statements using a grammar of fashion. (Compare some of the work of Roland Barthes.)
In recent years, Asian fashion has become increasingly significant in local and global markets. Countries such as China, Japan, India, and Pakistan have traditionally had large textile industries, which have often been drawn upon by Western designers, but now Asian clothing styles are also gaining influence based on their own ideas.
The notion of the global fashion industry is a product of the modern age. Before the mid-19th century, most clothing was custom-made. It was handmade for individuals, either as home production or on order from dressmakers and tailors. By the beginning of the 20th century—with the rise of new technologies such as the sewing machine, the rise of global capitalism and the development of the factory system of production, and the proliferation of retail outlets such as department stores—clothing had increasingly come to be mass-produced in standard sizes and sold at fixed prices.
Although the fashion industry developed first in Europe and America, today it is an international and highly globalized industry, with clothing often designed in one country, manufactured in another, and sold worldwide. For example, an American fashion company might source fabric in China and have the clothes manufactured in Vietnam, finished in Italy, and shipped to a warehouse in the United States for distribution to retail outlets internationally. The fashion industry has long been one of the largest employers in the United States, and it remains so in the 21st century. However, U.S. employment declined considerably as production increasingly moved overseas, especially to China. Because data on the fashion industry typically are reported for national economies and expressed in terms of the industry's many separate sectors, aggregate figures for the world production of textiles and clothing are difficult to obtain. However, by any measure, the clothing industry accounts for a significant share of world economic output.
The fashion industry consists of four levels: the production of raw materials, principally fibers and textiles but also leather and fur; the production of fashion goods by designers, manufacturers, contractors, and others; retail sales; and various forms of advertising and promotion.
These levels consist of many separate but interdependent sectors. These sectors are Textile Design and Production, Fashion Design and Manufacturing, Fashion Retailing, Marketing and Merchandising, Fashion Shows, and Media and Marketing. Each sector is devoted to the goal of satisfying consumer demand for apparel under conditions that enable participants in the industry to operate at a profit.
Fashion trends are influenced by several factors, including cinema, celebrities, climate, creative explorations, and political, economic, social, and technological circumstances. Examining these factors is called a PEST analysis. Fashion forecasters can use this information to help determine the growth or decline of a particular trend.
Politics has played a central role in the development of fashion. For example, First Lady Jacqueline Kennedy was a fashion icon of the early 1960s who led the formal dressing trend. Her Chanel suits, structural Givenchy shift dresses, and soft-colored Cassini coats with large buttons created her elegant look and set a delicate trend.
Furthermore, political revolution has also made much impact on fashion trends. For example, during the 1960s the economy became wealthier, the divorce rate was increasing, and the government approved the birth control pill. This revolution inspired the younger generation to rebel. In 1964, the leg-baring mini-skirt became a significant fashion trend of the decade, as fashion designers began to experiment with garment shapes: loose sleeveless dresses, micro-minis, flared skirts, and trumpet sleeves. The mini-skirt trend thus became an icon of the 1960s.
Moreover, political movements have built an impressive relationship with fashion trends. For instance, during the Vietnam War, the youth of America made a movement that affected the whole country. In the 1960s, fashion was full of fluorescent colors, print patterns, bell-bottom jeans, and fringed vests, and such clothing became the protest outfit of the decade. This trend was called hippie, and it still affects current fashion trends.
Technology plays a significant role in most aspects of today's society. Technological influences are growing more apparent in the fashion industry. Advances and new developments are shaping and creating current and future trends.
Developments such as wearable technology have become an essential trend in fashion. They will continue with advances such as clothing constructed with solar panels that charge devices and smart fabrics that enhance wearer comfort by changing color or texture based on environmental changes.
The fashion industry is seeing how 3D printing technology has influenced designers such as Iris van Herpen and Kimberly Ovitz. These designers have been heavily experimenting with and developing 3D-printed couture pieces. As the technology matures, 3D printers will become more accessible to designers and, eventually, consumers, which could potentially reshape the fashion industry entirely.
Internet technology, such as online retailers and social media platforms, has given way for trends to be identified, marketed, and sold immediately. Styles and trends are easily conveyed online to attract trendsetters. Posts on Instagram or Facebook can quickly increase awareness about new trends in fashion, which subsequently may create high demand for specific items or brands; new "buy now" button technology can link these styles directly with sales.
Machine vision technology has been developed to track how fashions spread through society. The industry can now see a direct correlation between fashion shows and street-chic outfits. The effects can now be quantified, providing valuable feedback to fashion houses, designers, and consumers regarding trends.
Military technology has played an essential role in the fashion industry. The camouflage pattern in clothing was developed to help military personnel be less visible to enemy forces. A trend emerged in the 1960s when camouflage fabric was introduced to streetwear. The camouflage fabric trend has disappeared and resurfaced several times since then. Camouflage started to appear in high fashion by the 1990s. Designers such as Valentino, Dior, and Dolce & Gabbana incorporated camouflage into their runway and ready-to-wear collections.
Fashion relates to the social and cultural context of an environment. According to Matika, "Elements of popular culture become fused when a person's trend is associated with a preference for a genre of music…like music, news or literature, fashion has been fused into everyday lives." Fashion is not only about pure aesthetic values; it is also a medium for performers to create an overall atmosphere and express their opinions through music videos. Of the music video "Formation" by Beyoncé, Carlos writes, "The pop star pays homage to her Creole root... tracing the roots of the Louisiana cultural nerve center from the post-abolition era to present day, Beyoncé catalogs the evolution of the city's vibrant style and its tumultuous history all at once. Atop a New Orleans police car in a red-and-white Gucci high-collar dress and combat boots, she sits among the ruins of Hurricane Katrina, immediately implanting herself in the biggest national debate on police brutality and race relations in modern day."
A runway show is a reflection of fashion trends and a designer's thought. For a designer like Vivienne Westwood, runway shows are a platform for her voice on politics and current events. For her AW15 menswear show, according to Water, "models with severely bruised faces channeled eco-warriors on a mission to save the planet." Another recent example is the staged feminist protest march at Chanel's SS15 show, with models chanting words of empowerment and carrying signs like "Feminist but feminine" and "Ladies first." According to Water, "The show tapped into Chanel's long history of championing female independence: founder Coco Chanel was a trailblazer for liberating the female body in the post-WWI era, introducing silhouettes that countered the restrictive corsets then in favour."
With increasing environmental awareness, the economic imperative to "spend now, think later" is coming under increasing scrutiny. Today's consumer tends to be more mindful about consumption, looking for just-enough and better, more durable options. People have also become more conscious of the impact their everyday consumption has on the environment and society. These initiatives are often described as a move towards sustainable fashion, yet critics argue that a circular economy based on growth is an oxymoron, an increasing spiral of consumption rather than a utopian cradle-to-cradle circular solution.
In today's linear economic system, manufacturers extract resources from the earth to make products that will soon be discarded in landfills. Under the circular model, by contrast, the production of goods operates like systems in nature, where the waste and demise of a substance become the food and source of growth for something new. Companies such as MUD Jeans, which is based in the Netherlands, employ a leasing scheme for jeans. This Dutch company "represents a new consuming philosophy that is about using instead of owning," according to MUD's website. The concept also protects the company from volatile cotton prices. Consumers pay €7.50 a month for a pair of jeans; after a year, they can return the jeans to Mud, trade them for a new pair and start another year-long lease, or keep them. MUD is responsible for any repairs during the lease period. Another ethical fashion company, Patagonia, set up the first multi-seller branded store on eBay in order to facilitate secondhand sales; consumers who take the Common Threads pledge can sell in this store and have their gear listed on Patagonia.com's "Used Gear" section.
Consumption as a share of gross domestic product in China has fallen for six decades, from 76 percent in 1952 to 28 percent in 2011. China plans to reduce tariffs on a number of consumer goods and expand its 72-hour transit visa plan to more cities in an effort to stimulate domestic consumption.
The announcement of import tax reductions follows changes in June 2015, when the government cut the tariffs on clothing, cosmetics, and various other goods by half. Among the changes are easier tax refunds for overseas shoppers and accelerated openings of more duty-free shops in cities covered by the 72-hour visa scheme. The 72-hour visa was introduced in Beijing and Shanghai in January 2013 and has been extended to 18 Chinese cities.
According to reports at the same time, Chinese consumer spending in other countries such as Japan has slowed even though the yen has dropped. Nonetheless, the domestic fashion market is expected to grow over the next five years.
China is an interesting market for fashion retail because Chinese consumers' motivations to shop for fashion items differ from those of Western audiences. Demographics have limited association with shopping motivation: occupation, income, and education level have no impact, unlike in Western countries. Chinese high-street shoppers prefer adventure and social shopping, while online shoppers are motivated by idea shopping. Another difference is how gratification and idea shopping influence spending of over ¥1,000 per month on fashion items, whereas regular spending is influenced by value shopping.
Consumers of different groups have varying needs and demands. Factors taken into consideration when thinking of consumers' needs include key demographics.
To understand consumers' needs and predict fashion trends, fashion companies have to do market research. There are two research methods: primary and secondary. Secondary research uses information that has already been collected, for example from a book or an article. Primary research collects data through surveys, interviews, observation, and/or focus groups, and often focuses on large sample sizes to determine customers' motivations to shop.
A benefit of primary research is that specific information about a fashion brand's consumers is explored. Surveys are helpful tools; questions can be open-ended or closed-ended. A drawback of surveys and interviews is that the answers can be biased, due to wording in the survey or to face-to-face interactions. Focus groups, of about 8 to 12 people, can be beneficial because several points can be addressed in depth. However, there are drawbacks to this tactic, too: with such a small sample size, it is hard to know if the greater public would react the same way as the focus group. Observation can really help a company gain insight into what a consumer truly wants. There is less bias because consumers are just performing their daily tasks, not necessarily realizing they are being observed. For example, when observing the public by taking street-style photos, the consumers did not get dressed in the morning knowing their photo would be taken; they simply wear what they would normally wear. Through observation, patterns can be seen, helping trend forecasters know what their target market needs and wants.
Knowing the needs of consumers will increase a fashion company's sales and profits. Through research and studying consumers' lives, the needs of the customer can be obtained and can help fashion brands know what trends the consumers are ready for.
Consumption is driven not only by need; the symbolic meaning for consumers is also a factor. Consumers engaging in symbolic consumption may develop a sense of self over an extended period as various objects are collected as part of the process of establishing their identity and, when the symbolic meaning is shared in a social group, of communicating their identity to others. For teenagers, consumption plays a role in distinguishing the child self from the adult. Researchers have found that the fashion choices of teenagers are used for self-expression and also to recognize other teens who wear similar clothes. The symbolic associations of clothing items can link an individual's personality and interests, with music as a prominent factor influencing fashion decisions.
The media plays a significant role when it comes to fashion. For instance, an important part of fashion is fashion journalism. Editorial critique, guidelines, and commentary can be found on television and in magazines, newspapers, fashion websites, social networks, and fashion blogs. In recent years, fashion blogging and YouTube videos have become a major outlet for spreading trends and fashion tips, creating an online culture of sharing one's style on a website or Instagram account. Through these media outlets, readers and viewers all over the world can learn about fashion, making it very accessible. In addition to fashion journalism, another media platform that is important in the fashion industry is advertising. Advertisements provide information to audiences and promote the sales of products and services. The fashion industry utilizes advertisements to attract consumers and promote its products to generate sales. A few decades ago, when technology was still underdeveloped, advertisements relied heavily on radio, magazines, billboards, and newspapers. These days, advertising takes more varied forms, such as television ads, online ads on websites, and posts, videos, and live streaming on social media platforms.
At the beginning of the 20th century, fashion magazines began to include photographs of various fashion designs and became even more influential than in the past. In cities throughout the world these magazines were greatly sought after and had a profound effect on public taste in clothing. Talented illustrators drew exquisite fashion plates for the publications which covered the most recent developments in fashion and beauty. Perhaps the most famous of these magazines was "La Gazette du Bon Ton", which was founded in 1912 by Lucien Vogel and regularly published until 1925 (with the exception of the war years).
"Vogue", founded in the United States in 1892, has been the longest-lasting and most successful of the hundreds of fashion magazines that have come and gone. Increasing affluence after World War II and, most importantly, the advent of cheap color printing in the 1960s led to a huge boost in its sales and heavy coverage of fashion in mainstream women's magazines, followed by men's magazines in the 1990s. One example of "Vogue"'s popularity is the younger version, "Teen Vogue", which covers clothing and trends that are targeted more toward the "fashionista on a budget". Haute couture designers followed the trend by starting ready-to-wear and perfume lines which are heavily advertised in the magazines and now dwarf their original couture businesses. A recent development within fashion print media is the rise of text-based and critical magazines which aim to prove that fashion is not superficial, by creating a dialogue between fashion academia and the industry. Examples of this trend are "Fashion Theory" (1997) and "Vestoj" (2009). Television coverage began in the 1950s with small fashion features. In the 1960s and 1970s, fashion segments on various entertainment shows became more frequent, and by the 1980s, dedicated fashion shows such as "Fashion Television" started to appear. "FashionTV" was the pioneer in this undertaking and has since grown to become the leader in both fashion television and new media channels. The fashion industry is also beginning to promote styles through bloggers on social media. "Vogue" named Chiara Ferragni "blogger of the moment" due to the rise in followers of her fashion blog.
A few days after the 2010 Fall Fashion Week in New York City came to a close, "The New Islander"'s Fashion Editor, Genevieve Tax, criticized the fashion industry for running on a seasonal schedule of its own, largely at the expense of real-world consumers. "Because designers release their fall collections in the spring and their spring collections in the fall, fashion magazines such as "Vogue" always and only look forward to the upcoming season, promoting parkas come September while issuing reviews on shorts in January", she writes. "Savvy shoppers, consequently, have been conditioned to be extremely, perhaps impractically, farsighted with their buying."
The fashion industry has been the subject of numerous films and television shows, including the reality show "Project Runway" and the drama series "Ugly Betty". Specific fashion brands have been featured in film, not only as product placement opportunities, but as bespoke items that have subsequently led to trends in fashion.
Videos in general have been very useful in promoting the fashion industry. This is evident not only from television shows directly spotlighting the fashion industry, but also movies, events and music videos which showcase fashion statements as well as promote specific brands through product placements.
Some fashion advertisements have been accused of racism and have led to consumer boycotts. The globally known Swedish fashion brand H&M faced this issue with one of its children's wear advertisements in 2018. The ad featured a black child wearing a hoodie with the slogan "coolest monkey in the jungle" printed across the center. When it was released, it immediately became controversial and even led to a boycott. Many people, including celebrities, posted on social media about their resentment towards H&M and their refusal to work with the brand or buy its products. H&M issued a statement saying "we apologise to anyone this may have offended", which seemed insincere to some.
Another fashion advertisement accused of racism came from GAP, an American worldwide clothing brand, which collaborated with Ellen DeGeneres on an advertisement in 2016. It features four playful young girls, with a tall white girl leaning her arm on a shorter black girl's head. When the ad was released, some viewers harshly criticized it for conveying passive racism. A representative from The Root, a black culture magazine, commented that the ad portrays the message that black people are undervalued and seen as props for white people to look better. There were different points of view on the issue, with some saying that people were being too sensitive and others taking offense. Regardless of the various views, GAP replaced the ad with a different image and apologized to critics.
Many fashion brands have published ads that were provocative and sexy in order to attract customers' attention. The British high fashion brand Jimmy Choo was blamed for sexism in an ad featuring a female British model wearing the brand's boots. In this two-minute ad, men whistle at the model as she walks down the street in a red, sleeveless mini dress. The ad drew considerable backlash and criticism from viewers, since sexual harassment and misconduct were prominent issues at the time, as they remain today. Many people showed their dismay through social media posts, leading Jimmy Choo to pull the ad from its social media platforms.
French luxury fashion brand Yves Saint Laurent also faced this issue with a print ad shown in Paris in 2017. The ad showed a female model wearing fishnet tights and roller-skate stilettos, nearly lying down with her legs open in front of the camera. It drew harsh comments from viewers and from directors of the French advertising organization for going against the advertising codes related to "respect for decency, dignity and those prohibiting submission, violence or dependence, as well as the use of stereotypes"; they added that the ad was causing "mental harm to adolescents." Many sarcastic comments were made on social media about the ad, and the poster was removed from the city.
Fashion public relations involves being in touch with a company's audiences and creating strong relationships with them, reaching out to media and initiating messages that project positive images of the company. Social media plays an important role in modern-day fashion public relations; enabling practitioners to reach a wide range of consumers through various platforms.
Building brand awareness and credibility is a key implication of good public relations. In some cases, great hype is built about new designers' collections before they are released into the market, due to the immense exposure generated by practitioners. Social media, such as blogs, microblogs, podcasts, photo and video sharing sites have all become increasingly important to fashion public relations. The interactive nature of these platforms allows practitioners to engage and communicate with the public in real time, and tailor their clients' brand or campaign messages to the target audience. With blogging platforms such as Instagram, Tumblr, Wordpress, and other sharing sites, bloggers have emerged as expert fashion commentators, shaping brands and having a great impact on what is ‘on trend’. Women in the fashion public relations industry such as Sweaty Betty PR founder Roxy Jacenko and Oscar de la Renta's PR girl Erika Bearman have acquired copious followers on their social media sites, by providing a brand identity and a behind-the-scenes look into the companies they work for.
Social media is changing the way practitioners deliver messages, as they are concerned not only with the media but also with customer relationship building. PR practitioners must provide effective communication among all platforms, in order to engage the fashion public in an industry socially connected via online shopping. Consumers have the ability to share their purchases on their personal social media pages (such as Facebook, Twitter, Instagram, etc.), and if practitioners deliver the brand message effectively and meet the needs of its public, word-of-mouth publicity will be generated and potentially provide a wide reach for the designer and their products.
Anthropology, the study of culture and human societies, studies fashion by asking why certain styles are deemed socially appropriate and others are not. A certain way is chosen and that becomes the fashion as defined by a certain people as a whole, so if a particular style has a meaning in an already occurring set of beliefs that style will become fashion. According to Ted Polhemus and Lynn Procter, fashion can be described as adornment, of which there are two types: fashion and anti-fashion. Through the capitalization and commoditisation of clothing, accessories, and shoes, etc., what once constituted anti-fashion becomes part of fashion as the lines between fashion and anti-fashion are blurred.
The definition of fashion and anti-fashion is as follows: Anti-fashion is fixed and changes little over time. Anti-fashion is different depending on the cultural or social group one is associated with or where one lives, but within that group or locality the style changes little. Fashion is the exact opposite of anti-fashion. Fashion changes very quickly and is not affiliated with one group or area of the world but is spread out throughout the world wherever people can communicate easily with each other. For example, Queen Elizabeth II's 1953 coronation gown is an example of anti-fashion because it is traditional and does not change over any period whereas a gown from fashion designer Dior's collection of 1953 is fashion because the style will change every season as Dior comes up with a new gown to replace the old one. In the Dior gown the length, cut, fabric, and embroidery of the gown change from season to season. Anti-fashion is concerned with maintaining the status quo while fashion is concerned with social mobility. Time is expressed in terms of continuity in anti-fashion and as change in fashion. Fashion has changing modes of adornment while anti-fashion has fixed modes of adornment. Indigenous and peasant modes of adornment are an example of anti-fashion. Change in fashion is part of the larger system and is structured to be a deliberate change in style.
Today, people in rich countries are linked to people in poor countries through the commoditization and consumption of what is called fashion. People work long hours in one area of the globe to produce things that people in another part of the globe are anxious to consume. An example of this is the chain of production and consumption of Nike shoes, which are produced in Taiwan and then purchased in North America. At the production end, nation-building and a hard-working ideology lead people to produce, while at the consumption end a vast offering of goods entices people to consume. Commodities are no longer just utilitarian but are fashionable, be they running shoes or sweat suits.
The change from anti-fashion to fashion because of the influence of western consumer-driven civilization can be seen in eastern Indonesia. The ikat textiles of the Ngada area of eastern Indonesia are changing because of modernization and development. Traditionally, in the Ngada area there was no idea similar to that of the Western idea of fashion, but anti-fashion in the form of traditional textiles and ways to adorn oneself were widely popular. Textiles in Indonesia have played many roles for the local people. Textiles defined a person's rank and status; certain textiles indicated being part of the ruling class. People expressed their ethnic identity and social hierarchy through textiles. Because some Indonesians bartered ikat textiles for food, the textiles constituted economic goods, and as some textile design motifs had spiritual religious meanings, textiles were also a way to communicate religious messages.
In eastern Indonesia, both the production and use of traditional textiles have been transformed as the production, use and value associated with textiles have changed due to modernization. In the past, women produced the textiles either for home consumption or to trade with others. Today, this has changed as most textiles are not being produced at home. Western goods are considered modern and are valued more than traditional goods, including the sarong, which retains a lingering association with colonialism. Now, sarongs are used only for rituals and ceremonial occasions, whereas western clothes are worn to church or government offices. Civil servants working in urban areas are more likely than peasants to make the distinction between western and traditional clothes. Following Indonesia's independence from the Dutch, people increasingly started buying factory-made shirts and sarongs. In textile-producing areas, the growing of cotton and production of naturally colored thread became obsolete. Traditional motifs on textiles are no longer considered the property of a certain social class or age group. Wives of government officials are promoting the use of traditional textiles in the form of western garments such as skirts, vests and blouses. This trend is also being followed by the general populace, and whoever can afford to hire a tailor is doing so to stitch traditional ikat textiles into western clothes. Thus, traditional textiles are now fashion goods and are no longer confined to the black, white and brown colour palette but come in an array of colours. Traditional textiles are also being used in interior decorations and to make handbags, wallets and other accessories, which are considered fashionable by civil servants and their families. There is also a booming tourist trade in the eastern Indonesian city of Kupang where international as well as domestic tourists are eager to purchase traditionally printed western goods.
The use of traditional textiles for fashion is becoming big business in eastern Indonesia, but these traditional textiles are losing their ethnic identity markers and are being used as an item of fashion.
In the fashion industry, intellectual property is not enforced as it is within the film industry and music industry. Robert Glariston, an intellectual property expert, mentioned in a fashion seminar held in LA that "Copyright law regarding clothing is a current hot-button issue in the industry. We often have to draw the line between designers being inspired by a design and those outright stealing it in different places." To take inspiration from others' designs contributes to the fashion industry's ability to establish clothing trends. For the past few years, WGSN has been a dominant source of fashion news and forecasts in encouraging fashion brands worldwide to be inspired by one another. Enticing consumers to buy clothing by establishing new trends is, some have argued, a key component of the industry's success. Intellectual property rules that interfere with this process of trend-making would, in this view, be counter-productive. On the other hand, it is often argued that the blatant theft of new ideas, unique designs, and design details by larger companies is what often contributes to the failure of many smaller or independent design companies.
Since fakes are distinguishable by their poorer quality, there is still a demand for luxury goods, and as only a trademark or logo can be copyrighted, many fashion brands make this one of the most visible aspects of the garment or accessory. In handbags, especially, the designer's brand may be woven into the fabric (or the lining fabric) from which the bag is made, making the brand an intrinsic element of the bag.
In 2005, the World Intellectual Property Organization (WIPO) held a conference calling for stricter intellectual property enforcement within the fashion industry to better protect small and medium businesses and promote competitiveness within the textile and clothing industries.
There has been great debate about politics' place in fashion and traditionally, the fashion industry has maintained a rather apolitical stance. Considering the U.S.'s political climate in the surrounding months of the 2016 presidential election, during 2017 fashion weeks in London, Milan, New York, Paris and São Paulo amongst others, many designers took the opportunity to take political stances leveraging their platforms and influence to reach the masses.
Aiming to "amplify a greater message of unity, inclusion, diversity, and feminism in a fashion space", Mara Hoffman invited the founders of the "Women's March on Washington" to open her show which featured modern silhouettes of utilitarian wear, described by critics as "Made for a modern warrior" and "Clothing for those who still have work to do". Prabal Gurung debuted his collection of T-shirts featuring slogans such as "The Future is Female", "We Will Not Be Silenced", and "Nevertheless She Persisted", with proceeds going to the ACLU, Planned Parenthood, and Gurung's own charity, "Shikshya Foundation Nepal". Similarly, "The Business of Fashion" launched the "#TiedTogether" movement on Social Media, encouraging member of the industry from editors to models, to wear a white bandana advocating for "unity, solidarity, and inclusiveness during fashion week".
Fashion may be used to promote a cause, such as to promote healthy behavior, to raise money for a cancer cure, or to raise money for local charities such as the Juvenile Protective Association or a children's hospice.
One fashion cause is trashion, which is using trash to make clothes, jewelry, and other fashion items in order to promote awareness of pollution. There are a number of modern trashion artists such as Marina DeBris, Ann Wizer, and Nancy Judd.
African-Americans have used fashion through the years to express themselves and their ideas, and it has grown and developed with time. African-American influencers often have been known to start trends through modern-day social media, and even in past years they have been able to reach others with their fashion and style.
Celebrities like Rihanna, Lupita Nyong'o, Zendaya, and Michelle Obama have been a few of the many fashion idols in the black female community. For men, Pharrell Williams, Kanye West, and Ice Cube have also helped define modern day fashion for black men. Today's fashion scene is not just clothes, but also hair and makeup. Recent trends have included the embracing of natural hair, traditional clothing worn with modern clothing, or traditional patterns used in modern clothing styles. All of these trends come with the long existing and persevering movement of "Black is Beautiful".
From the mid-1900s onward, African American style changed and developed with the times. It was around the 1950s that the black community was able to create its own distinct styles. The term "Sunday attire" was coined, and communities emphasized "correct" dress; it was especially important when "stepping out" for social occasions with community members, a habit that continues in the early 2000s. Hairdos and hairstyles also became a fashion statement, for example the "conk", hair that is slightly flattened and waved. Afros also emerged, and they were often used to symbolize the rejection of white beauty standards of the time. Around the 1970s, flashy costumes began to appear and black artists really started to define their presence through fashion. Around this time, movements also started using fashion as one of their outlets.
Black activists and supporters used fashion to express their solidarity and support of the civil rights movement. Supporters adorned themselves with symbolic clothing, accessories and hairstyles, usually native to Africa. Politics and fashion were fused together during this time, and the use of these symbolic fashion statements sent a message to America and the rest of the world that African Americans were proud of their heritage. They aimed to send an even stronger message that black is beautiful and that they were not afraid to embrace their identities. An example is the Kente cloth, brightly colored strips of fabric stitched and woven together to create different accessories, which became a strong symbolic representation of pride in African identity for African Americans of the 1960s and later. It was developed into the dashiki, a flowing, loose-fitting, tunic-style shirt that became one of the most recognizable symbols of the era.
The Black Panther Party (BPP) was an essential piece of the Black Power movement that allowed its members to advocate for African Americans on subjects like equality and politics. The BPP members wore a very distinctive uniform: a black leather jacket, black pants, a light blue shirt, a black beret, an afro, dark sunglasses, and usually a fist in the air. Their image gave off a militant feel. This notable uniform was established in 1966, though a simpler version, just the sunglasses and leather jackets, was in place before. Each member wore the uniform at events, rallies, and in their day-to-day lives. Very few members changed the essential parts of the outfit, but some added personal touches such as necklaces or other jewelry, usually drawn from African culture. The Black Panther uniform succeeded in intimidating enemies and onlookers and clearly sent a message of black pride and power, even though the initial intention was to communicate solidarity among Black Panther Party members.
Since the 1970s, fashion models of color, especially black men and women, have experienced an increase in discrimination in the fashion industry. In the years from 1970 to 1990, black designers and models were very successful, but as the 1990s came to an end, the fashion aesthetic changed and it did not include black models or designers. In today's fashion, black models, influencers, and designers account for one of the smallest percentages of the industry. There are many theories about this lack of diversity: it has been attributed to the economic differences usually associated with race and class, to the differences in arts schooling available to mostly black populated schools, and to blatant racism.
A report from New York Fashion Week (Spring 2015) found that while 79.69% of models on the runway were white, only 9.75% were black, 7.67% were Asian, and 2.12% were Latina. The lack of diversity extends beyond models to designers: of the four hundred and seventy members of the Council of Fashion Designers of America (CFDA), only twelve are black. The same study of New York Fashion Week showed that only 2.7% of the 260 designers presented were black men, and an even smaller percentage were black female designers. Even the relationship between independent designers and retailers shows the racial gap, with only 1% of designers stocked at department stores being people of color. It was also found that in editorial spreads, over eighty percent of models pictured were white and only nine percent were black. These numbers have stayed stagnant over the past few years.
Many fashion designers have come under fire over the years for what is known as tokenism. Designers or editors will add one or two members of an underrepresented group to help them appear inclusive and diverse, and to give the illusion of equality. This tokenism helps designers avoid accusations of racism, sexism, body shaming, and the like.
There are many examples of cultural appropriation in fashion. In many instances, designers can be found using aspects of culture inappropriately, in most cases taking traditional clothing from Middle Eastern, African, and Hispanic cultures and adding it to their runway fashion. For example, in a 2018 Gucci runway show, white models wore Sikh headdresses, causing considerable backlash. Victoria's Secret also came under fire for putting traditional Native American headdresses on its models during a lingerie runway show. Marc Jacobs sent models down the runway sporting dreadlocks in his spring 2017 New York Fashion Week show, also facing immense criticism.
Fourier analysis
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
Today, the subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term "Fourier analysis" often refers to the study of both operations.
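This analysis–synthesis round trip can be sketched with NumPy's FFT routines. The 440 Hz "note", its harmonic, and the sampling rate are arbitrary illustrative choices:

```python
import numpy as np

fs = 8000                     # sampling rate (Hz)
t = np.arange(fs) / fs        # one second of samples
# A synthetic "note": 440 Hz fundamental plus a quieter octave harmonic.
note = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

# Fourier analysis: decompose into frequency components.
spectrum = np.fft.rfft(note)
freqs = np.fft.rfftfreq(note.size, d=1 / fs)

# The two strongest components sit at the frequencies we put in.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                 # -> [440.0, 880.0]

# Fourier synthesis: rebuild the signal from its components.
rebuilt = np.fft.irfft(spectrum, n=note.size)
print(np.allclose(rebuilt, note))    # -> True
```

Because the signal here is exactly periodic over the analyzed second, its energy lands in just two frequency bins; real recordings spread energy across neighbouring bins.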
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
- The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem).
- The transforms are invertible, and the inverse transform has almost the same form as the forward transform.
- The complex exponential basis functions are eigenfunctions of differentiation, so the transforms turn linear differential equations with constant coefficients into ordinary algebraic ones.
- By the convolution theorem, the transforms turn the complicated convolution operation into simple multiplication, which makes them an efficient way to compute convolution-based operations such as signal filtering and polynomial multiplication.
- The discrete version of the Fourier transform can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis for measuring the wavelengths of light at which a material will absorb in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. By using a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument.
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
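The idea can be sketched with SciPy's `dctn`/`idctn` on a single 8×8 block. The block data and the simple magnitude threshold below are illustrative stand-ins, not JPEG's actual quantization tables:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# Stand-in for one 8x8 block of image values: a smooth gradient plus faint noise.
block = np.add.outer(np.arange(8.0), np.arange(8.0))
block += 0.1 * rng.standard_normal((8, 8))

coeffs = dctn(block, norm='ortho')                   # 2-D DCT-II, as used by JPEG
kept = np.where(np.abs(coeffs) < 0.5, 0.0, coeffs)   # drop weak components

n_stored = np.count_nonzero(kept)                    # far fewer than 64 numbers to store
approx = idctn(kept, norm='ortho')                   # approximate reconstruction

print(n_stored, float(np.max(np.abs(approx - block))))
```

For smooth image content the DCT concentrates energy in a few coefficients, so most can be discarded with only a small reconstruction error.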
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consists of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
Some examples include: equalization of audio recordings with a series of bandpass filters; digital radio reception without a superheterodyne circuit; image processing to remove periodic artifacts; cross correlation of similar images for co-alignment; and X-ray crystallography to reconstruct a crystal structure from its diffraction pattern.
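One such transform–manipulate–invert scheme, removing a narrowband 60 Hz hum by zeroing its FFT bins, might look like this in NumPy (the signal and the frequencies are illustrative):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 5 * t)                  # the wanted signal
noisy = clean + 0.8 * np.sin(2 * np.pi * 60 * t)   # plus 60 Hz hum

# Transform, zero out the narrowband component, transform back.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(noisy.size, d=1 / fs)
spectrum[np.abs(freqs - 60) < 1] = 0               # notch at 60 Hz
filtered = np.fft.irfft(spectrum, n=noisy.size)

print(np.max(np.abs(filtered - clean)) < 1e-9)     # -> True
```

Because the interference here occupies a single frequency bin, zeroing it recovers the wanted signal essentially exactly; in practice the hum straddles several bins and a wider notch is used.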
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a "frequency distribution". One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t), and the domain of the output (final) function is ordinary frequency f, the transform of function s(t) at frequency f is given by the complex number:

S(f) = \int_{-\infty}^{\infty} s(t)\, e^{-i 2\pi f t}\, dt.
Evaluating this quantity for all values of f produces the "frequency-domain" function S(f). Then s(t) can be represented as a recombination of complex exponentials of all possible frequencies:

s(t) = \int_{-\infty}^{\infty} S(f)\, e^{i 2\pi f t}\, df,

which is the inverse transform formula. The complex number S(f) conveys both amplitude and phase of frequency f.
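The transform pair can be checked numerically. The Gaussian e^{-πt²} is its own Fourier transform, so approximating the transform integral by a Riemann sum over a wide grid should reproduce it (a sketch; the grid size and tolerance are arbitrary):

```python
import numpy as np

# s(t) = exp(-pi t^2) is its own Fourier transform: S(f) = exp(-pi f^2).
t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
s = np.exp(-np.pi * t**2)

def fourier(f):
    # S(f) = integral of s(t) exp(-i 2 pi f t) dt, as a Riemann sum
    return np.sum(s * np.exp(-2j * np.pi * f * t)) * dt

for f in (0.0, 0.5, 1.0):
    print(abs(fourier(f) - np.exp(-np.pi * f**2)) < 1e-8)   # -> True, three times
```

Because the integrand is smooth and has decayed to essentially zero at the grid edges, the simple Riemann sum is extremely accurate here.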
See Fourier transform for much more information.
The Fourier transform of a periodic function, s_P(t), with period P, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

S[k] = \frac{1}{P} \int_P s_P(t)\, e^{-i 2\pi \frac{k}{P} t}\, dt

for all integer values of k, and where \int_P denotes the integral over any interval of length P.
The inverse transform, known as Fourier series, is a representation of s_P(t) in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:

s_P(t) = \sum_{k=-\infty}^{\infty} S[k]\, e^{i 2\pi \frac{k}{P} t}.
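As a concrete check, the coefficients of a square wave of period P can be approximated by a discrete average over one period, and a truncated sum of harmonics already resembles the wave. The reference values -2i/(πk) for odd k are the standard square-wave coefficients; grid size and tolerances are arbitrary:

```python
import numpy as np

P = 1.0
N = 4096
t = (np.arange(N) + 0.5) * P / N            # midpoint samples avoid the jumps
s = np.sign(np.sin(2 * np.pi * t / P))      # square wave with period P

def coeff(k):
    # S[k] ~ (1/P) * integral over one period of s(t) exp(-i 2 pi k t / P) dt
    return np.mean(s * np.exp(-2j * np.pi * k * t / P))

# Odd harmonics approach -2i/(pi k); even harmonics vanish.
print(abs(coeff(1) - (-2j / np.pi)) < 1e-3)     # -> True
print(abs(coeff(2)) < 1e-10)                    # -> True

# A truncated resynthesis from harmonics -9..9 already approximates the wave.
approx = sum(coeff(k) * np.exp(2j * np.pi * k * t / P) for k in range(-9, 10))
print(float(np.mean((approx.real - s) ** 2)) < 0.1)   # -> True
```

The residual error of the truncated sum is concentrated near the discontinuities (the Gibbs phenomenon) and shrinks as more harmonics are included.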
When s_P(t) is expressed as a periodic summation of another function, s(t):

s_P(t) \triangleq \sum_{m=-\infty}^{\infty} s(t - mP),

the coefficients are proportional to samples of S(f) at discrete intervals of 1/P:

S[k] = \frac{1}{P}\, S\!\left(\frac{k}{P}\right).
A sufficient condition for recovering s(t) (and therefore S(f)) from just these samples (i.e. from the Fourier series) is that the non-zero portion of s(t) be confined to a known interval of duration P, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
See Fourier series for more information, including the historical development.
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series whose coefficients are samples of a related continuous time function:

S_{1/T}(f) \triangleq \sum_{k=-\infty}^{\infty} S\!\left(f - \frac{k}{T}\right) \equiv \sum_{n=-\infty}^{\infty} s[n]\, e^{-i 2\pi f n T},

which is known as the DTFT. Thus the DTFT of the s[n] sequence is also the Fourier transform of the modulated Dirac comb function.
The Fourier series coefficients (and inverse transform) are defined by:

s[n] \triangleq T \int_{1/T} S_{1/T}(f)\, e^{i 2\pi f n T}\, df = T\, s(nT).

Parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, s[n], is proportional to samples of an underlying continuous function, s(t), one can observe a periodic summation of the continuous Fourier transform, S(f). That is a cornerstone in the foundation of digital signal processing. Furthermore, under certain idealized conditions one can theoretically recover S(f) and s(t) exactly. A sufficient condition for perfect recovery is that the non-zero portion of S(f) be confined to a known frequency interval of width 1/T. When that interval is [-1/(2T), 1/(2T)], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula.
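This sampling result can be observed numerically: T times the DTFT-style sum over samples of a Gaussian matches the continuous transform at the frequencies k/(NT), because the aliased spectral copies are negligible when the spectrum is concentrated well inside the width 1/T (all parameters below are illustrative):

```python
import numpy as np

T = 0.05                       # sampling interval
N = 400                        # number of samples
n = np.arange(-N // 2, N // 2)
t = n * T                      # samples cover t in [-10, 10)
s = np.exp(-np.pi * t**2)      # s(t), whose true transform is S(f) = exp(-pi f^2)

# T times the DTFT sum over the samples approximates S(f) at f = k/(N T),
# because sampling periodically sums the true spectrum (Poisson summation).
f = np.fft.fftfreq(N, d=T)
dtft = T * np.sum(s[None, :] * np.exp(-2j * np.pi * np.outer(f, t)), axis=1)

print(np.max(np.abs(dtft - np.exp(-np.pi * f**2))) < 1e-8)   # -> True
```

The explicit sum is used instead of `np.fft.fft` so that the negative-time samples need no index shifting; an FFT-based version would apply a phase correction instead.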
Another reason to be interested in S_{1/T}(f) is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics.
Similar to a Fourier series, the DTFT of a periodic sequence, s_N[n], with period N, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

S[k] = \sum_n s_N[n]\, e^{-i 2\pi \frac{k}{N} n},

where the sum is taken over any set of n values of length N.
The S[k] sequence is what is customarily known as the DFT of s_N. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform is given by:

s_N[n] = \frac{1}{N} \sum_k S[k]\, e^{i 2\pi \frac{n}{N} k},

where the sum is taken over any set of k values of length N.
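The forward/inverse pair and the N-periodicity of the coefficients can be verified directly from the defining sums (a small NumPy sketch with arbitrary data):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
n = np.arange(N)
s = rng.standard_normal(N)                     # one period of a sequence

# Forward DFT: S[k] = sum_n s[n] * exp(-i 2 pi k n / N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
S = W @ s

# Inverse: s[n] = (1/N) * sum_k S[k] * exp(+i 2 pi k n / N)
s_back = (W.conj() @ S) / N
print(np.allclose(s_back, s))                  # -> True

# The coefficients are N-periodic, so only N of them are ever needed.
S19 = np.sum(s * np.exp(-2j * np.pi * 19 * n / N))
print(np.allclose(S19, S[3]))                  # -> True, since 19 mod 16 == 3
```

The matrix W is the DFT matrix; conjugating it and dividing by N gives the inverse, reflecting the near-symmetry of the two formulas.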
When s_N[n] is expressed as a periodic summation of another function:

s_N[n] \triangleq \sum_{m=-\infty}^{\infty} s[n - mN], with s[n] \triangleq T\, s(nT),

the coefficients are proportional to samples of S_{1/T}(f) at discrete intervals of 1/P = 1/(NT):

S[k] = \frac{1}{T}\, S_{1/T}\!\left(\frac{k}{NT}\right).
Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, S_{1/T}(f), it can be done by computing the relatively simple DFT of s_N[n], as defined above. In most cases, N is chosen equal to the length of the non-zero portion of s[n]. Increasing N, known as "zero-padding" or "interpolation", results in more closely spaced samples of one cycle of S_{1/T}(f). Decreasing N causes overlap (adding) in the time domain (analogous to aliasing), which corresponds to decimation in the frequency domain. In most cases of practical interest, the s[n] sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
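Zero-padding as DTFT interpolation is easy to see numerically: a zero-padded DFT samples the same underlying DTFT on a finer grid, so every 8th sample of an 8×-padded transform reproduces the original DFT (sizes below are illustrative):

```python
import numpy as np

N = 8
n = np.arange(N)
s = np.cos(2 * np.pi * 1.3 * n / N)        # short sequence, non-integer frequency

dense = np.fft.fft(s, 8 * N)               # zero-padded: 8x more closely spaced samples

# The padded DFT samples the same underlying DTFT, just more finely:
# every 8th sample of the dense grid is the original N-point DFT.
print(np.allclose(dense[::8], np.fft.fft(s)))   # -> True
```

No new information is created by padding; the extra samples merely interpolate the continuous DTFT between the original frequency bins.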
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
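That an FFT returns exactly the values of the defining DFT sum, only computed in O(N log N) rather than O(N²) operations, can be confirmed on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Defining O(N^2) matrix-vector sum versus the O(N log N) FFT: identical results.
n = np.arange(N)
naive = np.exp(-2j * np.pi * np.outer(n, n) / N) @ s
print(np.allclose(naive, np.fft.fft(s)))    # -> True
```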
See Discrete Fourier transform for much more information.
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, P or N. But these formulas do not require that condition.
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:

s(t) = s_RE(t) + s_RO(t) + i s_IE(t) + i s_IO(t)
S(f) = S_RE(f) + i S_IO(f) + i S_IE(f) + S_RO(f)

with the terms corresponding in the order written. From this, various relationships are apparent, for example: the transform of a real-valued function (s_RE + s_RO) is the conjugate-symmetric function S_RE + i S_IO, and conversely, a conjugate-symmetric transform implies a real-valued time function.
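Two of these symmetry relationships are easy to confirm numerically: a real input yields a conjugate-symmetric transform, and a real, even input yields a purely real transform (arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
s = rng.standard_normal(N)                      # real-valued time function

S = np.fft.fft(s)
# Real input  =>  conjugate-symmetric transform: S[-k] == conj(S[k]).
print(np.allclose(S[1:][::-1], np.conj(S[1:])))          # -> True

# Real and even input  =>  purely real transform.
even = s + np.roll(s[::-1], 1)                  # even[n] == even[(-n) % N]
print(np.allclose(np.fft.fft(even).imag, 0.0))           # -> True
```

The `np.roll` trick builds a sequence satisfying the DFT's circular notion of evenness, s[n] = s[N - n].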
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets, even discrete cosets.
In signal processing terms, a function (of time) is a representation of a signal with perfect "time resolution", but no frequency information, while the Fourier transform has perfect "frequency resolution", but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
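The short-time Fourier transform makes this trade-off concrete: applied to a chirp whose frequency rises over time, it localizes the dominant frequency within each time slice (a sketch using SciPy's `stft`; the signal parameters are arbitrary):

```python
import numpy as np
from scipy.signal import stft

fs = 1000
t = np.arange(2 * fs) / fs
# A chirp: instantaneous frequency rises from ~50 Hz to ~200 Hz over 2 s.
x = np.sin(2 * np.pi * (50 * t + 37.5 * t**2))

f, times, Z = stft(x, fs=fs, nperseg=256)
peak = f[np.abs(Z).argmax(axis=0)]     # dominant frequency in each time slice

print(peak[1] < peak[-2])              # -> True: the detected frequency rises over time
```

A plain Fourier transform of the whole chirp would show energy smeared across 50–200 Hz with no indication of *when* each frequency occurs; the STFT recovers that timing at the cost of coarser frequency resolution.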
A primitive form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).
The classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle: Mathematical formalism).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.
Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.
An early modern development toward Fourier analysis was the 1770 paper "Réflexions sur la résolution algébrique des équations" by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:
Lagrange transformed the roots into the resolvents:
where is a cubic root of unity, which is the DFT of order 3.
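The correspondence between Lagrange's resolvents and the order-3 DFT can be verified directly. In this sketch the three "roots" are arbitrary illustrative values, and the resolvents are formed with a primitive cube root of unity in the sign convention NumPy's FFT uses.

```python
import numpy as np

# Lagrange's resolvents as a DFT of order 3: with w a primitive cube root
# of unity, the resolvents of the roots (r1, r2, r3) are exactly their
# 3-point discrete Fourier transform.
w = np.exp(-2j * np.pi / 3)              # cube root of unity (FFT convention)
roots = np.array([2.0, -1.0, 3.0])       # hypothetical roots, for illustration

resolvents = np.array([
    roots[0] + roots[1] + roots[2],
    roots[0] + w * roots[1] + w**2 * roots[2],
    roots[0] + w**2 * roots[1] + w**4 * roots[2],
])

assert np.allclose(resolvents, np.fft.fft(roots))
```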
A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper "Mémoire sur la propagation de la chaleur dans les corps solides" by Joseph Fourier, whose crucial insight was to model "all" functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.
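The divide-and-conquer idea behind the Gauss/Cooley–Tukey algorithm fits in a few lines. This is a minimal radix-2 sketch (input length must be a power of two), checked against NumPy's library FFT rather than offered as a production implementation.

```python
import numpy as np

# Minimal recursive radix-2 Cooley-Tukey FFT: split into even- and
# odd-indexed halves, transform each recursively, recombine with twiddles.
def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])           # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])            # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

The recursion reduces the naive O(n²) DFT cost to O(n log n), which is what made the transform practical for large data sets.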
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When the function is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function at frequency represents the amplitude of a frequency component whose initial phase is given by the phase of .
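Recovering an amplitude and initial phase from a transform bin can be shown concretely. The sinusoid's parameters below (amplitude 1.5, 5 cycles per record, phase π/4) are assumptions chosen for the demonstration.

```python
import numpy as np

# For a real sinusoid amp*cos(2*pi*f*n/N + phase), DFT bin f holds
# (N/2) * amp * exp(i*phase): its magnitude gives the amplitude and its
# argument gives the initial phase.
N = 64
n = np.arange(N)
amp, freq, phase = 1.5, 5, np.pi / 4     # assumed signal parameters
s = amp * np.cos(2 * np.pi * freq * n / N + phase)

S = np.fft.fft(s)
assert np.isclose(abs(S[freq]) * 2 / N, amp)     # amplitude from magnitude
assert np.isclose(np.angle(S[freq]), phase)      # initial phase from argument
```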
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze "spatial" frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
Fat Man
"Fat Man" was the codename for the nuclear bomb that was detonated over the Japanese city of Nagasaki by the United States on 9 August 1945. It was the second of the only two nuclear weapons ever used in warfare, the first being Little Boy, and its detonation marked the third nuclear explosion in history. It was built by scientists and engineers at Los Alamos Laboratory using plutonium from the Hanford Site, and it was dropped from the Boeing B-29 Superfortress "Bockscar" piloted by Major Charles Sweeney.
The name Fat Man refers to the early design of the bomb because it had a wide, round shape; it was also known as the Mark III. Fat Man was an implosion-type nuclear weapon with a solid plutonium core. The first of that type to be detonated was the Gadget in the Trinity nuclear test less than a month earlier on 16 July at the Alamogordo Bombing and Gunnery Range in New Mexico. Two more were detonated during the Operation Crossroads nuclear tests at Bikini Atoll in 1946, and some 120 were produced between 1947 and 1949, when it was superseded by the Mark 4 nuclear bomb. The Fat Man was retired in 1950.
Robert Oppenheimer held conferences in Chicago in June 1942, prior to the Army taking over wartime atomic research, and in Berkeley, California, in July, at which various engineers and physicists discussed nuclear bomb design issues. They chose a gun-type design in which two sub-critical masses would be brought together by firing a "bullet" into a "target". Richard C. Tolman suggested an implosion-type nuclear weapon, but the proposal attracted little interest.
The feasibility of a plutonium bomb was questioned in 1942. Wallace Akers, the director of the British "Tube Alloys" project, told James Bryant Conant on 14 November that James Chadwick had "concluded that plutonium might not be a practical fissionable material for weapons because of impurities." Conant consulted Ernest Lawrence and Arthur Compton, who acknowledged that their scientists at Berkeley and Chicago respectively knew about the problem, but they could offer no ready solution. Conant informed Manhattan Project director Brigadier General Leslie R. Groves Jr., who in turn assembled a special committee consisting of Lawrence, Compton, Oppenheimer, and McMillan to examine the issue. The committee concluded that any problems could be overcome simply by requiring higher purity.
Oppenheimer reviewed his options in early 1943 and gave priority to the gun-type weapon, but he created the E-5 Group at the Los Alamos Laboratory under Seth Neddermeyer to investigate implosion as a hedge against the threat of pre-detonation. Implosion-type bombs were determined to be significantly more efficient in terms of explosive yield per unit mass of fissile material in the bomb, because compressed fissile materials react more rapidly and therefore more completely. Nonetheless, it was decided that the plutonium gun would receive the bulk of the research effort, since it was the project with the least amount of uncertainty involved. It was assumed that the uranium gun-type bomb could be easily adapted from it.
The gun-type and implosion-type designs were codenamed "Thin Man" and "Fat Man" respectively. These code names were created by Robert Serber, a former student of Oppenheimer's who worked on the Manhattan Project. He chose them based on their design shapes; the Thin Man was a very long device, and the name came from the Dashiell Hammett detective novel "The Thin Man" and series of movies. The Fat Man was round and fat and was named after Sydney Greenstreet's character in Hammett's "The Maltese Falcon". Little Boy came last as a variation of Thin Man.
Neddermeyer discarded Serber and Tolman's initial concept of implosion as assembling a series of pieces in favor of one in which a hollow sphere was imploded by an explosive shell. He was assisted in this work by Hugh Bradner, Charles Critchfield, and John Streib. L. T. E. Thompson was brought in as a consultant, and discussed the problem with Neddermeyer in June 1943. Thompson was skeptical that an implosion could be made sufficiently symmetric. Oppenheimer arranged for Neddermeyer and Edwin McMillan to visit the National Defense Research Committee's Explosives Research Laboratory near the laboratories of the Bureau of Mines in Bruceton, Pennsylvania (a Pittsburgh suburb), where they spoke to George Kistiakowsky and his team. But Neddermeyer's efforts in July and August at imploding tubes to produce cylinders tended to produce objects that resembled rocks. Neddermeyer was the only person who believed that implosion was practical, and only his enthusiasm kept the project alive.
Oppenheimer brought John von Neumann to Los Alamos in September 1943 to take a fresh look at implosion. After reviewing Neddermeyer's studies, and discussing the matter with Edward Teller, von Neumann suggested the use of high explosives in shaped charges to implode a sphere, which he showed could not only result in a faster assembly of fissile material than was possible with the gun method, but which could greatly reduce the amount of material required, because of the resulting higher density. The idea that, under such pressures, the plutonium metal itself would be compressed came from Teller, whose knowledge of how dense metals behaved under heavy pressure was influenced by his pre-war theoretical studies of the Earth's core with George Gamow. The prospect of more-efficient nuclear weapons impressed Oppenheimer, Teller, and Hans Bethe, but they decided that an expert on explosives would be required. Kistiakowsky's name was immediately suggested, and Kistiakowsky was brought into the project as a consultant in October 1943.
The implosion project remained a backup until April 1944, when experiments by Emilio G. Segrè and his P-5 Group at Los Alamos on the newly reactor-produced plutonium from the X-10 Graphite Reactor at Oak Ridge and the B Reactor at the Hanford site showed that it contained impurities in the form of the isotope plutonium-240. This has a far higher spontaneous fission rate and radioactivity than plutonium-239. The cyclotron-produced isotopes, on which the original measurements had been made, held much lower traces of plutonium-240. Its inclusion in reactor-bred plutonium appeared unavoidable. This meant that the spontaneous fission rate of the reactor plutonium was so high that it would be highly likely that it would predetonate and blow itself apart during the initial formation of a critical mass. The distance required to accelerate the plutonium to speeds where predetonation would be less likely would need a gun barrel too long for any existing or planned bomber. The only way to use plutonium in a workable bomb was therefore implosion.
The impracticability of a gun-type bomb using plutonium was agreed at a meeting in Los Alamos on 17 July 1944. All gun-type work in the Manhattan Project was directed at the Little Boy, enriched-uranium gun design, and the Los Alamos Laboratory was reorganized, with almost all of the research focused on the problems of implosion for the Fat Man bomb. The idea of using shaped charges as three-dimensional explosive lenses came from James L. Tuck, and was developed by von Neumann. To overcome the difficulty of synchronizing multiple detonations, Luis Alvarez and Lawrence Johnston invented exploding-bridgewire detonators to replace the less precise primacord detonation system. Robert Christy is credited with doing the calculations that showed how a solid subcritical sphere of plutonium could be compressed to a critical state, greatly simplifying the task, since earlier efforts had attempted the more-difficult compression of a hollow spherical shell. After Christy's report, the solid-plutonium core weapon was referred to as the "Christy Gadget".
The task of the metallurgists was to determine how to cast plutonium into a sphere. The difficulties became apparent when attempts to measure the density of plutonium gave inconsistent results. At first contamination was believed to be the cause, but it was soon determined that there were multiple allotropes of plutonium. The brittle α phase that exists at room temperature changes to the plastic β phase at higher temperatures. Attention then shifted to the even more malleable δ phase that normally exists in the range. It was found that this was stable at room temperature when alloyed with aluminum, but aluminum emits neutrons when bombarded with alpha particles, which would exacerbate the pre-ignition problem. The metallurgists then hit upon a plutonium–gallium alloy, which stabilized the δ phase and could be hot pressed into the desired spherical shape. As plutonium was found to corrode readily, the sphere was coated with nickel.
The size of the bomb was constrained by the available aircraft. The only Allied aircraft considered capable of carrying the Fat Man without major modification were the British Avro Lancaster and the American Boeing B-29 Superfortress. At the time, the B-29 represented the epitome of bomber technology with significant advantages in maximum takeoff weight (MTOW), range, speed, flight ceiling, and survivability. Without the availability of the B-29, dropping the bomb would likely have been impossible. However, this still constrained the bomb to a maximum length of , width of and weight of . Removing the bomb rails allowed a maximum width of .
Drop tests began in March 1944, and resulted in modifications to the Silverplate aircraft due to the weight of the bomb. High-speed photographs revealed that the tail fins folded under the pressure, resulting in an erratic descent. Various combinations of stabilizer boxes and fins were tested on the Fat Man shape to eliminate its persistent wobble until an arrangement dubbed a "California Parachute", a cubical open-rear tail box outer surface with eight radial fins inside of it, four angled at 45° and four perpendicular to the line of fall holding the outer square-fin box to the bomb's rear end, was approved. In early drop tests, the Fat Man missed its target by an average of , but this was halved by June as the bombardiers became more proficient with it.
The early Y-1222 model Fat Man was assembled with some 1,500 bolts. This was superseded by the Y-1291 design in December 1944. This redesign work was substantial, and only the Y-1222 tail design was retained. Later versions included the Y-1560, which had 72 detonators; the Y-1561, which had 32; and the Y-1562, which had 132. There were also the Y-1563 and Y-1564, which were practice bombs with no detonators at all. The final wartime Y-1561 design was assembled with just 90 bolts.
On 16 July 1945, a Y-1561 model Fat Man, known as the Gadget, was detonated in a test explosion at a remote site in New Mexico, known as the "Trinity" test. It gave a yield of about . Some minor changes were made to the design as a result of the Trinity test. Philip Morrison recalled that "There were some changes of importance... The fundamental thing was, of course, very much the same."
The bomb was long and in diameter. It weighed .
The plutonium pit was in diameter and contained an "Urchin" modulated neutron initiator that was in diameter. The depleted uranium tamper was an diameter sphere, surrounded by a thick shell of boron-impregnated plastic. The plastic shell had a diameter cylindrical hole running through it, like the hole in a cored apple, in order to allow insertion of the pit as late as possible. The missing tamper cylinder containing the pit could be slipped in through a hole in the surrounding diameter aluminum pusher. The pit was warm to the touch, emitting 2.4 W/kg-Pu, about 15 W for the core.
The explosion symmetrically compressed the plutonium to twice its normal density before the "Urchin" added free neutrons to initiate a fission chain reaction.
The result was the fission of about of the of plutonium in the pit, i.e. about 16% of the fissile material present. of matter in the bomb was converted into the active energy of heat and radiation, releasing energy equivalent to the detonation of .
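The mass-to-energy conversion described above is a direct application of E = mc². The figure used below (roughly one gram of matter converted) is an assumption chosen for illustration, not a value taken from the text.

```python
# Back-of-envelope sketch of mass-energy conversion via E = m * c^2.
# The converted mass is hypothetical (~1 gram, for illustration only).
C = 2.998e8                      # speed of light, m/s
KT_TNT = 4.184e12                # joules per kiloton of TNT equivalent

mass_converted_kg = 1.0e-3       # assumed: about 1 gram of matter
energy_j = mass_converted_kg * C ** 2
yield_kt = energy_j / KT_TNT
print(round(yield_kt, 1))        # -> 21.5 (kilotons of TNT equivalent)
```

Even a gram-scale mass defect yields an explosion on the order of twenty kilotons, which is why only a small fraction of the fissile material needs to react.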
The first plutonium core was transported with its polonium-beryllium modulated neutron initiator in the custody of Project Alberta courier Raemer Schreiber in a magnesium field carrying case designed for the purpose by Philip Morrison. Magnesium was chosen because it does not act as a tamper. It left Kirtland Army Air Field on a C-54 transport aircraft of the 509th Composite Group's 320th Troop Carrier Squadron on 26 July and arrived at North Field on Tinian on 28 July. Three Fat Man high-explosive pre-assemblies (designated F31, F32, and F33) were picked up at Kirtland on 28 July by three B-29s: "Luke the Spook" and "Laggin' Dragon" from the 509th Composite Group's 393d Bombardment Squadron, and another from the 216th Army Air Forces Base Unit. The cores were transported to North Field, arriving on 2 August, when F31 was partly disassembled in order to check all its components. F33 was expended near Tinian during a final rehearsal on 8 August. F32 presumably would have been used for a third attack or its rehearsal.
On 7 August, the day after the bombing of Hiroshima, Rear Admiral William R. Purnell, Commodore William S. Parsons, Tibbets, General Carl Spaatz and Major General Curtis LeMay met on Guam to discuss what should be done next. Since there was no indication of Japan surrendering, they decided to proceed with their orders and drop another bomb. Parsons said that Project Alberta would have it ready by 11 August, but Tibbets pointed to weather reports indicating poor flying conditions on that day due to a storm and asked if the bomb could be made ready by 9 August. Parsons agreed to try to do so.
Fat Man F31 was assembled on Tinian by Project Alberta personnel, and the physics package was fully assembled and wired. It was placed inside its ellipsoidal aerodynamic bombshell and wheeled out, where it was signed by nearly 60 people, including Purnell, Brigadier General Thomas F. Farrell, and Parsons. It was then wheeled to the bomb bay of the B-29 Superfortress named "Bockscar" after the plane's command pilot Captain Frederick C. Bock, who flew "The Great Artiste" with his crew on the mission. "Bockscar" was flown by Major Charles W. Sweeney and his crew, with Commander Frederick L. Ashworth from Project Alberta as the weaponeer in charge of the bomb.
"Bockscar" lifted off at 03:47 on the morning of 9 August 1945, with Kokura as the primary target and Nagasaki the secondary target. The weapon was already armed, but with the green electrical safety plugs still engaged. Ashworth changed them to red after ten minutes so that Sweeney could climb to in order to get above storm clouds. During the pre-flight inspection of "Bockscar", the flight engineer notified Sweeney that an inoperative fuel transfer pump made it impossible to use of fuel carried in a reserve tank. This fuel would still have to be carried all the way to Japan and back, consuming still more fuel. Replacing the pump would take hours; moving the Fat Man to another aircraft might take just as long and was dangerous as well, as the bomb was live. Colonel Paul Tibbets and Sweeney therefore elected to have "Bockscar" continue the mission.
The target for the bomb was the city of Kokura, but it was found to be obscured by clouds and drifting smoke from fires started by a major firebombing raid by 224 B-29s on nearby Yahata the previous day. This covered 70% of the area over Kokura, obscuring the aiming point. Three bomb runs were made over the next 50 minutes, burning fuel and repeatedly exposing the aircraft to the heavy defenses of Yahata, but the bombardier was unable to drop visually. By the time of the third bomb run, Japanese anti-aircraft fire was getting close; Second Lieutenant Jacob Beser was monitoring Japanese communications, and he reported activity on the Japanese fighter direction radio bands.
Sweeney then proceeded to the alternative target of Nagasaki. It was obscured by cloud, as well, and Ashworth ordered Sweeney to make a radar approach. At the last minute, however, bombardier Captain Kermit K. Beahan found a hole in the clouds. The Fat Man was dropped and exploded at 11:02 local time, following a 43-second free-fall, at an altitude of about . There was poor visibility due to cloud cover and the bomb missed its intended detonation point by almost two miles, so the damage was somewhat less extensive than that in Hiroshima.
An estimated 35,000–40,000 people were killed outright by the bombing at Nagasaki. A total of 60,000–80,000 fatalities resulted, including from long-term health effects, the strongest of which was leukemia with an attributable risk of 46% for bomb victims. Others died later from related blast and burn injuries, and hundreds more from radiation illnesses from exposure to the bomb's initial radiation. Most of the direct deaths and injuries were among munitions or industrial workers.
Mitsubishi's industrial production in the city was also severed by the attack; the dockyard would have produced at 80 percent of its full capacity within three to four months, the steelworks would have required a year to get back to substantial production, the electric works would have resumed some production within two months and been back at capacity within six months, and the arms plant would have required 15 months to return to 60 to 70 percent of former capacity. The Mitsubishi-Urakami Ordnance Works was the factory that manufactured the type 91 torpedoes released in the attack on Pearl Harbor; it was destroyed in the blast.
The second atomic bombing, on Nagasaki, came only three days after the bombing of Hiroshima, when the devastation at Hiroshima had yet to be fully comprehended by the Japanese. The lack of time between the bombings led Martin J. Sherwin to speculate that the second bombing was "certainly unnecessary", and Bruce Cumings that it was "gratuitous at best and genocidal at worst". Fredrik Logevall argued that President Harry S. Truman hoped that Japan would capitulate soon after the first atomic bomb, and that "the swift capitulation of Japan was crucial in order to preempt a Soviet military move into Asia" that Truman had himself requested. On the other hand, Robert James Maddox argued that more than one bomb was necessary because military hard-liners in the Japanese government refused even to admit that the single Hiroshima bomb was atomic.
After the war, two Y-1561 Fat Man bombs were used in the Operation "Crossroads" nuclear tests at Bikini Atoll in the Pacific. The first was known as "Gilda" after Rita Hayworth's character in the 1946 movie "Gilda", and it was dropped by the B-29 "Dave's Dream"; it missed its aim point by . The second bomb was nicknamed "Helen of Bikini" and was placed without its tail fin assembly in a steel caisson made from a submarine's conning tower; it was detonated beneath the landing craft "USS LSM-60". The two weapons yielded about each.
The Los Alamos Laboratory and the Army Air Forces had already commenced work on improving the design. The North American B-45 Tornado, Convair XB-46, Martin XB-48, and Boeing B-47 Stratojet bombers had bomb bays sized to carry the Grand Slam, which was much longer but not as wide as the Fat Man. The only American bombers that could carry the Fat Man were the B-29 and the Convair B-36. In November 1945, the Army Air Forces asked Los Alamos for 200 Fat Man bombs, but there were only two sets of plutonium cores and high-explosive assemblies at the time. The Army Air Forces wanted improvements to the design to make it easier to manufacture, assemble, handle, transport, and stockpile. The wartime Project W-47 was continued, and drop tests resumed in January 1946.
The Mark III Mod 0 Fat Man was ordered into production in mid-1946. High explosives were manufactured by the Salt Wells Pilot Plant, which had been established by the Manhattan Project as part of Project Camel, and a new plant was established at the Iowa Army Ammunition Plant. Mechanical components were made or procured by the Rock Island Arsenal; electrical and mechanical components for about 50 bombs were stockpiled at Kirtland Army Air Field by August 1946, but only nine plutonium cores were available. Production of the Mod 0 ended in December 1948, by which time there were still only 53 cores available. It was replaced by improved versions known as Mods 1 and 2 which contained a number of minor changes, the most important of which was that they did not charge the X-Unit firing system's capacitors until released from the aircraft. The Mod 0s were withdrawn from service between March and July 1949, and by October they had all been rebuilt as Mods 1 and 2. Some 120 Mark III Fat Man units were added to the stockpile between 1947 and 1949 when it was superseded by the Mark 4 nuclear bomb. The Mark III Fat Man was retired in 1950.
A nuclear strike would have been a formidable undertaking in the post-war 1940s due to the limitations of the Mark III Fat Man. The lead-acid batteries which powered the fuzing system remained charged for only 36 hours, after which they needed to be recharged. To do this meant disassembling the bomb, and recharging took 72 hours. The batteries had to be removed in any case after nine days or they corroded. The plutonium core could not be left in for much longer, because its heat damaged the high explosives. Replacing the core also required the bomb to be completely disassembled and reassembled. This required about 40 to 50 men and took between 56 and 72 hours, depending on the skill of the bomb assembly team, and the Armed Forces Special Weapons Project had only three teams in June 1948. The only aircraft capable of carrying the bomb were Silverplate B-29s, and the only group equipped with them was the 509th Bombardment Group at Walker Air Force Base in Roswell, New Mexico. They would first have to fly to Sandia Base to collect the bombs, and then to an overseas base from which a strike could be mounted.
The Soviet Union's first nuclear weapon was based closely on Fat Man's design thanks to spies Klaus Fuchs, Theodore Hall, and David Greenglass, who provided them with secret information concerning the Manhattan Project and Fat Man. It was detonated on 29 August 1949 as part of Operation "First Lightning".
False Claims Act
The False Claims Act (FCA), also called the "Lincoln Law", is an American federal law that imposes liability on persons and companies (typically federal contractors) who defraud governmental programs. It is the federal Government's primary litigation tool in combating fraud against the Government. The law includes a "qui tam" provision that allows people who are not affiliated with the government, called "relators" under the law, to file actions on behalf of the government (informally called "whistleblowing", especially when the relator is employed by the organization accused in the suit). Persons filing under the Act stand to receive a portion (15–30 percent, depending on certain factors) of any recovered damages. As of 2019, over 71 percent of all FCA actions were initiated by whistleblowers. Claims under the law have typically involved health care, military, or other government spending programs, and dominate the list of largest pharmaceutical settlements. The government has recovered more than $62 billion under the False Claims Act between 1987 and 2019.
"Qui tam" laws have history dating back to the Middle Ages in England. In 1318, King Edward II offered one third of the penalty to the relator when the relator successfully sued government officials who moonlighted as wine merchants. The Maintenance and Embracery Act 1540 of Henry VIII provided that common informers could sue for certain forms of interference with the course of justice in legal proceedings that were concerned with the title to land. This act is still in force today in the Republic of Ireland, although in 1967 it was extinguished in England. The idea of a common informer bringing suit for damages to the Commonwealth was later brought to Massachusetts, where "penalties for fraud in the sale of bread [are] to be distributed one third to inspector who discovered the fraud and the remainder for the benefit of the town where the offense occurred." Other statutes can be found on the colonial law books of Connecticut, New York, Virginia and South Carolina.
The American Civil War (1861–1865) was marked by fraud on all levels, both in the Union north and the Confederate south. During the war, unscrupulous contractors sold the Union Army decrepit horses and mules in ill health, faulty rifles and ammunition, and rancid rations and provisions, among other unscrupulous actions. In response, Congress passed the False Claims Act on March 2, 1863, . Because it was passed under the administration of President Abraham Lincoln, the False Claims Act is often referred to as the "Lincoln Law".
Importantly, a reward was offered in what is called the "qui tam" provision, which permits citizens to sue on behalf of the government and be paid a percentage of the recovery. "Qui tam" is an abbreviated form of the Latin legal phrase "qui tam pro domino rege quam pro se ipso in hac parte sequitur" ("he who brings a case on behalf of our lord the King, as well as for himself"). In a "qui tam" action, the citizen filing suit is called a "relator". As an exception to the general legal rule of standing, courts have held that "qui tam" relators are "partially assigned" a portion of the government's legal injury, thereby allowing relators to proceed with their suits.
U.S. Senator Jacob M. Howard, who sponsored the legislation, justified giving rewards to whistle blowers, many of whom had engaged in unethical activities themselves. He said, "I have based the ["qui tam" provision] upon the old-fashioned idea of holding out a temptation, and ‘setting a rogue to catch a rogue,’ which is the safest and most expeditious way I have ever discovered of bringing rogues to justice."
In the massive military spending leading up to and during World War II, the US Attorney General relied on criminal provisions of the law to deal with fraud, rather than using the FCA. As a result, attorneys would wait for the Department of Justice to file criminal cases and then immediately file civil suits under the FCA, a practice decried as "parasitic" at the time. Congress moved to abolish the FCA but at the last minute decided instead to reduce the relator's share of the recovered proceeds.
The law was again amended in 1986, again due to issues with military spending. Under President Ronald Reagan's military buildup, reports of massive fraud among military contractors had become major news, and Congress acted to strengthen the FCA.
The first qui tam case under the amended False Claims Act was filed in 1987 by an eye surgeon against an eye clinic and one of its doctors, alleging unnecessary surgeries and other procedures were being performed. The case settled in 1988 for a total of $605,000. However, the law was primarily used in the beginning against defense contractors. By the late 1990s, health care fraud began to receive more focus, accounting for approximately 40% of recoveries by 2008. "Franklin v. Parke-Davis", filed in 1996, was the first case to apply the FCA to fraud committed by a pharma company against the government, due to bills submitted for payment by Medicaid/Medicare for treatments that those programs do not pay for as they are not FDA-approved or otherwise listed on a government formulary. FCA cases against pharma companies are often related to off-label marketing of drugs by drug companies, which is illegal under a different law, the Federal Food, Drug, and Cosmetic Act; the intersection occurs when off-label marketing leads to prescriptions being filled and bills for those prescriptions being submitted to Medicare/Medicaid.
As of 2019, over 72 percent of all federal FCA actions were initiated by whistleblowers. The government recovered $62.1 billion under the False Claims Act between 1987 and 2019 and of this amount, over $44.7 billion or 72% was from "qui tam" cases brought by relators. In 2014, whistleblowers filed over 700 False Claims Act lawsuits. In 2014, the Department of Justice had its highest annual recovery in False Claims Act history, obtaining more than $6.1 billion in settlements and judgments from civil cases involving fraud and false claims against the government. In fiscal year 2019, the Department of Justice recovered over $3 billion under the False Claims Act, $2.2 billion of which were generated by whistleblowers. Since 2010, the federal government has recovered over $37.6 billion in False Claims Act settlements and judgments.
The Act establishes liability when any person or entity improperly receives from or avoids payment to the Federal government. The Act prohibits:
The statute provides that anyone who violates the law "is liable to the United States Government for a civil penalty of not less than $5,000 and not more than $10,000, as adjusted by the Federal Civil Penalties Inflation Adjustment Act of 1990, plus 3 times the amount of damages which the Government sustains because of the act of that person." The False Claims Act requires a separate penalty for each violation of the statute. Under the Civil Penalties Inflation Adjustment Act, False Claims Act penalties are periodically adjusted for inflation. In 2020, the penalties range from $11,665 to $23,331 per violation.
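The statutory arithmetic described above (a separate per-violation penalty plus treble damages) can be sketched in a few lines of Python. This is an illustrative calculation only, with a hypothetical function name and example figures, using the 2020 inflation-adjusted penalty range cited above:

```python
def fca_liability(num_violations: int, damages: float,
                  penalty_per_violation: float = 11_665.0) -> float:
    """Illustrative FCA exposure: a separate statutory penalty for each
    violation, plus three times the government's actual damages.
    Defaults to the 2020 minimum penalty of $11,665 per violation."""
    if not 11_665.0 <= penalty_per_violation <= 23_331.0:
        raise ValueError("2020 penalties range from $11,665 to $23,331 per violation")
    return num_violations * penalty_per_violation + 3 * damages

# Hypothetical example: 100 false claims causing $1,000,000 in actual damages,
# at the 2020 minimum penalty:
#   100 * 11,665 + 3 * 1,000,000 = $4,166,500
```

Because each claim carries its own penalty, defendants who submit many small claims can face total liability far exceeding the government's actual damages.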
Certain claims are not actionable, including:
There are unique procedural requirements in False Claims Act cases. For example:
In addition, the FCA contains an anti-retaliation provision, which allows a relator to recover, in addition to his award for reporting fraud, double damages plus attorney fees for any acts of retaliation for reporting fraud against the Government. This provision specifically provides relators with a personal claim of double damages for harm suffered and reinstatement.
Under the False Claims Act, the Department of Justice is authorized to pay rewards to those who report fraud against the federal government and are not convicted of a crime related to the fraud, in an amount of between 15 and 25 percent (but up to 30 percent in some cases) of what it recovers based upon the whistleblower's report. The relator's share is determined based on the FCA itself, legislative history, Department of Justice guidelines released in 1997, and court decisions.
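As a rough arithmetic sketch of that reward range (the function name and figures below are hypothetical, for illustration only):

```python
def relator_share(recovery: float, share: float = 0.15) -> float:
    """Illustrative relator award: ordinarily 15-25 percent of the
    government's recovery, but up to 30 percent in some cases."""
    if not 0.15 <= share <= 0.30:
        raise ValueError("share ordinarily falls between 15% and 30%")
    return recovery * share

# Hypothetical example: a $10,000,000 recovery at a 25% share
# yields a $2,500,000 award.
```

The actual percentage within that range is set case by case, weighing factors such as the relator's contribution to the investigation.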
False Claims Act Amendments
On May 20, 2009, the Fraud Enforcement and Recovery Act of 2009 (FERA) was signed into law. It includes the most significant amendments to the FCA since the 1986 amendments. FERA enacted the following changes:
With this revision, the FCA now prohibits knowingly (changes are in bold):
On March 23, 2010, the Patient Protection and Affordable Care Act (also referred to as the health reform bill or PPACA) was signed into law by President Barack Obama. The Affordable Care Act made further amendments to the False Claims Act, including:
The False Claims Act has a detailed process for making a claim under the Act. Mere complaints to the government agency are insufficient to bring claims under the Act. A complaint (lawsuit) must be filed in U.S. District Court (federal court) "in camera" (under seal). The Department of Justice then investigates, nominally within 60 days but frequently over several months after extensions are granted, and decides whether it will pursue the case.
If the case is pursued, the amount of the reward is less than if the Department of Justice decides not to pursue the case and the plaintiff/relator continues the lawsuit himself. However, the success rate is higher in cases that the Department of Justice decides to pursue.
Technically, the government has several options in handling cases. These include:
In practice, there are two other options for the Department of Justice:
There is case law where claims may be prejudiced if disclosure of the alleged unlawful act has been reported in the press, if complaints were filed to an agency instead of filing a lawsuit, or if the person filing a claim under the act is not the first person to do so. Individual states in the U.S. have different laws regarding whistleblowing involving state governments.
The U.S. Internal Revenue Service (IRS) takes the position that, for Federal income tax purposes, "qui tam" payments to a relator under FCA are ordinary income and not capital gains. The IRS position was challenged by a relator in the case of "Alderson v. United States" and, in 2012, the U.S. Court of Appeals for the Ninth Circuit upheld the IRS' stance. As of 2013, this remained the only circuit court decision on tax treatment of these payments.
In a 2000 case, "Vermont Agency of Natural Resources v. United States ex rel. Stevens", 529 U.S. 765 (2000), the United States Supreme Court held that a private individual may not bring suit in federal court on behalf of the United States against a State (or state agency) under the FCA. In "Stevens", the Supreme Court also endorsed the "partial assignment" approach to "qui tam" relator standing to sue, which had previously been articulated by the Ninth Circuit Federal Court of Appeals and is an exception to the general legal rule for standing.
In a 2007 case, "Rockwell International Corp. v. United States", the United States Supreme Court considered several issues relating to the "original source" exception to the FCA's public-disclosure bar. The Court held that (1) the original source requirement of the FCA provision setting forth the original-source exception to the public-disclosure bar on federal-court jurisdiction is jurisdictional; (2) the statutory phrase "information on which the allegations are based" refers to the relator's allegations and not the publicly disclosed allegations; the term "allegations" is not limited to the allegations in the original complaint, but includes, at a minimum, the allegations in the original complaint as amended; (3) the relator's knowledge with respect to the pondcrete fell short of the direct and independent knowledge of the information on which the allegations are based required for him to qualify as an original source; and (4) the government's intervention did not provide an independent basis of jurisdiction with respect to the relator.
In a 2008 case, "Allison Engine Co. v. United States ex rel. Sanders", the United States Supreme Court considered whether a false claim had to be presented directly to the Federal government, or if it merely needed to be paid with government money, such as a false claim by a subcontractor to a prime contractor. The Court found that the claim need not be presented directly to the government, but that the false statement must be made with the intention that it will be relied upon by the government in paying, or approving payment of, a claim. The Fraud Enforcement and Recovery Act of 2009 reversed the Court's decision and made the types of fraud to which the False Claims Act applies more explicit.
In a 2009 case, "United States ex rel. Eisenstein v. City of New York", the United States Supreme Court considered whether, when the government declines to intervene or otherwise actively participate in a "qui tam" action under the False Claims Act, the United States is a "party" to the suit for purposes of Federal Rule of Appellate Procedure 4(a)(1)(A) (which requires that a notice of appeal in a federal civil action generally be filed within 30 days after entry of a judgment or order from which the appeal is taken). The Court held that when the United States has declined to intervene in a privately initiated FCA action, it is not a "party" for FRAP 4 purposes, and therefore, petitioner's appeal filed after 30 days was untimely.
In a 2016 case, "Universal Health Services, Inc. v. United States ex rel. Escobar", the United States Supreme Court sought to clarify the standard for materiality under the FCA. The court unanimously upheld the implied certification theory of FCA liability and strengthened the FCA's materiality requirement.
As of 2020, 29 states and the District of Columbia have false-claims laws modeled on the federal statute to protect their publicly funded programs from fraud by including qui tam provisions, which enables them to recover money at state level. Some of these state False Claims Act statutes provide similar protections to those of the federal law, while others limit recovery to claims of fraud related to the Medicaid program.
The California False Claims Act was enacted in 1987, but lay relatively dormant until the early 1990s, when public entities, frustrated by what they viewed as a barrage of unjustified and unmeritorious claims, began to employ the False Claims Act as a defensive measure.
In 1995, the State of Texas passed the Texas Medicaid Fraud Prevention Act (TMFPA), which specifically aims at combating fraud against the Texas Medicaid Program, which provides healthcare and prescription drug coverage to low-income individuals. The Texas law enacts state qui tam provisions that allow individuals to report fraud and initiate action against violations of the TMFPA, imposes consequences for noncompliance and includes whistleblower protections.
In Australia, the Treasury Laws Amendment (Enhancing Whistleblower Protections) Act was passed in December 2018 and went into effect in 2019. The law expanded protections for whistleblowers, allowing them to report misconduct anonymously, and extended anti-retaliation protections to additional kinds of whistleblowers. Importantly, the law does not provide rewards for whistleblowers. There have been calls since 2011 for legislation modeled on the False Claims Act and for its application to the tobacco industry and carbon pricing schemes.
In October 2013, the UK Government announced that it was considering the case for financially incentivising individuals who report fraud in economic crime cases involving private sector organisations, in an approach much like the US False Claims Act. The 'Serious and Organised Crime Strategy' paper released by the UK's Secretary of State for the Home Department sets out how that government plans to take action to prevent serious and organised crime and strengthen protections against and responses to it. The paper asserts that serious and organised crime costs the UK more than £24 billion a year. In the context of anti-corruption, the paper acknowledges the need not only to target serious and organised criminals but also to support those who seek to help identify and disrupt serious and organised criminality. Three UK agencies, the Department for Business, Innovation & Skills, the Ministry of Justice and the Home Office, were tasked with considering the case for a US-style False Claims Act in the UK. In July 2014, the Financial Conduct Authority and the Bank of England Prudential Regulation Authority recommended that Parliament enact strong measures to encourage and protect whistleblowers, but without offering whistleblower rewards – rejecting the US model.
Under Rule 9(b) of the Federal Rules of Civil Procedure, allegations of fraud or mistake must be pleaded with particularity. All appeals courts to have addressed the issue of whether Rule 9(b) pleading standards apply to qui tam actions have held that the heightened standard applies. The Fifth Circuit, the Sixth Circuit, the Seventh Circuit, the Eighth Circuit, the Tenth Circuit, and the Eleventh Circuit have all found that plaintiffs must allege specific false claims.
In 2010, the First Circuit decision in "U.S. ex rel. Duxbury v. Ortho Biotech Prods., L.P." (2009) and the Eleventh Circuit ruling in "U.S. ex rel. Hopper v. Solvay Pharms., Inc." (2009) were both appealed to the U.S. Supreme Court. The Court denied "certiorari" in both cases, however, declining to resolve the divergent appeals court decisions.
In 2009, the American Civil Liberties Union (ACLU), Government Accountability Project (GAP) and OMB Watch filed suit against the Department of Justice challenging the constitutionality of the "seal provisions" of the FCA that require the whistleblower and the court to keep lawsuits confidential for at least 60 days. The plaintiffs argued that the requirements infringe the First Amendment rights of the public and the whistleblower, and that they violate the division of powers, since courts are not free to release the documents until the executive branch acts. The government moved for dismissal, and the district court granted that motion in 2009. The plaintiffs appealed, and in 2011 their appeal was denied.
In 2004, the billing groups associated with the University of Washington agreed to pay $35 million to resolve civil claims brought by whistleblower Mark Erickson, a former compliance officer, under the False Claims Act. The settlement, approved by the UW Board of Regents, resolved claims that they systematically overbilled Medicaid and Medicare and that employees destroyed documents to hide the practice. The fraud settlement, the largest against a teaching hospital since the University of Pennsylvania agreed to pay $30 million in 1995, ended a five-year investigation that resulted in guilty pleas from two prominent doctors. The whistleblower was awarded $7.25 million.
In 2010, a subsidiary of Johnson & Johnson agreed to pay over $81 million in civil and criminal penalties to resolve allegations in a FCA suit filed by two whistleblowers. The suit alleged that Ortho-McNeil-Janssen Pharmaceuticals, Inc. (OMJPI) acted improperly concerning the marketing, promotion and sale of the anti-convulsant drug Topamax. Specifically, the suit alleged that OMJPI "illegally marketed Topamax by, among other things, promoting the sale and use of Topamax for a variety of psychiatric conditions other than those for which its use was approved by the Food and Drug Administration, (i.e., "off-label" uses)." It also states that "certain of these uses were not medically accepted indications for which State Medicaid programs provided coverage" and that as a result "OMJPI knowingly caused false or fraudulent claims for Topamax to be submitted to, or caused purchase by, certain federally funded healthcare programs."
In response to a complaint from whistleblower Jerry H. Brown II, the US Government filed suit against Maersk for overcharging for shipments to US forces fighting in Iraq and Afghanistan. In a settlement announced on 3 January 2012, the company agreed to pay $31.9 million in fines and interest, but made no admission of wrongdoing. Brown was entitled to $3.6 million of the settlement.
The largest healthcare fraud settlement in history was made by GlaxoSmithKline in 2012 when it paid a total of $3 billion to resolve four qui tam lawsuits brought under the False Claims Act and related criminal charges. The claims include allegations Glaxo engaged in off-label marketing and paid kickbacks to doctors to prescribe certain drugs, including Paxil, Wellbutrin and Advair.
In 2013, Wyeth Pharmaceuticals Inc., a pharmaceutical company acquired by Pfizer, Inc. in 2009, paid $490.9 million to resolve its criminal and civil liability arising from the unlawful marketing of its drug Rapamune for uses that were not FDA-approved and potentially harmful. The case, "U.S. ex rel. Sandler and Paris v. Wyeth Pharmaceuticals and Pfizer, Inc." was brought by multiple whistleblowers and culminated in one of the largest False Claims Act recoveries for a single drug.
In 2014, CareFusion paid $40.1 million to settle allegations of violating the False Claims Act by promoting off label use of its products in the case United States ex rel. Kirk v. CareFusion et al., No. 10-2492. The government alleged that CareFusion promoted the sale of its drug ChloraPrep for uses that were not approved by the FDA. ChloraPrep is the commercial name under which CareFusion produced the drug chlorhexidine, used to clean the skin before surgery. In 2017, this case was called into question and was under review by the DOJ because the lead attorney for the DOJ serving as Assistant Attorney General in the case, Jeffery Wertkin, was arrested by the FBI on January 31, 2017 for allegedly attempting to sell a copy of a complaint in a secret whistleblower suit that was under seal.
In 2017, bio-pharmaceutical giant Celgene Corporation paid $240 million to settle allegations it sold and marketed its drugs Thalomid and Revlimid off-label in "U.S. ex rel. Brown v. Celgene", CV 10-03165 (RK) (C.D. Cal.). The case, brought by former Celgene sales representative, Beverly Brown, alleged violations under the False Claims Act including promoting Thalomid and Revlimid off-label for uses that were not FDA-approved and, in many cases, unsafe and not medically necessary, offered illegal kickbacks to influence healthcare providers to select its products, and concealed potential adverse events related to use of its drugs.
Fantastic Four
The Fantastic Four are a fictional superhero team appearing in American comic books published by Marvel Comics. The group debuted in "Fantastic Four" #1 (cover dated Nov. 1961), which helped to usher in a new level of realism in the medium. The Fantastic Four was the first superhero team created by artist/co-plotter Jack Kirby and editor/co-plotter Stan Lee, who developed a collaborative approach to creating comics with this title that they would use from then on.
The four individuals traditionally associated with the Fantastic Four, who gained superpowers after exposure to cosmic rays during a scientific mission to outer space, are Mister Fantastic (Reed Richards), a scientific genius and the leader of the group, who can stretch his body to incredible lengths and shapes; the Invisible Woman (Susan "Sue" Storm), who eventually married Reed, and who can render herself invisible and later project powerful invisible force fields; the Human Torch (Johnny Storm), Sue's younger brother, who can generate flames, surround himself with them and fly; and the monstrous Thing (Ben Grimm), their grumpy but benevolent friend, a former college football star and Reed's college roommate as well as a good pilot, who possesses tremendous superhuman strength, durability, and endurance due to the nature of his stone-like flesh.
Since their original 1961 introduction, the Fantastic Four have been portrayed as a somewhat dysfunctional, yet loving, family. Breaking convention with other comic book archetypes of the time, they would squabble and hold grudges both deep and petty and eschewed anonymity or secret identities in favor of celebrity status. The team is also well known for its recurring encounters with characters such as the villainous monarch Doctor Doom, the planet-devouring Galactus, the Kree Empire's ruthless and tyrannical enforcer Ronan the Accuser, Annihilus, ruler of the Negative Zone, the sea-dwelling prince Namor, the spacefaring Silver Surfer, and the Skrull warrior Kl'rt.
The Fantastic Four have been adapted into other media, including four animated series and four live-action films.
Apocryphal legend has it that in 1961, longtime magazine and comic book publisher Martin Goodman was playing golf with either Jack Liebowitz or Irwin Donenfeld of rival company DC Comics, then known as National Periodical Publications, and that the top executive bragged about DC's success with the new superhero team the Justice League of America. While film producer and comics historian Michael Uslan has debunked the particulars of that story, Goodman, a publishing trend-follower, aware of the JLA's strong sales, did direct his comics editor, Stan Lee, to create a comic-book series about a team of superheroes. According to Lee, writing in 1974, "Martin mentioned that he had noticed one of the titles published by National Comics seemed to be selling better than most. It was a book called "The" "Justice League of America" and it was composed of a team of superheroes. ... 'If the Justice League is selling', spoke he, 'why don't we put out a comic book that features a team of superheroes?'"
Lee, who had served as editor-in-chief and art director of Marvel Comics and its predecessor companies, Timely Comics and Atlas Comics, for two decades, found that the medium had become creatively restrictive. Determined "to carve a real career for myself in the nowhere world of comic books", Lee concluded that, "For just this once, I would do the type of story I myself would enjoy reading... And the characters would be the kind of characters I could personally relate to: they'd be flesh and blood, they'd have their faults and foibles, they'd be fallible and feisty, and — most important of all — inside their colorful, costumed booties they'd still have feet of clay."
Lee said he created a synopsis for the first Fantastic Four story that he gave to penciller Jack Kirby, who then drew the entire story. Kirby turned in his penciled art pages to Lee, who added dialogue and captions. This approach to creating comics, which became known as the "Marvel Method", worked so well for Lee and Kirby that they used it from then on; the Marvel Method became standard for the company within a year.
Kirby recalled events somewhat differently. Challenged with Lee's version of events in a 1990 interview, Kirby responded: "I would say that's an outright lie", although the interviewer, Gary Groth, notes that this statement needs to be viewed with caution. Kirby claimed he came up with the idea for the Fantastic Four in Marvel's offices, and that Lee had merely added the dialogue after the story had been pencilled. Kirby also sought to establish, more credibly and on numerous occasions, that the visual elements of the strip were his conceptions. He regularly pointed to a team he had created for rival publisher DC Comics in the 1950s, the Challengers of the Unknown. "[I]f you notice the uniforms, they're the same... I always give them a skintight uniform with a belt... the Challengers and the FF have a minimum of decoration. And of course, the Thing's skin is a kind of decoration, breaking up the monotony of the blue uniform." The chest insignia of a "4" within a circle, however, was designed by Lee. The characters wear no uniforms in the first two issues.
Given the conflicting statements, outside commentators have found it hard to identify with precise detail who created the Fantastic Four. Although Stan Lee's typed synopsis for the Fantastic Four exists, Earl Wells, writing in "The Comics Journal", points out that its existence does not assert its place in the creation: "[W]e have no way of knowing of whether Lee wrote the synopsis after a discussion with Kirby in which Kirby supplied most of the ideas". Comics historian R. C. Harvey believes that the Fantastic Four was a furtherance of the work Kirby had been doing previously, and so "more likely Kirby's creations than Lee's". But Harvey notes that the Marvel Method of collaboration allowed each man to claim credit, and that Lee's dialogue added to the direction the team took. Wells argues that it was Lee's contributions which set the framework within which Kirby worked, and this made Lee "more responsible". Comics historian Mark Evanier, a studio assistant to Jack Kirby in the 1970s, says that the considered opinion of Lee and Kirby's contemporaries was "that "Fantastic Four" was created by Stan and Jack. No further division of credit seemed appropriate".
The release of "The Fantastic Four" #1 (Nov. 1961) was an unexpected success. Lee had felt ready to leave the comics field at the time, but the positive response to "Fantastic Four" persuaded him to stay on. The title began to receive fan mail and Lee started printing the letters in a letter column with issue #3. Also with the third issue, Lee created the hyperbolic slogan "The Greatest Comic Magazine in the World!!" With the following issue, the slogan was changed to "The World's Greatest Comic Magazine!" and became a fixture on the issue covers into the 1990s, and on numerous covers in the 2000s.
Issue #4 (May 1962) reintroduced Namor the Sub-Mariner, an aquatic antihero who was a star character of Marvel's earliest iteration, Timely Comics, during the late 1930s and 1940s period that historians and fans call the Golden Age of Comics. Issue #5 (July 1962) introduced the team's most frequent nemesis, Doctor Doom. These earliest issues were published bimonthly. With issue #16 (July 1963), the cover title dropped its "The" and became simply "Fantastic Four".
While the early stories were complete narratives, the frequent appearances of these two antagonists, Doom and Namor, in subsequent issues indicated the creation of a long narrative by Lee and Kirby that extended over months. According to comics historian Les Daniels, "only narratives that ran to several issues would be able to contain their increasingly complex ideas". During its creators' lengthy run, the series produced many acclaimed storylines and characters that have become central to Marvel, including the hidden race of alien-human genetic experiments, the Inhumans; the Black Panther, an African king who would be mainstream comics' first black superhero; the rival alien races the Kree and the shapeshifting Skrulls; Him, who would become Adam Warlock; the Negative Zone and unstable molecules. The story frequently cited as Lee and Kirby's finest achievement is the three-part "Galactus Trilogy" that began in "Fantastic Four" #48 (March 1966), chronicling the arrival of Galactus, a cosmic giant who wanted to devour the planet, and his herald, the Silver Surfer. "Fantastic Four" #48 was chosen as #24 in the 100 Greatest Marvels of All Time poll of Marvel's readers in 2001. Editor Robert Greenberger wrote in his introduction to the story that, "As the fourth year of the Fantastic Four came to a close, Stan Lee and Jack Kirby seemed to be only warming up. In retrospect, it was perhaps the most fertile period of any monthly title during the Marvel Age." Daniels noted that "[t]he mystical and metaphysical elements that took over the saga were perfectly suited to the tastes of young readers in the 1960s", and Lee soon discovered that the story was a favorite on college campuses. The "Fantastic Four Annual" was used to spotlight several key events. The Sub-Mariner was crowned king of Atlantis in the first annual (1963). The following year's annual revealed the origin story of Doctor Doom. 
"Fantastic Four Annual" #3 (1965) presented the wedding of Reed Richards and Sue Storm. Lee and Kirby reintroduced the original Human Torch in "Fantastic Four Annual" #4 (1966) and had him battle Johnny Storm. Sue Richards' pregnancy was announced in "Fantastic Four Annual" #5 (1967), and the Richards' son, Franklin Richards was born in "Fantastic Four Annual" #6 (1968) in a story which introduced Annihilus as well.
Marvel filed for a trademark for "Fantastic Four" in 1967 and the United States Patent and Trademark Office issued the registration in 1970.
Kirby left Marvel in mid-1970, having drawn the first 102 issues plus an unfinished issue, partially published in "Fantastic Four" #108 with alterations, and later completed and published as "Fantastic Four: The Lost Adventure" (April 2008). "Fantastic Four" continued with Lee, Roy Thomas, Gerry Conway and Marv Wolfman as its consecutive regular writers, working with artists such as John Romita Sr., John Buscema, Rich Buckler and George Pérez, with longtime inker Joe Sinnott adding some visual continuity. Jim Steranko also contributed some covers during this time. A short-lived series starring the team, "Giant-Size Super-Stars", began in May 1974 and changed its title to "Giant-Size Fantastic Four" with issue #2. The fourth issue introduced Jamie Madrox, a character who later became part of the X-Men. "Giant-Size Fantastic Four" was canceled with issue #6 (Oct. 1975). Roy Thomas and George Pérez crafted a metafictional story for "Fantastic Four" #176 (Nov. 1976) in which the Impossible Man visited the offices of Marvel Comics and met numerous comics creators. Marv Wolfman and Keith Pollard crafted a multi-issue storyline involving the son of Doctor Doom which culminated in issue #200 (Nov. 1978). John Byrne joined the title with issue #209 (Aug. 1979), doing pencil breakdowns for Sinnott to finish. He and Wolfman introduced a new herald for Galactus named Terrax the Tamer in #211 (Oct. 1979).
Bill Mantlo briefly followed Wolfman as writer of the series and wrote a crossover with "Peter Parker, The Spectacular Spider-Man" #42 (May 1980). Byrne wrote and drew a giant-sized Fantastic Four promotional comic for Coca-Cola, which was rejected by Coca-Cola as being too violent and published as "Fantastic Four" #220–221 (July–Aug. 1980) instead. Writer Doug Moench and penciller Bill Sienkiewicz then took over for 10 issues. With issue #232 (July 1981), the aptly titled "Back to the Basics", Byrne began his run as writer, penciller and inker, the last under the pseudonym Bjorn Heyn for this issue only.
Byrne revitalized the slumping title with his run. Originally, Byrne was slated to write with Sienkiewicz providing the art. Sienkiewicz left to do "Moon Knight", and Byrne subsequently became writer, artist, and inker. Various editors were assigned to the comic; eventually Bob Budiansky became the regular editor. Byrne told Jim Shooter that he could not work with Budiansky, although they ultimately continued to work together. In 2006, Byrne said "that's my paranoia. I look back and I think that was Shooter trying to force me off the book". Byrne left following issue #293 (Aug. 1986) in the middle of a story arc, explaining he could not recapture the fun he had previously had on the series. One of Byrne's changes was making the Invisible Girl into the Invisible Woman: assertive and confident. During this period, fans came to recognize that she was quite powerful, whereas previously, she had been primarily seen as a superpowered mother and wife in the tradition of television moms like those played by Donna Reed and Florence Henderson.
Byrne staked new directions in the characters' personal lives, having the married Sue Storm and Reed Richards suffer a miscarriage and the Thing quitting the Fantastic Four, with She-Hulk being recruited as his long-term replacement. He also re-emphasized the family dynamic which he felt the series had drifted away from after the Lee/Kirby run, commenting that, ""Family"—and not "dysfunctional family"—is the central, key element to the FF. It is an absolutely vital dynamic between the characters." [emphases in original]
Byrne was followed by a quick succession of writers: Roger Stern, Tom DeFalco, and Roy Thomas. Steve Englehart took over as writer for issues 304–332 (except #320). The title had been struggling, so Englehart decided to make radical changes. He felt the title had become stale with the normal makeup of Reed, Sue, Ben, and Johnny, so in issue #308 Reed and Sue retired and were replaced with the Thing's new girlfriend, Sharon Ventura, and Johnny Storm's former love, Crystal. The changes increased readership through issue #321. At this point, Marvel made decisions about another Englehart comic, "West Coast Avengers", that he disagreed with, and in protest he changed his byline to S.F.X. Englehart (S.F.X. is the abbreviation for Simple Sound Effects). In issue #326, Englehart was told to bring Reed and Sue back and undo the other changes he had made. This caused Englehart to take his name entirely off the book. He used the pseudonym John Harkness, which he had created years before for work he didn't want to be associated with. According to Englehart, the run from #326 through his last issue, #332, was "one of the most painful stretches of [his] career." Writer-artist Walt Simonson took over as writer with #334 (December 1989), and three issues later began pencilling and inking as well. With brief inking exceptions, two fill-in issues, and a three-issue stint drawn by Arthur Adams, Simonson remained in all three positions through #354 (July 1991).
Simonson, who had been writing the team comic "The Avengers", had gotten approval for Reed and Sue to join that team after Englehart had written them out of "Fantastic Four". Yet by "The Avengers" #300, where they were scheduled to join the team, Simonson was told the characters were returning to "Fantastic Four". This led to Simonson quitting "The Avengers" after that issue. Shortly afterward, he was offered the job of writing "Fantastic Four". Having already prepared a number of stories involving the Avengers with Reed and Sue in the lineup, he then rewrote these for "Fantastic Four". Simonson later recalled that working on "Fantastic Four" allowed him the latitude to use original Avengers members Thor and Iron Man, which he had been precluded from using in "The Avengers".
After another fill-in, the regular team of writer and Marvel editor-in-chief Tom DeFalco, penciller Paul Ryan and inker Dan Bulanadi took over, with Ryan self-inking beginning with #360 (Jan. 1992). That team, with the very occasional different inker, continued for years through #414 (July 1996). DeFalco nullified the Storm-Masters marriage by retconning that the alien Skrull Empire had kidnapped the real Masters and replaced her with a spy named Lyja. Once discovered, Lyja, who herself had fallen for Storm, helped the Fantastic Four rescue Masters. Ventura departed after being further mutated by Doctor Doom. Although some fans were not pleased with DeFalco's run on "Fantastic Four", calling him "The Great Satan", the title's sales rose steadily over the period.
Other key developments included Franklin Richards being sent into the future and returning as a teenager; the return of Reed's time-travelling father, Nathaniel, who is revealed to be the father of time-travelling villain Kang the Conqueror; and Reed's apparent death at the hands of a seemingly mortally wounded Doctor Doom. It would be two years before DeFalco resurrected the two characters, revealing that their "deaths" were orchestrated by the supervillain Hyperstorm.
The ongoing series was canceled with issue #416 (Sept. 1996) and relaunched with vol. 2 #1 (Nov. 1996) as part of the multi-series "Heroes Reborn" crossover story arc. The yearlong volume retold the team's first adventures in a more contemporary style and was set in a parallel universe. Following the end of that experiment, "Fantastic Four" was relaunched with vol. 3 #1 (Jan. 1998). Initially by the team of writer Scott Lobdell and penciller Alan Davis, it went after three issues to writer Chris Claremont (co-writing with Lobdell for #4–5) and penciller Salvador Larroca; this team enjoyed a long run through issue #32 (Aug. 2000).
Following the run of Claremont, Lobdell and Larroca, Carlos Pacheco took over as penciller and co-writer, first with Rafael Marín, then with Marín and Jeph Loeb. This series began using dual numbering, as if the original "Fantastic Four" series had continued unbroken, with issue #42 / #471 (June 2001). At the time, the Marvel Comics series begun in the 1960s, such as "Thor" and "The Amazing Spider-Man", were given such dual numbering on the front cover, with the present-day volume's numbering alongside the numbering from the original series. After issue #70 / #499 (Aug. 2003), the title reverted to its original vol. 1 numbering with issue #500 (Sept. 2003).
Karl Kesel succeeded Loeb as co-writer with issue #51 / #480 (March 2002), and after a few issues with temporary teams, Mark Waid took over as writer with #60 / #489 (October 2002), joined by artist Mike Wieringo; Marvel released a promotional variant edition of their otherwise $2.25 debut issue at the price of nine cents US. Pencillers Mark Buckingham, Casey Jones, and Howard Porter variously contributed through issue #524 (May 2005), with a handful of issues by other teams also during this time. Writer J. Michael Straczynski and penciller Mike McKone did issues #527–541 (July 2005 – Nov. 2006), with Dwayne McDuffie taking over as writer the following issue, and Paul Pelletier succeeding McKone beginning with #544 (May 2007).
As a result of the events of the "Civil War" company-crossover storyline, the Black Panther and Storm temporarily replaced Reed and Susan Richards on the team. During that period, the Fantastic Four also appeared in "Black Panther", written by Reginald Hudlin and pencilled primarily by Francis Portela. Beginning with issue #554 (April 2008), writer Mark Millar and penciller Bryan Hitch began what Marvel announced as a sixteen-issue run. Following the summer 2008 crossover storyline, "Secret Invasion", and the 2009 aftermath "Dark Reign", chronicling the U.S. government's assigning of the Nation's security functions to the seemingly reformed supervillain Norman Osborn, the Fantastic Four starred in a five-issue miniseries, "Dark Reign: Fantastic Four" (May–Sept. 2009), written by Jonathan Hickman, with art by Sean Chen. Hickman took over as the series regular writer as of issue #570 with Dale Eaglesham and later Steve Epting on art.
In the storyline "Three", which concluded in "Fantastic Four" #587 (cover date March 2011, published January 26, 2011), the Human Torch appears to die stopping a horde of monsters from the other-dimensional Negative Zone. The series ended with the following issue, #588, and relaunched in March 2011 as simply "FF". The relaunch saw the team assume a new name, the Future Foundation, adopt new black-and-white costumes, and accept longtime ally Spider-Man as a member. In October 2011, with the publication of "FF" #11 (cover-dated Dec. 2011), the "Fantastic Four" series reached its 599th issue.
In November 2011, to commemorate the 50th anniversary of the Fantastic Four and of Marvel Comics, the company published the 100-page "Fantastic Four" #600 (cover-dated Jan. 2012), which returned the title to its original numbering and featured the return of the Human Torch. It revealed the fate of the character of Johnny Storm after issue #587, showing that while he did in fact die, he was resurrected to fight as a gladiator for the entertainment of Annihilus. Storm later formed a resistance force called Light Brigade and defeated Annihilus.
Although it was launched as a continuation of the "Fantastic Four" title, "FF" continues publication as a separate series. Starting with issue #12, the title focuses upon the youthful members of the Future Foundation, including Franklin and Valeria Richards.
In the graphic novel "Fantastic Four: Season One", the Fantastic Four is given an updated origin story set in the present day instead of the 1960s. The hardcover compilation debuted at number four on "The New York Times" Best Seller list for graphic novels.
As part of Marvel NOW!, "Fantastic Four" ended with #611, concluding Jonathan Hickman's long run on the "FF" titles, and the title was relaunched in November 2012 with the creative team of writer Matt Fraction and artist Mark Bagley. In the new title, with its numbering restarting at #1, the entire Fantastic Four family explores space together, with the hidden intent for Reed Richards to discover why his powers are fading.
Writer James Robinson and artist Leonard Kirk launched a new "Fantastic Four" series in February 2014 (cover dated April 2014).
Robinson later confirmed that "Fantastic Four" would be cancelled in 2015 with issue #645, saying that "The book is reverting to its original numbers, and the book is going away for a while. I'm moving towards the end of "Fantastic Four". I just want to reassure people that you will not leave this book with a bad taste in your mouth." In the aftermath of the "Secret Wars" storyline, the Thing is working with the Guardians of the Galaxy and the Human Torch is acting as an ambassador with the Inhumans. With Franklin's powers restored and Reed having absorbed the power of the Beyonders from Doom, the Richards family is working on travelling through and reconstructing the multiverse, but Peter Parker has purchased the Baxter Building to keep it "safe" until the team is ready to come back together.
A new volume for the Fantastic Four was released in August 2018, written by Dan Slott, as part of Marvel's Fresh Start event. The first issue of the new series was met with strong sales and a positive critical reaction. When the Future Foundation is threatened by the Griever at the End of All Things, Mister Fantastic plays on her ego to convince her to provide him with equipment that will allow him to summon his teammates. When the Human Torch and the Thing are reunited with Mister Fantastic and the Invisible Woman, the other superheroes who were part of the Fantastic Four at some point in their lives also arrive, including, unexpectedly, the X-Men's Iceman. With the gathered heroes' help, the Fantastic Four cause so much damage to the Griever's equipment that she is forced to retreat in her final telepod or be trapped in that universe. This leaves the heroes to salvage components from the broken ship to create their own teleport system and return to their universe. The Fantastic Four and their extended family return to Earth, where they find that Liberteens members Ms. America, 2-D, Hope, and Iceberg have come together as the Fantastix, with Ms. America taking the codename Ms. Fantastix. After the Wrecking Crew commit a staged bank robbery, having been hired to humiliate the Fantastix in public, the Fantastic Four give the Fantastix their blessing to continue using the Baxter Building.
In the storyline "Point of Origin", the Fantastic Four entrust Alicia, H.E.R.B.I.E., Franklin, and Valeria to protect Earth while they begin a mission to learn more about the origin of the cosmic radiation that granted them their powers in the first place, piloting a new spaceship called Marvel-2. In the middle of this space adventure, the Fantastic Four are attacked by a group who believe themselves to be the superheroes of Planet Spyre, the Unparalleled. Reed and Sue are separated from the Thing, while the Human Torch is revealed to be the soulmate of the Unparalleled member Sky. The team learn that the Unparalleled's leader, Revos, the Overseer of Planet Spyre, was responsible for the cosmic rays that struck them on their original trip: wanting to stop them from reaching his planet, he then mutated his own people to 'prepare for their return', and later tried to eradicate those mutates who, like the Thing, could not retain their original forms, branding them "villains and imperfects". Through his own paranoia and xenophobia, the Overseer was thus responsible for the fateful creation of the Fantastic Four, and he mutated his entire race to face a non-existent threat. Revos challenges Mr. Fantastic to a fight over their differences, but the conflict is settled and the two finally make peace. As the Fantastic Four prepare to depart Spyre after helping its citizens clean up the planet (with Reed providing the mutates with a variation of the temporary 'cure' he created for Ben), Sky joins them to learn about Earth and the galaxies she has never seen. When the incoming Kree-Skrull "Empyre" occurs at the same time as the teen heroes are being outlawed, the original Fantastic Four go to space with the Avengers to stop it, leaving Franklin and Valeria, backed by Spider-Man and Wolverine, to defend Earth.
Ancillary titles and features spun off from the flagship series include the 1970s quarterly "Giant-Size Fantastic Four" and the 1990s "Fantastic Four Unlimited" and "Fantastic Four Unplugged"; "Fantastic Force", an 18-issue spinoff (November 1994 – April 1996) featuring an adult Franklin Richards, from a different timeline, as Psi-Lord. A 12-issue series "Fantastic Four: The World's Greatest Comics Magazine" ran in 2001, paying homage to Stan Lee and Jack Kirby's legendary run. A spinoff title "Marvel Knights 4" (April 2004 – August 2006) was written by Roberto Aguirre-Sacasa and initially illustrated by Steve McNiven in his first Marvel work. As well, there have been numerous limited series featuring the group.
In 1996, Marvel launched "Fantastic Four 2099". The series was part of the company's Marvel 2099 imprint, which explored an alternate future of the Marvel Universe. The four protagonists inexplicably find themselves in 2099, with the world believing them to be clones of the original members of the Fantastic Four. The series ran for 8 issues (Jan. – Aug. 1996), serving as a companion to "Doom 2099"—an original Marvel 2099 title which featured an individual claiming to be the original Victor von Doom.
In 2004, Marvel launched "Ultimate Fantastic Four". As part of the company's Ultimate Marvel imprint, the series re-imagined the team as teenagers. The series ran for 60 issues (Feb. 2004 – Feb. 2009). In 2008, Marvel also launched "Marvel Adventures: Fantastic Four", an out-of-continuity series aimed at younger readers.
Although it was launched by Marvel as a continuation of the "Fantastic Four" title in 2011, "FF" continued publication as a separate series after the regular series resumed in 2012. From issue #12, the title focused upon the youthful members of the Future Foundation, including Franklin and Valeria Richards. A second volume was launched as part of Marvel NOW! by Matt Fraction and Mike Allred, depicting a substitute Fantastic Four team starring Scott Lang, Medusa, She-Hulk, and Ms. Thing.
The Human Torch was given a solo strip in "Strange Tales" in 1962 in order to bolster sales of the title. The series began in "Strange Tales" #101 (October 1962), in 12- to 14-page stories plotted by Lee and initially scripted by his brother, Larry Lieber, and drawn by penciller Kirby and inker Dick Ayers.
Here, Johnny was seen living with his older sister, Susan, in fictional Glenview, Long Island, New York, where he continued high school and, with youthful naiveté, attempted to maintain a "secret identity". In "Strange Tales" #106 (March 1963), Johnny discovered that his friends and neighbors knew of his dual identity all along, from Fantastic Four news reports, but were humoring him. Supporting characters included Johnny's girlfriend, Doris Evans, usually in consternation as Johnny cheerfully flew off to battle bad guys. She was seen again in a 1973 issue of "Fantastic Four", having become a heavyset but cheerful wife and mother. Ayers took over the penciling after ten issues, later followed by original Golden Age Human Torch creator Carl Burgos and others. The Fantastic Four made occasional cameo appearances, and the Thing became a co-star with issue #123 (Aug. 1964).
The Human Torch shared the split book "Strange Tales" with fellow feature Doctor Strange for the majority of its run, before being replaced in issue #135 (August 1965) by Nick Fury, Agent of S.H.I.E.L.D.. The Silver Age stories were republished in 1974, along with some Golden Age Human Torch stories, in a short-lived ongoing "Human Torch" series.
A later ongoing solo series in Marvel's manga-influenced Tsunami imprint, "Human Torch", ran 12 issues (June 2003 – June 2004), followed by the five-issue limited series "Spider-Man/Human Torch" (March–July 2005), an untold tales team-up arc spanning the course of their friendship.
The Thing appeared in two team-up issues of "Marvel Feature" (#11–12, September–November 1973). Following their success, he was given his own regular team-up title "Marvel Two-in-One", co-starring with Marvel heroes not only in the present day but occasionally in other time periods (fighting alongside the World War II-era Liberty Legion in #20 and the 1930s hero Doc Savage in #21, for example) and in alternate realities. The series ran 100 issues (January 1974 – June 1983), with seven summer annuals (1976–1982) and was immediately followed by the solo title "The Thing" #1–36 (July 1983 – June 1986). Another ongoing solo series, also titled "The Thing", ran eight issues (January–August 2006).
In April 2019, Marvel Comics announced that it would publish "Invisible Woman", a five-issue miniseries. It is Sue Storm's first solo title. Adam Hughes drew the cover for issue #1.
The Fantastic Four is formed after four civilian astronauts are exposed to cosmic rays during an unauthorized outer space test flight in an experimental rocket ship designed by Dr. Reed Richards. Pilot Ben Grimm and crew-members Susan Storm and her brother Johnny Storm survive an emergency crash-landing in a field on Earth. Upon exiting the rocket, the four discover they have developed incredible superpowers, and decide to use these powers to help others.
In the first issue, the crew talks about Reed Richards' rocketship flying to the stars. Stan Lee's original synopsis described the crew's plan to fly to Mars, but Lee shortly afterward wrote that due to "the rate the Communists are progressing in space, maybe we better make this a flight to the STARS, instead of just to Mars, because by the time this mag goes on sale, the Russians may have already MADE a flight to Mars!"
In a significant departure from preceding superhero conventions, the Fantastic Four make no effort to maintain secret identities or, until issue #3, to wear superhero costumes, instead maintaining a public profile and enjoying celebrity status for scientific and heroic contributions to society. At the same time, they are often prone to arguing and even fighting with one another. Despite their bickering, the Fantastic Four consistently prove themselves to be "a cohesive and formidable team in times of crisis."
While there have been a number of lineup changes to the group, the four characters who debuted in "Fantastic Four" #1 remain the core and most frequent lineup.
The Fantastic Four has had several different headquarters, most notably the Baxter Building, located at 42nd Street and Madison Avenue in New York City. The Baxter Building was replaced by Four Freedoms Plaza at the same location after its destruction at the hands of Kristoff Vernard, adopted son of the team's seminal foe Doctor Doom. (Prior to the completion of Four Freedoms Plaza, the team took up temporary residence at Avengers Mansion.) Pier 4, a waterfront warehouse, served as a temporary headquarters after Four Freedoms Plaza was destroyed by the ostensible superhero team the Thunderbolts shortly after the revelation that they were actually the supervillain team the Masters of Evil in disguise. Pier 4 was eventually destroyed during a battle with the longtime Fantastic Four supervillain Diablo, after which the team received a new Baxter Building, courtesy of one of team leader Reed Richards' former professors, Noah Baxter. This second Baxter Building was constructed in Earth's orbit and teleported into the vacant lot formerly occupied by the original.
A number of characters are closely affiliated with the team, share complex personal histories with one or more of its members but have never actually held an official membership. Some of these characters include, but are not limited to: Namor the Sub-Mariner (previously an antagonist), Alicia Masters, Lyja the Lazerfist, H.E.R.B.I.E., Kristoff Vernard (Doctor Doom's former protégé), Wyatt Wingfoot, Sue and Johnny's father Franklin Storm, the receptionist android Roberta, governess Agatha Harkness, and Reed and Sue's children Franklin Richards and Valeria Richards.
Several allies of the Fantastic Four have served as temporary members of the team, including Crystal, Medusa, Power Man (Luke Cage), Nova (Frankie Raye), She-Hulk, Ms. Marvel (Sharon Ventura), Ant-Man (Scott Lang), Namorita, Storm, and the Black Panther. A temporary lineup from "Fantastic Four" #347–349 (December 1990 – February 1991) consisted of the Hulk (in his "Joe Fixit" persona), Spider-Man, Wolverine, and Ghost Rider (Daniel Ketch).
Other notable characters who have been involved with the Fantastic Four include Alyssa Moy, Caledonia (Alysande Stuart of Earth-9809), Fantastic Force, the Inhumans (particularly the royal family members Black Bolt, Crystal, Medusa, Gorgon, Karnak, Triton, and Lockjaw), Reed's father Nathaniel Richards, the Silver Surfer (previously an antagonist), Thundra, postal worker Willie Lumpkin, Baxter Building landlord Walter Collins, the Thing's rivals the Yancy Street Gang and Uatu the Watcher.
Author Christopher Knowles states that Kirby's work on creations such as the Inhumans and the Black Panther served as "a showcase of some of the most radical concepts in the history of the medium".
Writers and artists over many years have created a variety of characters to challenge the Fantastic Four. Knowles states that Kirby helped to create "an army of villains whose rage and destructive power had never been seen before," and "whose primary impulse is to smash the world." Some of the team's oldest and most frequent enmities have involved such foes as the Mole Man, the Skrulls, Namor the Sub-Mariner, Doctor Doom, the Puppet Master, Kang the Conqueror/Rama-Tut/Immortus, Blastaar, the Frightful Four, Annihilus, Galactus, and Klaw. Other prominent antagonists of the Fantastic Four have included the Wizard, the Impossible Man, the Red Ghost and the Super-Apes, the Mad Thinker, the Super-Skrull, the Molecule Man, Diablo, Dragon Man, Psycho-Man, Ronan the Accuser, Salem's Seven, Terrax the Tamer, Terminus, Hyperstorm and Lucia von Bardas.
Fantastic Four Incorporated, also known as Fantastic Enterprises, is a fictional organization appearing in American comic books published by Marvel Comics. It was founded by Reed Richards to license the use of Richards' patents, and it funded the Fantastic Four's operations, serving as their source of income. Staff are:
The Fantastic Four's characterization was initially different from all other superheroes at the time. One major difference is that they do not conceal their identities, leading the public to be both suspicious and in awe of them. Also, they frequently argued and disagreed with each other, hindering their work as a team. Described as "heroes with hangups" by Stan Lee, the Thing has a temper, and the Human Torch resents being a child among adults. Mr. Fantastic blames himself for the Thing's transformation. Social scientist Bradford W. Wright describes the team as a "volatile mix of human emotions and personalities". In spite of their disagreements, they ultimately function well as a team.
The first issue of "The Fantastic Four" proved a success, igniting a new direction for superhero comics and soon influencing many other superhero comics. Readers grew fond of Ben's grumpiness, Johnny's tendency to annoy others and Reed and Sue's spats. Stan Lee was surprised at the reaction to the first issue, leading him to stay in the comics field despite previous plans to leave. Comics historian Stephen Krensky said that "Lee's natural dialogue and flawed characters appealed to 1960s kids looking to 'get real'".
As of 2005, 150 million comics featuring the Fantastic Four had been sold.
There have been four "The Fantastic Four" animated TV series and three released feature films. The Fantastic Four also guest-starred in the "Secret Wars" story arc of the 1990s "Spider-Man" animated series, and the Thing guest-starred (with a small cameo from the other Fantastic Four members) in the "Fantastic Fortitude" episode of the 1996 "The Incredible Hulk" series. The Fantastic Four also appeared in the 2010 series "".
There was a short-lived radio show in 1975 that adapted early Lee/Kirby stories and is notable for casting a pre-"Saturday Night Live" Bill Murray as the Human Torch. Also in the cast were Bob Maxwell as Reed Richards, Cynthia Adler as Sue Storm, Jim Pappas as Ben Grimm and Jerry Terheyden as Doctor Doom. Other Marvel characters featured in the series included Ant-Man, Prince Namor, Nick Fury and the Hulk. Stan Lee narrated the series and the scripts were taken almost verbatim from the comic books. The radio show was packaged into five-minute segments, with five segments comprising a complete adventure. The team appeared on the Power Records album "Fantastic Four: "The Way It Began"" book and record set, an audio dramatization of "Fantastic Four" #126.
The Fantastic Four has been the subject of four animated television series. The first, "Fantastic Four", produced by Hanna-Barbera, ran 20 episodes on ABC from September 9, 1967 to September 21, 1968. The second "Fantastic Four" series, produced by DePatie-Freleng, ran 13 episodes from September 9, 1978, to December 16, 1978; this series features a H.E.R.B.I.E. Unit in place of the Human Torch.
In 1979, the Thing was featured as half of the Saturday morning cartoon "Fred and Barney Meet the Thing". The character of the Thing received a radical make-over for the series. The title character for this program was Benjy Grimm, a teenage boy who possessed a pair of magic Thing-rings which could transform him into the Thing when he put them together and said "Thing-ring, do your thing!" The other members of the Fantastic Four do not appear in the series, nor do the animated "The Flintstones" stars Fred Flintstone and Barney Rubble, despite the title of the program.
The third "Fantastic Four" was broadcast as part of "The Marvel Action Hour" umbrella, with introductions by Stan Lee. This series ran 26 episodes from September 24, 1994 to February 24, 1996. The fourth series, "", debuted on September 2, 2006, on Cartoon Network and ran for 26 episodes.
Different Fantastic Four members appear briefly and with little or no dialogue and are mentioned various times throughout the first season of "". The most expansive appearances are in the episode "The Private War of Doctor Doom", in which the Avengers team up with the Fantastic Four to battle the titular supervillain, and in the final episode of season two, in which the groups team up to battle Galactus. The Thing becomes a member of the New Avengers in episode 23 of season 2.
The Fantastic Four appear in the "Hulk and the Agents of S.M.A.S.H." episode "Monster No More." The Agents of S.M.A.S.H. assist the Fantastic Four in thwarting the Tribbitite Invasion.
A film adaptation of the characters, "The Fantastic Four", was completed in 1994 by producer Roger Corman and starred Alex Hyde-White as Reed Richards/Mr. Fantastic, Rebecca Staab as Sue Storm-Richards/Invisible Woman, Jay Underwood as Johnny Storm/Human Torch, Michael Bailey Smith as Ben Grimm, Carl Ciarfalio as the Thing, and Joseph Culp as Victor von Doom/Doctor Doom. The film was not released to theaters or on home video, but it has since been made available through bootleg video distributors. It was made because Constantin Film owned the film rights and would have lost them if it failed to begin production by a certain deadline, a tactic known as creating an ashcan copy. According to producer Bernd Eichinger, Avi Arad had Marvel purchase the film for a few million dollars.
In 2005, the second film adaptation, "Fantastic Four" directed by Tim Story, was released by 20th Century Fox. Despite mixed reviews from critics, it earned US$155 million in North America and $330 million worldwide. The sequel, "", directed by Story and written by Don Payne, was released in 2007. Despite mixed-to-negative reviews, the sequel earned $132 million in North America and a total of $330.6 million worldwide. Both films feature Ioan Gruffudd as Reed Richards / Mr. Fantastic, Jessica Alba as Susan Storm / Invisible Woman, Chris Evans as Johnny Storm / Human Torch, Michael Chiklis as Ben Grimm / The Thing, and Julian McMahon as Victor Von Doom / Dr. Doom. Stan Lee makes cameo appearances as the mailman Willie Lumpkin in the first film and as himself in the second film.
A reboot directed by Josh Trank (also titled "Fantastic Four", but stylized as "Fant4stic") was released on August 7, 2015. The film stars Miles Teller as Reed Richards, Kate Mara as Sue Storm, Michael B. Jordan as Johnny Storm, Jamie Bell as Ben Grimm and Toby Kebbell as Doctor Doom. It is based on "Ultimate Fantastic Four". It earned poor reviews and box office results. On March 20, 2019, due to the acquisition of 21st Century Fox by Disney, the film rights of "Fantastic Four" reverted to Marvel Studios.
In July 2019 at the San Diego Comic Con, producer and head of Marvel Studios Kevin Feige, announced that a Fantastic Four film set within the Marvel Cinematic Universe is in development.
In 1985, the Fantastic Four starred in "Questprobe #3 The Fantastic Four", an adventure game from Adventure International for the Atari 8-bit series. In 1997, the group starred in the "Fantastic Four" video game. The team appeared in the "" video game, based on the 1990s Spider-Man animated series, for the Super NES and Sega Genesis. The Thing and the Human Torch appeared in the 2005 game "".
All of the Fantastic Four appear as playable characters in the game "", with Doctor Doom being the main enemy. The members of the Fantastic Four are also featured in "", although the team is separated over the course of the game, with Mister Fantastic being 'locked' into the Pro-Registration side of the game's storyline and the Thing briefly becoming unavailable to the player after he leaves America in protest of the war, until he returns to assist in preventing civilian casualties during the conflict. The Fantastic Four also return in "", this time as playable DLC (downloadable content), alongside additional members of Marvel Knights and the X-Men.
The Human Torch has an appearance in a mini-game where the player races against him in all versions of "Ultimate Spider-Man", except on the Game Boy Advance platform. The Fantastic Four star in tie-in video games based on the 2005 film "Fantastic Four", and . The Fantastic Four are also playable characters in "Marvel Heroes" and "Lego Marvel Super Heroes".
The Fantastic Four starred in their own virtual pinball game, Fantastic Four for "Pinball FX 2", released by Zen Studios.
Filtration
Filtration is a physical, biological or chemical operation that separates solid matter and fluid from a mixture with a filter medium that has a complex structure through which only the fluid can pass. Solid particles that cannot pass through the filter medium are described as oversize and the fluid that passes through is called the filtrate. Oversize particles may form a filter cake on top of the filter and may also block the filter lattice, preventing the fluid phase from crossing the filter, known as blinding. The size of the largest particles that can successfully pass through a filter is called the effective pore size of that filter. The separation of solid and fluid is imperfect; solids will be contaminated with some fluid and filtrate will contain fine particles (depending on the pore size, filter thickness and biological activity). Filtration occurs both in nature and in engineered systems; there are biological, geological, and industrial forms.
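The size cut-off described above can be sketched as a toy model: particles at or above the effective pore size are retained as filter cake, while smaller fines pass into the filtrate. This is an illustrative sketch only; the function name and the particle diameters are invented example values, and a real filter's separation is imperfect, as noted above.

```python
# Toy model of an ideal size cut-off at a filter's effective pore size.
# Oversize particles accumulate as filter cake; smaller fines pass with the filtrate.

def filter_mixture(particle_sizes_um, effective_pore_size_um):
    """Split particle diameters (micrometres) into (cake, fines)."""
    cake = [d for d in particle_sizes_um if d >= effective_pore_size_um]
    fines = [d for d in particle_sizes_um if d < effective_pore_size_um]
    return cake, fines

# Example: a filter with a 10 um effective pore size applied to a mixed suspension.
cake, fines = filter_mixture([2, 8, 15, 40, 5, 12], effective_pore_size_um=10)
print(cake)   # retained on the filter → [15, 40, 12]
print(fines)  # carried through in the filtrate → [2, 8, 5]
```

A real medium would also show blinding (pore blockage by near-size particles) and some fines trapped in the cake, which this idealised cut-off deliberately ignores.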
There are many different methods of filtration; all aim to attain the separation of substances. Separation is achieved by some form of interaction between the substance or objects to be removed and the filter. The substance that is to pass through the filter must be a fluid, i.e. a liquid or gas. Methods of filtration vary depending on the location of the targeted material, i.e. whether it is dissolved in the fluid phase or suspended as a solid.
There are several filtration techniques, depending on the desired outcome: hot, cold and vacuum filtration. Among the major purposes are the removal of impurities from a mixture and the isolation of solids from a mixture.
Hot filtration is mainly used to separate solids from a hot solution, in order to prevent crystals from forming in the filter funnel and other apparatus that comes into contact with the solution. To this end, the apparatus and the solution are kept heated, preventing the rapid decrease in temperature that would otherwise cause the solids to crystallize in the funnel and hinder the filtration process.
One of the most important measures for preventing crystal formation in the funnel, and thereby achieving effective hot filtration, is the use of a stemless filter funnel. With no stem, the surface area of contact between the solution and the funnel is reduced, preventing re-crystallization of solid in the funnel that would adversely affect the filtration process.
Cold filtration uses an ice bath to rapidly cool down the solution to be crystallized, rather than leaving it to cool slowly at room temperature. This technique results in the formation of very small crystals, as opposed to the large crystals obtained by cooling the solution slowly at room temperature.
Vacuum filtration is preferred for small batches of solution when small crystals need to be dried quickly. This method requires a Büchner funnel, filter paper of smaller diameter than the funnel, a Büchner flask, and rubber tubing to connect the flask to a vacuum source.
Two main types of filter media are employed in laboratories: a surface filter, a solid sieve which traps the solid particles, with or without the aid of filter paper (e.g. Büchner funnel, belt filter, rotary vacuum-drum filter, cross-flow filter, screen filter); and a depth filter, a bed of granular material which retains the solid particles as the fluid passes through (e.g. sand filter). The first type allows the solid particles, i.e. the residue, to be collected intact; the second type does not permit this. However, the second type is less prone to clogging because of the greater surface area over which particles can be trapped. Also, when the solid particles are very fine, it is often cheaper and easier to discard the contaminated granules than to clean the solid sieve.
Filter media can be cleaned by rinsing with solvents or detergents. Alternatively, in engineering applications, such as swimming pool water treatment plants, they may be cleaned by backwashing. Self-cleaning screen filters utilize point-of-suction backwashing to clean the screen without interrupting system flow.
Fluids flow through a filter due to a difference in pressure—fluid flows from the high-pressure side to the low-pressure side of the filter, leaving some material behind. The simplest method to achieve this is by gravity and can be seen in the coffeemaker example. In the laboratory, pressure in the form of compressed air on the feed side (or vacuum on the filtrate side) may be applied to make the filtration process faster, though this may lead to clogging or the passage of fine particles. Alternatively, the liquid may flow through the filter by the force exerted by a pump, a method commonly used in industry when a reduced filtration time is important. In this case, the filter need not be mounted vertically.
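The pressure-driven flow described above is commonly modelled by Darcy's law for laminar flow through a porous medium, Q = k·A·ΔP/(μ·L). The sketch below is a hedged illustration (the formula is standard, but all parameter values are invented for the example); it also shows why applying compressed air or vacuum speeds filtration: doubling the pressure difference doubles the flow.

```python
# Hedged sketch of Darcy's law: Q = k * A * dP / (mu * L).
# k: permeability [m^2], A: filter area [m^2], dP: pressure difference [Pa],
# mu: fluid viscosity [Pa.s], L: medium thickness [m]. Values are illustrative.
def darcy_flow(permeability_m2, area_m2, delta_p_pa, viscosity_pa_s, thickness_m):
    """Volumetric flow rate in m^3/s through a porous filter medium."""
    return permeability_m2 * area_m2 * delta_p_pa / (viscosity_pa_s * thickness_m)

q_gravity = darcy_flow(1e-12, 0.01, 50_000, 1e-3, 0.005)
q_vacuum = darcy_flow(1e-12, 0.01, 100_000, 1e-3, 0.005)
print(q_vacuum / q_gravity)  # doubling dP doubles Q: prints 2.0
```

In practice the growing filter cake adds its own resistance in series with the medium, so the flow rate falls over time even at constant pressure.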
Filter aids may be used to improve filtration. These are often incompressible diatomaceous earth (kieselguhr), which is composed primarily of silica. Also used are wood cellulose and other inert porous solids, such as the cheaper and safer perlite.
These filter aids can be used in two different ways. They can be used as a precoat before the slurry is filtered. This will prevent gelatinous-type solids from plugging the filter medium and also give a clearer filtrate. They can also be added to the slurry before filtration. This increases the porosity of the cake and reduces resistance of the cake during filtration. In a rotary filter, the filter aid may be applied as a precoat; subsequently, thin slices of this layer are sliced off with the cake.
The use of filter aids is usually limited to cases where the cake is discarded or where the precipitate can be chemically separated from the filter.
Filtration is a more efficient method for the separation of mixtures than decantation, but is much more time-consuming. If very small amounts of solution are involved, most of the solution may be soaked up by the filter medium.
An alternative to filtration is centrifugation—instead of filtering the mixture of solid and liquid particles, the mixture is centrifuged to force the (usually) denser solid to the bottom, where it often forms a firm cake. The liquid above can then be decanted. This method is especially useful for separating solids which do not filter well, such as gelatinous or fine particles. These solids can clog or pass through the filter, respectively.
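Why centrifugation works for fine or gelatinous solids can be seen from Stokes' law, which gives the settling velocity of a small sphere, v = (ρp − ρf)·a·d²/(18μ); in a centrifuge the gravitational acceleration g is replaced by the much larger centripetal acceleration ω²r. The sketch below uses invented but representative values to show the speed-up:

```python
import math

# Hedged sketch of Stokes-law settling: v = (rho_p - rho_f) * a * d^2 / (18 * mu).
# In a centrifuge, a = omega^2 * r replaces g. All numeric values are illustrative.
def settling_velocity(d_m, rho_particle, rho_fluid, viscosity_pa_s, accel_m_s2):
    """Terminal settling velocity [m/s] of a small sphere in a fluid."""
    return (rho_particle - rho_fluid) * accel_m_s2 * d_m ** 2 / (18 * viscosity_pa_s)

g = 9.81
omega = 2 * math.pi * 3000 / 60      # 3000 rpm converted to rad/s
a_centrifuge = omega ** 2 * 0.1      # centripetal acceleration at 10 cm radius

v_gravity = settling_velocity(1e-6, 2000, 1000, 1e-3, g)
v_spun = settling_velocity(1e-6, 2000, 1000, 1e-3, a_centrifuge)
print(v_spun / v_gravity)  # roughly a 1000-fold faster settling at 3000 rpm
```

A 1 μm particle that would take hours to settle under gravity alone thus settles in minutes in a bench centrifuge, forming the firm cake the text describes.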
Examples of filtration include
An experiment to prove the existence of microscopic organisms involves the comparison of water passed through unglazed porcelain and unfiltered water. When left in sealed containers the filtered water takes longer to go foul, demonstrating that very small items (such as bacteria) can be removed from fluids by filtration.
In the kidney, renal filtration is the filtration of blood in the glomerulus, followed by selective reabsorption of many substances essential for the body to maintain homeostasis.
Follies
Follies is a musical with music and lyrics by Stephen Sondheim and a book by James Goldman.
The story concerns a reunion in a crumbling Broadway theatre, scheduled for demolition, of the past performers of the "Weismann's Follies", a musical revue (based on the "Ziegfeld Follies"), that played in that theatre between the World Wars. It focuses on two couples, Buddy and Sally Durant Plummer and Benjamin and Phyllis Rogers Stone, who are attending the reunion. Sally and Phyllis were showgirls in the Follies. Both couples are deeply unhappy with their marriages. Buddy, a traveling salesman, is having an affair with a girl on the road; Sally is still as much in love with Ben as she was years ago; and Ben is so self-absorbed that Phyllis feels emotionally abandoned. Several of the former showgirls perform their old numbers, sometimes accompanied by the ghosts of their former selves. The musical numbers in the show have been interpreted as pastiches of the styles of the leading Broadway composers of the 1920s and '30s, and sometimes as parodies of specific songs.
The Broadway production opened on April 4, 1971, directed by Harold Prince and Michael Bennett, with choreography by Bennett. The musical was nominated for eleven Tony Awards and won seven. The original production, the second-most costly mounted on Broadway to that date, ran for over 500 performances but ultimately lost its entire investment. The musical has had a number of major revivals, and several of its songs have become standards, including "Broadway Baby", "I'm Still Here", "Too Many Mornings", "Could I Leave You?", and "Losing My Mind".
After the failure of "Do I Hear A Waltz?" (1965), for which he had written the lyrics to Richard Rodgers's music, Sondheim decided that he would henceforth work only on projects where he could write both the music and lyrics himself. He asked author and playwright James Goldman to join him as bookwriter for a new musical. Inspired by a "New York Times" article about a gathering of former showgirls from the Ziegfeld Follies, they decided upon a story about ex-showgirls.
Originally titled "The Girls Upstairs", the musical was originally to be produced by David Merrick and Leland Hayward in late 1967, but the plans ultimately fell through, and Stuart Ostrow became the producer, with Joseph Hardy to direct. These plans also did not work out, and finally Harold Prince, who had worked previously with Sondheim, became the producer and director. He had agreed to work on "The Girls Upstairs" if Sondheim would agree to work on "Company"; Michael Bennett, the young choreographer of "Company", was also brought onto the project. It was Prince who changed the title to "Follies"; he was "intrigued by the psychology of a reunion of old chorus dancers and loved the play on the word 'follies'".
In 1971, on the soon-to-be demolished stage of the Weismann Theatre, a reunion is being held to honor the Weismann's "Follies" shows past, and the beautiful chorus girls who performed there every year between the two World Wars. The once resplendent theatre is now little but planks and scaffolding ("Prologue"/"Overture"). As the ghosts of the young showgirls slowly drift through the theatre, a majordomo enters with his entourage of waiters and waitresses. They pass through the spectral showgirls without seeing them.
Sally Durant Plummer, "blond, petite, sweet-faced" and at 49 "still remarkably like the girl she was thirty years ago", a former Weismann girl, is the first guest to arrive; her ghostly youthful counterpart moves towards her. Phyllis Rogers Stone, a stylish and elegant woman, also arrives with her husband Ben, a renowned philanthropist and politician. As their younger counterparts approach them, Phyllis comments to Ben about their past. He feigns a lack of interest; there is an underlying tension in their relationship. As more guests arrive, Sally's husband, Buddy, enters. He is a salesman, in his early 50s, appealing and lively, whose smiles cover inner disappointment.
Finally, Weismann enters to greet his guests. Roscoe, the old master of ceremonies, introduces the former showgirls ("Beautiful Girls"). Former Weismann performers at the reunion include Max and Stella Deems, who lost their radio jobs and became store owners in Miami; Solange La Fitte, a coquette, who is vibrant and flirtatious even at 66; Hattie Walker, who has outlived five younger husbands; Vincent and Vanessa, former dancers who now own an Arthur Murray franchise; Heidi Schiller, for whom Franz Lehár once wrote a waltz (or was it Oscar Straus? Facts never interest her; what matters is the song!); and Carlotta Campion, a film star who has embraced life and benefited from every experience.
As the guests reminisce, the stories of Ben, Phyllis, Buddy, and Sally unfold. Phyllis and Sally were roommates while in the Follies, and Ben and Buddy were best friends at school in New York. When Sally sees Ben, her former lover, she greets him self-consciously ("Don't Look at Me"). Buddy and Phyllis join their spouses and the foursome reminisces about the old days of their courtship and the theatre, their memories vividly coming to life in the apparitions of their young counterparts ("Waiting For The Girls Upstairs"). Each of the four is shaken at the realization of how life has changed them. Elsewhere, Willy Wheeler (portly, in his sixties) cartwheels for a photographer. Emily and Theodore Whitman, ex-vaudevillians in their seventies, perform an old routine ("The Rain on the Roof"). Solange proves she is still fashionable at what she claims is 66 ("Ah, Paris!"), and Hattie Walker performs her old showstopping number ("Broadway Baby").
Buddy warns Phyllis that Sally is still in love with Ben, and she is shaken by how the past threatens to repeat itself. Sally is awed by Ben's apparently glamorous life, but Ben wonders if he made the right choices and considers how things might have been ("The Road You Didn't Take"). Sally tells Ben how her days have been spent with Buddy, trying to convince him (and herself) ("In Buddy’s Eyes"). But it is clear that Sally is still in love with Ben – even though their affair ended badly when Ben decided to marry Phyllis. She shakes loose from the memory and begins to dance with Ben, who is touched by the memory of the Sally he once cast aside.
Phyllis interrupts this tender moment and has a biting encounter with Sally. Before she has a chance to really let loose, they are both called on to participate in another performance – Stella Deems and the ex-chorines line up to perform an old number ("Who's That Woman?"), as they are mirrored by their younger selves. Afterward, Phyllis and Ben angrily discuss their lives and relationship, which has become numb and emotionless. Sally is bitter and has never been happy with Buddy, although he has always adored her. She accuses him of having affairs while he is on the road, and he admits he has a steady girlfriend, Margie, in another town, but always returns home. Carlotta amuses a throng of admirers with a tale of how her dramatic solo was cut from the Follies because the audience found it humorous, transforming it as she sings it into a toast to her own hard-won survival ("I'm Still Here").
Ben confides to Sally that his life is empty. She yearns for him to hold her, but young Sally slips between them and the three move together ("Too Many Mornings"). Ben, caught in the passion of memories, kisses Sally as Buddy watches from the shadows. Sally thinks this is a sign that the two will finally get married, and Ben is about to protest until Sally interrupts him with a kiss and runs off to gather her things, thinking that the two will leave together. Buddy leaves the shadows furious, and fantasizes about the girl he should have married, Margie, who loves him and makes him feel like "a somebody", but bitterly concludes he does not love her back ("The Right Girl"). He tells Sally that he's done, but she is lost in a fantasy world, and tells him that Ben has asked her to marry him. Buddy tells her she must be either crazy or drunk, but he's already supported Sally through rehab clinics and mental hospitals and cannot take any more. Ben drunkenly propositions Carlotta, with whom he once had a fling, but she has a young lover and coolly turns him down. Heidi Schiller, joined by her younger counterpart, performs "One More Kiss", her aged voice a stark contrast to the sparkling coloratura of her younger self. Phyllis kisses a waiter and confesses to him that she had always wanted a son. She then tells Ben that their marriage can't continue the way it has been. Ben replies by saying that he wants a divorce, and Phyllis assumes the request is due to his love for Sally. Ben denies this, but still wants Phyllis out. Angry and hurt, Phyllis considers whether to grant his request ("Could I Leave You?").
Phyllis begins wondering at her younger self, who worked so hard to become the socialite that Ben needed. Ben yells at his younger self for not appreciating all the work that Phyllis did. Both Buddys enter to confront the Bens about how they stole Sally. Sally and her younger self enter and Ben firmly tells Sally that he never loved her. All the voices begin speaking and yelling at each other. Suddenly, at the peak of madness and confusion, the couples are engulfed by their follies, which transform the rundown theatre into a fantastical "Loveland", an extravaganza even more grand and opulent than the gaudiest Weismann confection: "the place where lovers are always young and beautiful, and everyone lives only for love". Sally, Phyllis, Ben, and Buddy show their "real and emotional lives" in "a sort of group nervous breakdown."
What follows is a series of musical numbers performed by the principal characters, each exploring their biggest desires. The two younger couples sing in a counterpoint of their hopes for the future ("You're Gonna Love Tomorrow/Love Will See Us Through"). Buddy then appears, dressed in "plaid baggy pants, garish jacket, and a shiny derby hat", and performs a high-energy vaudeville routine depicting how he is caught between his love for Sally and Margie's love for him ("The God-Why-Don't-You-Love-Me Blues"). Sally appears next, dressed as a torch singer, singing of her passion for Ben from the past - and her obsession with him now ("Losing My Mind"). In a jazzy dance number, accompanied by a squadron of chorus boys, Phyllis reflects on the two sides of her personality, one naive and passionate and the other jaded and sophisticated and her desire to combine them ("The Story of Lucy and Jessie"). Resplendent in top hat and tails, Ben begins to offer his devil-may-care philosophy ("Live, Laugh, Love"), but stumbles and anxiously calls to the conductor for the lyrics, as he frantically tries to keep going. Ben becomes frenzied, while the dancing ensemble continues as if nothing was wrong. Amidst a deafening discord, Ben screams at all the figures from his past and collapses as he cries out for Phyllis.
"Loveland" has dissolved back into the reality of the crumbling and half-demolished theatre; dawn is approaching. Ben admits to Phyllis his admiration for her, and Phyllis shushes him and helps Ben regain his dignity before they leave. After exiting, Buddy escorts the emotionally devastated Sally back to their hotel with the promise to work things out later. Their ghostly younger selves appear, watching them go. The younger Ben and Buddy softly call to their "girls upstairs", and the Follies end.
Source: "Follies" Score
≠ Some productions substitute "Ah, But Underneath" when the actress portraying Phyllis is not primarily a dancer.
≠≠ Omitted from some productions
"Note: this is the original song list from the original Broadway production in 1971. Variations are discussed in Versions"
Songs cut prior to the Broadway premiere include: "All Things Bright and Beautiful" (used in the prologue), "Can That Boy Foxtrot!", "Who Could Be Blue?", "Little White House", "So Many People", "It Wasn't Meant to Happen", "Pleasant Little Kingdom", and "Uptown Downtown". The musical numbers "Ah, But Underneath" (replacing "The Story of Lucy and Jessie"), "Country House", "Make the Most of Your Music" (replacing "Live, Laugh, Love"), "Social Dancing" and a new version of "Loveland" have been incorporated into various productions.
Hal Prince said: ""Follies" examines obsessive behavior, neurosis and self-indulgence more microscopically than anything I know of." Bernadette Peters quoted Sondheim on the character of "Sally": "He said early on that [Sally] is off balance, to put it mildly. He thinks she’s very neurotic, and she is very neurotic, so he said to me, 'Congratulations. She’s crazy.'" Martin Gottfried wrote: "The concept behind "Follies" is theater nostalgia, representing the rose-colored glasses through which we face the fact of age ... the show is conceived in ghostliness. At its very start, ghosts of Follies showgirls stalk the stage, mythic giants in winged, feathered, black and white opulence. Similarly, ghosts of the Twenties shows slip through the evening as the characters try desperately to regain their youth through re-creations of their performances and inane theater sentiments of their past."
Joanne Gordon, author and Chair and Artistic Director, Theatre, at California State University, Long Beach, wrote ""Follies" is in part an affectionate look at the American musical theater between the two World Wars and provides Sondheim with an opportunity to use the traditional conventions of the genre to reveal the hollowness and falsity of his characters' dreams and illusions. The emotional high generated by the reunion of the Follies girls ultimately gives way to anger, disappointment, and a weary resignation to reality." ""Follies" contains two scores: the Follies pastiche numbers and the book numbers." Some of the Follies numbers imitate the style of particular composers of the early 20th century: "Losing My Mind" is in the style of a George Gershwin ballad "The Man I Love". Sondheim noted that the song "The God-Why-Don't-You-Love-Me Blues" is "another generic pastiche: vaudeville music for chases and low comics, but with a patter lyric...I tried to give it the sardonic knowingness of Lorenz Hart or Frank Loesser."
"Loveland", the final musical sequence, (that "consumed the last half-hour of the original" production) is akin to an imaginary 1941 Ziegfeld Follies sequence, with Sally, Phyllis, Ben and Buddy performing "like comics and torch singers from a Broadway of yore." "Loveland" features a string of vaudeville-style numbers, reflecting the leading characters' emotional problems, before returning to the theatre for the end of the reunion party. The four characters are "whisked into a dream show in which each acts out his or her own principal 'folly'".
Goldman continued to revise the book of the musical right up to his death, which occurred shortly before the 1998 Paper Mill Playhouse production. Sondheim, too, has added and removed songs that he judged to be problematic in various productions. Ted Chapin explains: "Today, "Follies" is rarely performed twice in exactly the same version. James Goldman's widow made the observation that the show has morphed throughout its entire life...The London production had new songs and dialogue. The Paper Mill Playhouse production used some elements from London but stayed close to the original. The 2001 Roundabout Broadway revival, the first major production following Goldman's death in 1998, was again a combination of previous versions."
Major changes were made for the original production in London, which attempted to establish a lighter tone and favored a happier ending than the original Broadway production. According to Joanne Gordon, "When "Follies" opened in London...it had an entirely different, and significantly more optimistic, tone. Goldman's revised book offered some small improvements over the original."
According to Sondheim, the producer Cameron Mackintosh asked for changes for the 1987 London production. "I was reluctantly happy to comply, my only serious balk being at his request that I cut "The Road You Didn't Take" ... I saw no reason not to try new things, knowing we could always revert to the original (which we eventually did). The net result was four new songs...For reasons which I've forgotten, I rewrote "Loveland" for the London production. There were only four showgirls in this version, and each one carried a shepherd's crook with a letter of the alphabet on it."
The musical was written in one act; the original director, Prince, did not want an intermission, while the co-director, Bennett, wanted two acts. It was originally performed in one act. The 1987 West End, 2005 Barrington Stage Company, 2001 Broadway revival and 2011 Kennedy Center productions were performed in two acts. However, the August 23, 2011, Broadway preview performance was played without an intermission; by opening night, the 2011 Broadway revival was performed with an intermission, in two acts. The 2017 National Theatre production is performed without an interval.
"Follies" had its pre-Broadway tryout at the Colonial Theatre, Boston, from February 20 through March 20, 1971.
"Follies" premiered on Broadway on April 4, 1971 at the Winter Garden Theatre. It was directed by Harold Prince and Michael Bennett, with choreography by Bennett, scenic design by Boris Aronson, costumes by Florence Klotz, and lighting by Tharon Musser. It starred Alexis Smith (Phyllis), John McMartin (Ben), Dorothy Collins (Sally), Gene Nelson (Buddy), along with several veterans of the Broadway and vaudeville stage. The supporting role of Carlotta was created by Yvonne De Carlo, and usually is given to a well-known veteran performer who can belt out a song. Other notable performers in the original productions were: Fifi D'Orsay as Solange LaFitte, Justine Johnston as Heidi Schiller, Mary McCarty as Stella Deems, Arnold Moss as Dimitri Weismann, Ethel Shutta as Hattie Walker, and Marcie Stringer and Charles Welch as Emily and Theodore Whitman.
The show closed on July 1, 1972 after 522 performances and 12 previews. According to "Variety", the production was a "total financial failure, with a cumulative loss of $792,000." Prince planned to present the musical on the West Coast and then on a national tour. However, the show did not do well in its Los Angeles engagement and plans for a tour ended.
Frank Rich, for many years the chief drama critic for "The New York Times", had first garnered attention, while an undergraduate at Harvard University, with a lengthy essay for the "Harvard Crimson" about the show, which he had seen during its pre-Broadway run in Boston. He predicted that the show eventually would achieve recognition as a Broadway classic. Rich later wrote that audiences at the original production were baffled and restless.
For commercial reasons, the cast album was cut from two LPs to one early in production. Most songs were therefore heavily abridged and several were left entirely unrecorded. According to Craig Zadan, "It's generally felt that ... Prince made a mistake by giving the recording rights of "Follies" to Capitol Records, which in order to squeeze the unusually long score onto one disc, mutilated the songs by condensing some and omitting others." Chapin confirms this: "Alas ... final word came from Capitol that they would not go for two records... [Dick Jones] now had to propose cuts throughout the score in consultation with Steve." "One More Kiss" was omitted from the final release but was restored for CD release. Chapin relates that "there was one song that Dick Jones [producer of the cast album] didn't want to include on the album but which Steve Sondheim most definitely did. The song was "One More Kiss", and the compromise was that if there was time, it would be recorded, even if Jones couldn't promise it would end up on the album. (It did get recorded but didn't make its way onto the album until the CD reissue years later.)"
The musical was produced at The Muny, St. Louis, Missouri, in July 1972 and then transferred to the Shubert Theatre, Century City, California, running from July 22, 1972 through October 1, 1972. It was directed by Prince and starred Dorothy Collins (Sally; replaced by Janet Blair), Alexis Smith (Phyllis), John McMartin (Ben; replaced by Edward Winter), Gene Nelson (Buddy), and Yvonne De Carlo (Carlotta), reprising their original roles. The production was the premiere attraction at the newly constructed 1,800-seat theatre, which, coincidentally, was itself razed thirty years later (in 2002, to make way for a new office building), mirroring the fate of the theatre in the "Follies" plot.
A full production ran at the Forum Theatre, Wythenshawe, England, from 30 April 1985, directed by Howard Lloyd-Lewis, design by Chris Kinman, costumes by Charles Cusick-Smith, lighting by Tim Wratten, musical direction by Simon Lowe, and choreographed by Paul Kerryson. The cast included Mary Millar (Sally Durant Plummer), Liz Izen (Young Sally), Meg Johnson (Stella Deems), Les Want (Max Deems), Betty Benfield (Heidi Schiller), Joseph Powell (Roscoe), Chili Bouchier (Hattie Walker), Shirley Greenwood (Emily Whitman), Bryan Burdon (Theodore Whitman), Monica Dell (Solange LaFitte), Jeannie Harris (Carlotta Campion), Josephine Blake (Phyllis Rogers Stone), Kevin Colson (Ben), Debbie Snook (Young Phyllis), Stephen Hale (Young Ben), Bill Bradley (Buddy Plummer), Paul Burton (Young Buddy), David Scase (Dimitri Weismann), Mitch Sebastian (Young Vincent), Kim Ismay (Young Vanessa), Lorraine Croft (Young Stella), and Meryl Richardson (Young Heidi).
A staged concert at Avery Fisher Hall, Lincoln Center, was performed on September 6 and 7, 1985. The concert starred Barbara Cook (Sally), George Hearn (Ben), Mandy Patinkin (Buddy), and Lee Remick (Phyllis), and featured Carol Burnett (Carlotta), Betty Comden (Emily), Adolph Green (Theodore), Liliane Montevecchi (Solange LaFitte), Elaine Stritch (Hattie Walker), Phyllis Newman (Stella Deems), Jim Walton (Young Buddy), Howard McGillin (Young Ben), Liz Callaway (Young Sally), Daisy Prince (Young Phyllis), Andre Gregory (Dmitri), Arthur Rubin (Roscoe), and Licia Albanese (Heidi Schiller). Rich, in his review, noted that "As performed at Avery Fisher Hall, the score emerged as an original whole, in which the 'modern' music and mock vintage tunes constantly comment on each other, much as the script's action unfolds simultaneously in 1971 (the year of the reunion) and 1941 (the year the Follies disbanded)."
Among the reasons the concert was staged was to provide an opportunity to record the entire score. The resulting album was more complete than the original cast album. However, director Herbert Ross took some liberties in adapting the book and score for the concert format—dance music was changed, songs were given false endings, new dialogue was spoken, reprises were added, and Patinkin was allowed to sing "The God-Why-Don't-You-Love-Me Blues" as a solo instead of a trio with two chorus girls. Portions of the concert were seen by audiences worldwide in the televised documentary about the making of the concert, also released on videotape and DVD, of" 'Follies' in Concert".
The musical played in the West End at the Shaftesbury Theatre on July 21, 1987 and closed on February 4, 1989 after 644 performances. The producer was Cameron Mackintosh, direction was by Mike Ockrent, with choreography by Bob Avian and design by Maria Björnson. The cast featured Diana Rigg (Phyllis), Daniel Massey (Ben), Julia McKenzie (Sally), David Healy (Buddy), Lynda Baron, Leonard Sachs, Maria Charles, Pearl Carr & Teddy Johnson. Dolores Gray was praised as Carlotta, continuing to perform after breaking her ankle, although in a reduced version of the part. During the run, Eartha Kitt replaced Gray, sparking somewhat of a comeback (she went on to perform her own one woman show at The Shaftesbury Theatre to sell-out houses for three weeks from 18 March 1989 after "Follies" closed). Other cast replacements included Millicent Martin as Phyllis. Julia McKenzie returned to the production for the final four performances.
The book "was extensively reworked by James Goldman, with Sondheim's cooperation and also given an intermission." The producer Cameron Mackintosh did not like "that there was no change in the characters from beginning to end... In the London production ... the characters come to understand each other." Sondheim "did not think the London script was as good as the original." However, he thought that it was "wonderful" that, at the end of the first act, "the principal characters recognized their younger selves and were able to acknowledge them throughout the last thirty minutes of the piece." Sondheim wrote four new songs: "Country House" (replacing "The Road You Didn't Take"), "Loveland" (replacing the song of the same title), "Ah, But Underneath" (replacing "The Story of Lucy and Jessie", for the non-dancer Diana Rigg), and "Make the Most of Your Music" (replacing "Live, Laugh, Love").
Critics who had seen the production in New York (such as Frank Rich) found it substantially more "upbeat" and lacking in the atmosphere it had originally possessed. According to the Associated Press (AP) reviewer, "A revised version of the Broadway hit "Follies" received a standing ovation from its opening-night audience and raves from British critics, who said the show was worth a 16-year wait." The AP quoted Michael Coveney of "The Financial Times", who wrote: ""Follies" is a great deal more than a camp love-in for old burlesque buffs and Sondheim aficionados." In "The New York Times" the critic Francis X. Clines wrote: "The initial critics' reviews ranged from unqualified raves to some doubts whether the reworked book of James Goldman is up to the inventiveness of Sondheim's songs. 'A truly fantastic evening,' "The Financial Times" concluded, while the London "Daily News" said, 'The musical is inspired,' and "The Times" described the evening as 'a wonderful idea for a show which has failed to grow into a story.'" The "Times" critic, Irving Wardle, also said, "It is not much of a story, and whatever possibilities it may have had in theory are scuppered by James Goldman’s book … a blend of lifeless small-talk, bitching and dreadful gags". Clines further commented: "In part, the show is a tribute to musical stage history, in which the 57-year-old Mr. Sondheim is steeped, for he first learned song writing at the knee of Oscar Hammerstein II and became the acknowledged master songwriter who bridged past musical stage romance into the modern musical era of irony and neurosis. "Follies" is a blend of both, and the new production is rounded out with production numbers celebrating love's simple hope for young lovers, its extravagant fantasies for Ziegfeld aficionados, and its fresh lesson for the graying principals."
This production was also recorded on two CDs and was the first full recording.
"Follies" was voted ninth in a BBC Radio 2 listener poll of the UK's "Nation's Number One Essential Musicals."
Michigan Opera Theatre (MOT) was the first major American opera company to present "Follies" as part of their main stage repertoire, running from October 21, 1988 through November 6. The MOT production starred Nancy Dussault (Sally), John-Charles Kelly (Buddy), Juliet Prowse (Phyllis) and Ron Raines (Ben), Edie Adams (Carlotta), Thelma Lee (Hattie), and Dennis Grimaldi (Vincent).
A production also ran from March to April 1995 at the Theatre Under the Stars, Houston, Texas and in April to May 1995 at the 5th Avenue Theatre, Seattle with Constance Towers (Phyllis), Judy Kaye (Sally), Edie Adams, Denise Darcel, Virginia Mayo and Karen Morrow (Carlotta). The 1998 Paper Mill Playhouse production (Millburn, New Jersey) was directed by Robert Johanson with choreography by Jerry Mitchell and starred Donna McKechnie (Sally), Dee Hoty (Phyllis), Laurence Guittard (Ben), Tony Roberts (Buddy), Kaye Ballard (Hattie ), Eddie Bracken (Weismann), and Ann Miller (Carlotta). Phyllis Newman and Liliane Montevecchi reprised the roles they played in the Lincoln Center production. "Ah, But Underneath" was substituted for "The Story of Lucy and Jessie" in order to accommodate non-dancer Hoty. This production received a full-length recording on two CDs, including not only the entire score as originally written, but a lengthy appendix of songs cut from the original production in tryouts.
Julianne Boyd directed a fully staged version of "Follies" in 2005 by the Barrington Stage Company (Massachusetts) in June–July 2005. Principal cast included Kim Crosby (Sally), Leslie Denniston (Phyllis), Jeff McCarthy (Ben), Lara Teeter (Buddy), Joy Franz (Solange), Marni Nixon (Heidi), and Donna McKechnie (Carlotta). Stephen Sondheim attended one of the performances.
The Dublin concert was held in May 1996 at the National Concert Hall. Directed by Michael Scott, the cast included Lorna Luft, Millicent Martin, Mary Millar, Dave Willetts, Trevor Jones, Bryan Smyth, Alex Sharpe, Christine Scarry, Aidan Conway and Enda Markey.
A concert was held at Theatre Royal, Drury Lane, London, on December 8, 1996, and broadcast on BBC Radio 2 on February 15, 1997. The cast starred Julia McKenzie (Sally), Donna McKechnie (Phyllis), Denis Quilley (Ben) and Ron Moody (Buddy). This show recreated the original Broadway score.
"Follies" was performed in concert at the Sydney Opera House with the Sydney Symphony Orchestra in February 1998 as the highlight of the Sydney Gay and Lesbian Mardi Gras and had three performances. It was directed and staged by Stephen Lloyd Helper and produced by Helper and Alistair Thomson for Mardi Gras. It starred Toni Lamond (Sally), Jill Perryman (Carlotta), Judi Connelli (Phyllis), Terence Donovan (Ben), Nancye Hayes (Hattie), Glenn Butcher (Buddy), Ron Haddrick (Dimitri), Susan Johnston (Heidi), and Leonie Page, Maree Johnson, Mitchell Butel and Maureen Howard. The Sydney Symphony was conducted by Tommy Tycho. It followed a similar presentation at the 1995 Melbourne Festival of Arts with a different cast and orchestra.
A Broadway revival opened at the Belasco Theatre on April 5, 2001, and closed on July 14, 2001, after 117 performances and 32 previews. This Roundabout Theatre limited engagement had been expected to close on September 30, 2001. Directed by Matthew Warchus with choreography by Kathleen Marshall, it starred Blythe Danner (Phyllis), Judith Ivey (Sally), Treat Williams (Buddy), Gregory Harrison (Ben), Marge Champion, Polly Bergen (Carlotta), Joan Roberts (the original Laurey in the Broadway production of "Oklahoma!"; later replaced by Marni Nixon), Larry Raiken (Roscoe) and an assortment of famous names from the past. Former MGM and onetime Broadway star Betty Garrett, best known to younger audiences for her television work, played Hattie. The production was significantly stripped down (earlier productions had featured extravagant sets and costumes) and was not a success critically.
According to an article in "The Hollywood Reporter", "almost every performance of the show played to a full house, more often than not to standing-room-only. Tickets always were tough to come by. The reason the final curtain came down Saturday was because, being a production by the Roundabout Theatre Company – a subscription-based 'not-for-profit' theater company – it was presented under special Equity terms, with its actors paid a minimal fee. To extend the show, it would have been necessary to negotiate new contracts with the entire company ... because of the Belasco's limited seating, it wasn't deemed financially feasible to do so."
Theatre writer and historian John Kenrick wrote, "the bad news is that this "Follies" is a dramatic and conceptual failure. The good news is that it also features some of the most exciting musical moments Broadway has seen in several seasons. Since you don't get those moments from the production, the book or the leads, that leaves the featured ensemble, and in "Follies" that amounts to a small army. ... Marge Champion and Donald Saddler are endearing as the old hoofers. ... I dare you not to fall in love with Betty Garrett's understated "Broadway Baby" – you just want to pick her up and hug her. Polly Bergen stops everything cold with "I’m Still Here," bringing a rare degree of introspection to a song that is too often a mere belt-fest... [T]he emotional highpoint comes when Joan Roberts sings 'One More Kiss'."
A production was mounted at London's Royal Festival Hall in a limited engagement. After previews from August 3, 2002, it opened officially on August 6, and closed on August 31, 2002. Paul Kerryson directed, and the cast starred David Durham as Ben, Kathryn Evans as Sally, Louise Gold as Phyllis, Julia Goss as Heidi and Henry Goodman as Buddy. Variety singer and performer Joan Savage sang "Broadway Baby". This production conducted by Julian Kelly featured the original Broadway score.
"Follies" was presented as a staged concert in L.A.'s Reprise series at the Wadsworth Theatre, running from June 15 to June 23, 2002. The production was directed by Arthur Allan Seidelman, with set design by Ray Klausen, lighting design by Tom Ruzika, costumes by Randy Gardell, sound design by Philip G. Allen, choreography by Kay Cole, and musical direction by Gerald Sternbach.
The production starred Bob Gunton (Ben), Warren Berlinger (Dimitri Weismann), Patty Duke (Phyllis), Vikki Carr (Sally), Harry Groener (Buddy), Carole Cook (Hattie), Carol Lawrence (Vanessa), Ken Page (Roscoe), Liz Torres (Stella), Amanda McBroom (Solange), Grover Dale (Vincent), Donna McKechnie (Carlotta), Carole Swarbrick (Christine), Stella Stevens (Dee Dee), Mary Jo Catlett (Emily), Justine Johnston (Heidi), Jean Louisa Kelly (Young Sally), Austin Miller (Young Buddy), Tia Riebling (Young Phyllis), Kevin Earley (Young Ben), Abby Feldman (Young Stella), Barbara Chiofalo (Young Heidi), Trevor Brackney (Young Vincent), Melissa Driscoll (Young Vanessa), Stephen Reed (Kevin), and Billy Barnes (Theodore). Hal Linden was originally going to play Ben, but left because he was cast in the Broadway revival of "Cabaret" as Herr Schultz. Tom Bosley was also originally cast as Dimitri Weismann.
New York City Center's Encores! "Great American Musicals in Concert" series featured "Follies" as its 40th production for six performances in February 2007 in a sold-out, semi-staged concert. The cast starred Donna Murphy (Phyllis), Victoria Clark (Sally), Victor Garber (Ben) and Michael McGrath (Buddy). Christine Baranski played Carlotta, and Lucine Amara sang Heidi. The cast also included Anne Rogers, Jo Anne Worley and Philip Bosco. The director and choreographer was Casey Nicholaw. This production used the original text and the "Loveland" lyrics performed in the 1987 London production.
The Kennedy Center for the Performing Arts production at the Eisenhower Theatre started previews on May 7, 2011, with an official opening on May 21, and closed on June 19, 2011. The cast starred Bernadette Peters as Sally, Jan Maxwell as Phyllis, Elaine Paige as Carlotta, Linda Lavin as Hattie, Ron Raines as Ben and Danny Burstein as Buddy. The production was directed by Eric Schaeffer, with choreography by Warren Carlyle, costumes by Gregg Barnes, set by Derek McLane and lighting by Natasha Katz. Also featured were Rosalind Elias as Heidi, Régine as Solange, Susan Watson as Emily, and Terri White as Stella. The budget was reported to be $7.3 million. The production played to 95% capacity.
Reviews were mixed, with Ben Brantley of "The New York Times" writing, "It wasn't until the second act that I fell in love all over again with "Follies"". Peter Marks of "The Washington Post" wrote that the revival "takes an audience halfway to paradise." He praised a "broodingly luminous Jan Maxwell" and Burstein's "hapless onetime stage-door Johnny", as well as "the show's final 20 minutes, when we ascend with the main characters into an ironic vaudeville dreamscape of assorted neuroses - the most intoxicating articulation of the musical's 'Loveland' sequence that I've ever seen." "Variety" gave a very favorable review to the "lavish and entirely satisfying production", saying that Schaeffer directs "in methodical fashion, building progressively to a crescendo exactly as Sondheim does with so many of his stirring melodies. Several show-stopping routines are provided by choreographer Warren Carlyle." Terry Teachout of the "Wall Street Journal" noted that "One of the signal achievements of this "Follies" is that it succeeds in untangling each and every strand of the show's knotty plot... Mr. Schaeffer is clearly unafraid of the darkness of "Follies", so much so that the first act is bitter enough to sting. Yet he and Warren Carlyle ... just as clearly revel in the richness of the knowing pastiche songs with which Mr. Sondheim evokes the popular music of the prerock era."
The production transferred to Broadway at the Marquis Theatre in a limited engagement starting previews on August 7, 2011, with the official opening on September 12, and closing on January 22, 2012 after 151 performances and 38 previews. The four principal performers reprised their roles, as well as Paige as Carlotta. Jayne Houdyshell as Hattie, Mary Beth Peil as Solange LaFitte, and Don Correia as Theodore joined the Broadway cast. A two-disc cast album of this production was recorded by PS Classics and was released on November 29, 2011.
Brantley reviewed the Broadway revival for "The New York Times", writing: "Somewhere along the road from Washington to Broadway, the Kennedy Center production of "Follies" picked up a pulse. ... I am happy to report that since then, Ms. Peters has connected with her inner frump, Mr. Raines has found the brittle skeleton within his solid flesh, and Ms. Maxwell and Mr. Burstein have only improved. Two new additions to the cast, Jayne Houdyshell and Mary Beth Peil, are terrific. This production has taken on the glint of crystalline sharpness." The production's run was extended, and its grosses exceeded expectations, but it did not recoup its investment.
The Broadway production won the Drama League Award, Distinguished Production of a Musical Revival for 2011-12 and the Drama Desk Award for Outstanding Revival of a Musical, Outstanding Actor in a Musical (Burstein) and Outstanding Costume Design (Barnes). Out of seven Tony Award nominations, including Best Revival of a Musical, it won only one, for Barnes' costumes.
The 2011 Broadway and Kennedy Center production transferred to the Ahmanson Theatre, Los Angeles, California, in a limited engagement, from May 3, 2012 through June 9. The majority of the Broadway cast reprised their roles, with the exception of Bernadette Peters, who had prior concert commitments and was replaced by Victoria Clark in the role of Sally, a role Clark had previously played in New York. Other new cast members included Carol Neblett as Heidi, Sammy Williams as Theodore and Obba Babatunde as Max.
For its first production in France, "Follies" was presented at the Toulon Opera House in March 2013. This English-language production, using the full original orchestration, was directed by Olivier Bénézech and conducted by David Charles Abell. The cast featured Charlotte Page (Sally), Liz Robertson (Phyllis), Graham Bickley (Ben), Jérôme Pradon (Buddy), Nicole Croisille (Carlotta), Julia Sutton (Hattie) and Fra Fee (Young Buddy).
A concert version was staged at the Melbourne Recital Centre with a full 23-piece orchestra and Australian actors Philip Quast (Ben), David Hobson (Buddy), Lisa McCune (Sally), Anne Wood (Phyllis), Rowan Witt (Young Buddy), Sophie Wright (Young Sally), Nancye Hayes (Hattie), Debra Byrne (Carlotta), and Queenie van de Zandt (Stella). The production was directed by Tyran Parke and produced by StoreyBoard Entertainment.
A London revival was performed in the Olivier Theatre at the National Theatre from 22 August until 4 November 2017, later extended to 3 January 2018. The production was directed by Dominic Cooke, choreographed by Bill Deamer and starred Peter Forbes as Buddy, Imelda Staunton as Sally, Janie Dee as Phyllis, Philip Quast as Ben and Tracie Bennett as Carlotta. This production notably returned to the show's original one-act format. The production was broadcast live to cinemas worldwide on 16 November through the National Theatre Live programme.
The production returned to the Olivier Theatre on 14 February 2019, playing until 11 May. Janie Dee and Peter Forbes returned as Phyllis and Buddy, while Joanna Riding and Alexander Hanson replaced Staunton and Quast as Sally and Ben. Bennett also reprised her Olivier-nominated performance. A recording of the National Theatre production was released on 18 January 2019.
The 2017 production was nominated for 10 Laurence Olivier Awards, winning two: Best Musical Revival and Best Costume Design (Vicki Mortimer).
The characters and original cast:
In the foreword to "Everything Was Possible", Frank Rich wrote: "From the start, critics have been divided about "Follies", passionately pro or con but rarely on the fence... Is it really a great musical, or merely the greatest of all cult musicals?" (Chapin, p. xi) Ted Chapin wrote, "Taken as a whole, the collection of reviews "Follies" received was as rangy as possible." (Chapin, p. 300) In his "The New York Times" review of the original Broadway production, Clive Barnes wrote: "...it is stylish, innovative, it has some of the best lyrics I have ever encountered, and above all it is a serious attempt to deal with the musical form." Barnes also called the story shallow and Sondheim's words a joy "...even when his music sends shivers of indifference up your spine."
Walter Kerr wrote in "The New York Times" about the original production: ""Follies" is intermissionless and exhausting, an extravaganza that becomes so tedious... because its extravaganzas have nothing to do with its pebble of a plot." On the other hand, Martin Gottfried wrote: ""Follies" is truly awesome and, if it is not consistently good, it is always great."
"Time Magazine" wrote about the original Broadway production: "At its worst moments, "Follies" is mannered and pretentious, overreaching for Significance. At its best moments—and there are many—it is the most imaginative and original new musical that Broadway has seen in years."
Frank Rich, in reviewing the 1985 concert in "The New York Times", wrote: "Friday's performance made the case that this Broadway musical... can take its place among our musical theater's very finest achievements." Ben Brantley, reviewing the 1998 Paper Mill Playhouse production in "The New York Times", concluded that it was a "...fine, heartfelt production, which confirms "Follies" as a landmark musical and a work of art..."
The "Time Magazine" reviewer wrote of the 2001 Broadway revival: "Even in its more modest incarnation, "Follies" has, no question, the best score on Broadway." He noted, though, that "I'm sorry the cast was reduced from 52 to 38, the orchestra from 26 players to 14...To appreciate the revival, you must buy into James Goldman's book, which is peddling a panoramically bleak take on marriage." Finally, he wrote: "But "Follies" never makes fun of the honorable musical tradition to which it belongs. The show and the score have a double vision: simultaneously squinting at the messes people make of their lives and wide-eyed at the lingering grace and lift of the music they want to hear. Sondheim's songs aren't parodies or deconstructions; they are evocations that recognize the power of a love song. In 1971 or 2001, "Follies" validates the legend that a Broadway show can be an event worth dressing up for."
Brantley, reviewing the 2007 Encores! concert for "The New York Times", wrote: "I have never felt the splendid sadness of "Follies" as acutely as I did watching the emotionally transparent concert production...At almost any moment, to look at the faces of any of the principal performers...is to be aware of people both bewitched and wounded by the contemplation of who they used to be. When they sing, in voices layered with ambivalence and anger and longing, it is clear that it is their past selves whom they are serenading."
There have been six recordings of "Follies" released: the original 1971 Broadway cast album; "Follies in Concert", Avery Fisher Hall (1985); the original London production (1987); and the Paper Mill Playhouse (1998). The cast recording of the 2011 Broadway revival, by PS Classics, was officially released on November 29, 2011, and also was in pre-sale prior to the store release. PS Classics co-founder Tommy Krasker said: "We've never had the kind of reaction that we've had for "Follies". Not only has it already outsold every other album at our website, but the steady stream of emails from customers has been amazing." This recording includes "extended segments of the show's dialogue." The "theatermania.com" reviewer wrote that "The result is an album that, more so than any of the other existing recordings, allows listeners to re-experience the heartbreaking collision of past and present that's at the core of the piece." The recording of the 2011 revival was nominated for a Grammy Award in the Musical Theater Album category. The 2017 London revival cast was recorded after the production closed in January 2018, and was released in early 2019.
In January 2015, it was reported that Rob Marshall was set to direct the film, with Meryl Streep rumored to star. Tony Award-winning playwright and Oscar-nominated screenwriter John Logan has expressed interest in writing a film adaptation of "Follies".
In November 2019, it was announced that Dominic Cooke would adapt the screenplay and direct the film, having directed the successful 2017 revival at the National Theatre in London, which returned in 2019 by popular demand.
Functional linguistics
Functional linguistics is an approach to the study of language that sees the functionality of language and its elements as the key to understanding linguistic processes and structures. Functional theories of language propose that since language is fundamentally a tool, it is reasonable to assume that its structures are best analyzed and understood with reference to the functions they carry out. Functional theories of grammar belong to structural and humanistic linguistics. They take into account the context where linguistic elements are used and study the way they are instrumentally useful or functional in the given environment. This means that functional theories of grammar tend to pay attention to the way language is actually used in communicative context. The formal relations between linguistic elements are assumed to be functionally motivated.
Functional analysis is the examination of how linguistic elements function on different layers of linguistic structure, and how the levels interact with each other. Functions exist on all levels of grammar, even in phonology, where the phoneme has the function of distinguishing between lexical material.
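The phoneme's distinguishing function described above is classically demonstrated with minimal pairs: words that differ in exactly one segment yet denote different lexical items. A toy sketch (not from the source; it simplifies by treating each character as one segment, which holds only for suitably chosen examples):

```python
def is_minimal_pair(a, b):
    """True if two equal-length transcriptions differ in exactly one segment.

    Segments are approximated here as single characters, a simplification
    that works for these toy examples but not for digraphs like 'sh'.
    """
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

# A single phoneme swap (/p/ vs /b/) is enough to distinguish two words:
print(is_minimal_pair("pat", "bat"))   # a minimal pair
print(is_minimal_pair("pat", "bad"))   # two segments differ: not minimal
```

Each such pair is evidence that the swapped segments are distinct phonemes in the language, since the contrast alone carries a difference in meaning.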
In functional explanation, a linguistic structure is explained with an appeal to its function. Functional linguistics takes as its starting point the notion that communication is the primary function of language. Therefore, general phonological, morphosyntactic and semantic phenomena are thought of as being motivated by the needs of people to communicate successfully with each other. Thus, functional linguistics takes the perspective that the organisation of language reflects its functional value.
The concept of economy is metaphorically transferred from a social or economic context to the linguistic level. It is considered a regulating force in language maintenance. By controlling the impact of language change and of internal and external conflicts within the system, the economy principle ensures that systemic coherence is maintained without increasing energy cost. This is why all human languages, no matter how different they are, have high functional value, based on a compromise between the competing motivations of speaker-easiness (simplicity or "inertia") and hearer-easiness (clarity or "energeia").
The principle of economy was elaborated by the French structural–functional linguist André Martinet. Martinet's concept is similar to Zipf's principle of least effort, although the idea had been discussed by various linguists in the late 19th and early 20th centuries.
Some key adaptations of functional explanation are found in the study of information structure. Based on earlier linguists' work, Prague Circle linguists Vilém Mathesius, Jan Firbas and others elaborated the concept of theme–rheme relations (topic and comment) to study pragmatic concepts such as sentence focus, and givenness of information, to successfully explain word-order variation. The method has been used widely in linguistics to uncover word-order patterns in the languages of the world. Its importance, however, is limited to within-language variation, with no apparent explanation of cross-linguistic word order tendencies.
Several principles from pragmatics have been proposed as functional explanations of linguistic structures, often in a typological perspective.
There are several distinct grammatical frameworks that employ a functional approach.
Dik characterises the functional approach as follows:
Fick's laws of diffusion
Fick's laws of diffusion describe diffusion and were derived by Adolf Fick in 1855. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law, which in turn is identical to the diffusion equation.
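In the standard one-dimensional notation (not quoted from this text, but the conventional formulation), with J the diffusion flux, φ the concentration and D the diffusion coefficient, the two laws read:

```latex
% Fick's first law: flux is proportional to the concentration gradient
J = -D \, \frac{\partial \varphi}{\partial x}

% Fick's second law (the diffusion equation), obtained by combining the
% first law with conservation of mass, \partial\varphi/\partial t = -\partial J/\partial x:
\frac{\partial \varphi}{\partial t} = D \, \frac{\partial^{2} \varphi}{\partial x^{2}}
```

The minus sign in the first law encodes that matter flows from regions of high concentration to regions of low concentration.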
A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion.
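A common operational signature of the distinction above is the mean squared displacement (MSD): normal (Fickian) diffusion has MSD(t) = 2Dt in one dimension, growing linearly in time, while anomalous diffusion grows as a different power of t. A minimal simulation sketch, assuming an unbiased ±1 random walk (for which D = 1/2 per step, so MSD after n steps is approximately n):

```python
import random

random.seed(0)

n_walkers, n_steps = 2000, 500

# Simulate many independent unbiased 1-D random walks (step = ±1 per tick)
# and record each walker's final displacement from the origin.
final_positions = []
for _ in range(n_walkers):
    x = 0
    for _ in range(n_steps):
        x += random.choice((-1, 1))
    final_positions.append(x)

# Mean squared displacement after n_steps. For Fickian diffusion
# MSD(t) = 2*D*t; with D = 1/2 per step this gives MSD ≈ n_steps.
msd = sum(x * x for x in final_positions) / n_walkers
# msd / n_steps is close to 1.0 for this normal-diffusion process.
```

An anomalous (non-Fickian) process, such as a walk with long-range memory or heavy-tailed waiting times, would make this ratio drift systematically with n_steps instead of staying constant.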
In 1855, physiologist Adolf Fick first reported his laws of diffusion.
Fawlty Towers
Fawlty Towers is a British television sitcom written by John Cleese and Connie Booth and broadcast on BBC2 in 1975 and 1979. Two series of six episodes each were made. The show was ranked first on a list of the 100 Greatest British Television Programmes drawn up by the British Film Institute in 2000, and in 2019 it was named the "greatest ever British TV sitcom" by a panel of comedy experts compiled by the "Radio Times".
The series is set in Fawlty Towers, a fictional hotel in the seaside town of Torquay on the "English Riviera". The plots centre on the tense, rude and put-upon owner Basil Fawlty (Cleese), his bossy wife Sybil (Prunella Scales), the sensible chambermaid Polly (Booth) who often is the peacemaker and voice of reason, and the hapless and English-challenged Spanish waiter Manuel (Andrew Sachs), showing their attempts to run the hotel amidst farcical situations and an array of demanding and eccentric guests and tradespeople.
The idea of the show came from Cleese after he stayed at the Gleneagles Hotel in Torquay, Devon in 1970 (along with the rest of the Monty Python troupe) where he encountered the eccentric hotel owner Donald Sinclair. Stuffy and snobbish, Sinclair treated guests as though they were a hindrance to his running of the hotel (a waitress who worked for him stated "it was as if he didn't want the guests to be there"). Sinclair was the inspiration for Cleese's character Basil Fawlty.
In 1980, Cleese received the BAFTA for Best Entertainment Performance, and in a 2001 poll conducted by Channel 4 Basil Fawlty was ranked second on their list of the 100 Greatest TV Characters. The popularity of "Fawlty Towers" has endured, and it is often re-broadcast. The BBC profile for the series states, "the British sitcom by which all other British sitcoms must be judged, "Fawlty Towers" withstands multiple viewings, is eminently quotable ('don't mention the war'), and stands up to this day as a jewel in the BBC's comedy crown."
In May 1970, the Monty Python comedy group stayed at the now demolished Gleneagles Hotel in Torquay, Devon while filming on location in Paignton. John Cleese was fascinated with the behaviour of the owner, Donald Sinclair, later describing him as "the rudest man I've ever come across in my life". Among such behaviour by Sinclair was his criticism of Terry Gilliam's "too American" table etiquette and tossing Eric Idle's briefcase out of a window "in case it contained a bomb". Asked why anyone would want to bomb the hotel, Sinclair replied, "We've had a lot of staff problems". Michael Palin states Sinclair "seemed to view us as a colossal inconvenience". Rosemary Harrison, a waitress at the Gleneagles under Sinclair, described him as "bonkers" and lacking in hospitality, deeming him wholly unsuitable for a hotel proprietor. "It was as if he didn't want the guests to be there." Cleese and Connie Booth stayed on at the hotel after filming, furthering their research of its owner.
At the time, Cleese was a writer on the 1970s British TV sitcom "Doctor in the House" for London Weekend Television. An early prototype of the character that became known as Basil Fawlty was developed in an episode ("No Ill Feeling") of the third "Doctor" series (titled "Doctor at Large"). In this episode, the main character checks into a small-town hotel, his very presence seemingly winding up the aggressive and incompetent manager (played by Timothy Bateson) with a domineering wife. The show was broadcast on 30 May 1971.
Cleese said in 2008 that the first "Fawlty Towers" script he and Booth wrote was rejected by the BBC. At a 30th anniversary event honouring the show, Cleese said,
Cleese was paid £6,000 for 43 weeks' work and supplemented his income by appearing in television advertisements. He states, "I have to thank the advertising industry for making this possible. Connie and I used to spend six weeks writing each episode and we didn't make a lot of money out of it. If it hadn't been for the commercials I wouldn't have been able to afford to spend so much time on the script."
Although the series is set in Torquay, no part of it was shot in Southwest England. For the exterior filming, the Wooburn Grange Country Club in Buckinghamshire was used instead of a hotel. In several episodes of the series (notably "The Kipper and the Corpse", "The Anniversary", and "Basil the Rat"), the entrance gate at the bottom of the drive states the real name of the location. This listed building later served for a short time as a nightclub named "Basil's" after the series ended, before being destroyed by a fire in March 1991. The remnants of the building were demolished and a housing estate was built on the site. Very little trace of the original site exists today.
Other location filming was done mostly around Harrow, notably the 'damn good thrashing' scene in "Gourmet Night" in which Basil loses his temper and attacks his broken-down car with a tree branch. It was filmed at the T-junction of Lapstone Gardens and Mentmore Close (). In the episode "The Germans", the opening shot is of Northwick Park Hospital. In "Gourmet Night", the exterior of André's restaurant was filmed on Preston Road in the Harrow area (). The launderette next door to the restaurant still exists, but André's now is a Chinese and Indian restaurant called Wings.
Both Cleese and Booth were keen on every script being perfect, and some episodes took four months and required as many as ten drafts until they were satisfied.
The series focuses on the exploits and misadventures of short-fused hotelier Basil Fawlty and his acerbic wife Sybil, as well as their employees: waiter Manuel, Polly Sherman, and, in the second series, chef Terry. The episodes typically revolve around Basil's efforts to "raise the tone" of his hotel and his increasing frustration at numerous complications and mistakes, both his own and those of others, which prevent him from doing so.
Much of the humour comes from Basil's overly aggressive manner, engaging in angry but witty arguments with guests, staff and, in particular, Sybil, whom he addresses (in a faux-romantic way) with insults such as "that golfing puff adder", "my little piranha fish" and "my little nest of vipers". Despite this, Basil frequently feels intimidated, Sybil being able to cow him at any time, usually with a short, sharp cry of "Basil!" At the end of some episodes, Basil succeeds in annoying (or at least bemusing) the guests and frequently gets his comeuppance.
The plots occasionally are intricate and always farcical, involving coincidences, misunderstandings, cross-purposes and meetings both missed and accidental. The innuendo of the bedroom farce is sometimes present (often to the disgust of the socially conservative Basil) but it is his eccentricity, not his lust, that drives the plots. The events test to the breaking point what little patience Basil has, sometimes causing him to have a near breakdown by the end of the episode.
The guests at the hotel typically are comic foils to Basil's anger and outbursts. Guest characters in each episode provide different characteristics (working class, promiscuous, foreign) that he cannot stand. Requests both reasonable and impossible test his temper. Even the afflicted annoy him, as for example in the episode "Communication Problems", revolving around the havoc caused by the frequent misunderstandings between the staff and the hard-of-hearing Mrs. Richards. Near the end, Basil pretends to faint just at the mention of her name. This episode is typical of the show's careful weaving of humorous situations through comedy cross-talk. The show also uses mild black humour at times, notably when Basil is forced to hide a dead body and in his comments about Sybil ("Did you ever see that film, "How to Murder Your Wife"? ... Awfully good. I saw it six times.") and to the guests ("May I suggest that you consider moving to a hotel closer to the sea? Or preferably in it.").
Basil's physical outbursts are primarily directed at Manuel, an emotional but largely innocent Spaniard whose confused English vocabulary causes him to make elementary mistakes. At times, Basil beats Manuel with a frying pan and smacks his forehead with a spoon. The violence towards Manuel caused rare negative criticism of the show. Sybil and Polly, on the other hand, are more patient and understanding toward Manuel; everyone's usual excuse for his behaviour to guests is, "He's from Barcelona"; Manuel even once used the excuse for himself.
Basil longs for a touch of class, sometimes playing recordings of classical music. In one episode he is playing music by Brahms when Sybil, after pestering him to do various tasks, remarks: "You could have them both done by now if you hadn't spent the whole morning skulking in there listening to that racket." Basil replies, with exasperation, "Racket?? That's Brahms! Brahms' Third Racket!" Basil often displays blatant snobbishness as he attempts to climb the social ladder, frequently expressing disdain for the "riff-raff", "cretins" and "yobbos" that he believes regularly populate his hotel. His desperation is readily apparent as he makes increasingly hopeless manoeuvres and painful faux pas in trying to curry favour with those he perceives as having superior social status. Yet he finds himself forced to serve those individuals that are "beneath" him. As such, Basil's efforts tend to be counter-productive, with guests leaving the hotel in disgust and his marriage (and sanity) stretching to breaking point.
Basil Fawlty, played by John Cleese, is a cynical and snobbish misanthrope who is desperate to belong to a higher social class. He sees a successful hotel as a means of achieving this, yet his job forces him to be polite to people he despises.
He is intimidated by his wife Sybil Fawlty. He yearns to stand up to her, but his plans frequently conflict with her demands. She is often verbally abusive (describing him as "an ageing, brilliantined stick insect") but although he towers over her, he often finds himself on the receiving end of her temper, verbally and physically (as in "The Builders").
Basil usually turns to Manuel or Polly to help him with his schemes, while trying his best to keep Sybil from discovering them. However, Basil occasionally laments the time when there was passion in their relationship, now seemingly lost. Also, it appears he still does care for her, and actively resists the flirtations of a French guest in one episode. The penultimate episode, "The Anniversary", is about his efforts to put together a surprise anniversary party involving their closest friends. Things go wrong as Basil pretends the anniversary date doesn't remind him of anything, though he has a stab at it by reeling off a list of random anniversaries, starting with the Battle of Agincourt, for which he receives a slap from Sybil, who becomes increasingly frustrated and angry. He continues guessing even after Sybil is out of earshot, and mentions other anniversaries (none of which happened on 17 April), including the Battle of Trafalgar and Yom Kippur, just to enhance the surprise. Sybil believes he really has forgotten, and leaves in a huff. In an interview in the DVD box set, Cleese claims this episode deliberately takes a slightly different tone from the others, fleshing out their otherwise inexplicable status as a couple.
In keeping with the lack of explanation about the marriage, not much is revealed of the characters' back-stories. It is known that Basil served in the British Army and saw action in the Korean War, possibly as part of his National Service. (John Cleese himself was only 13 when the Korean War ended, making the character of Basil at least five or six years older than the actor.) Basil exaggerates this period of his life, proclaiming to strangers, "I killed four men." To this Sybil jokes that "He was in the Catering Corps. He used to poison them." Basil often is seen wearing regimental and old boy-style ties, perhaps spuriously, one of which is in the colours of the Army Catering Corps. He also claims to have sustained a shrapnel injury to his leg; it tends to flare up at suspiciously convenient times. The only person towards whom Basil consistently exhibits tolerance and good manners is the old and senile Major Gowen, a veteran of one of the world wars (which one is never specified, though he once mentions to Mrs Peignoir that he was in France in 1918) who permanently resides at the hotel. When interacting with Manuel, Basil displays a rudimentary knowledge of Spanish (Basil states that he "learned classical Spanish, not the strange dialect he [Manuel] seems to have picked up"); this knowledge is also ridiculed, as in the first episode, in which a guest whom Basil has immediately dismissed as working-class communicates fluently with Manuel in Spanish after Basil is unable to do so.
Cleese described Basil as thinking that "he could run a first-rate hotel if he didn't have all the guests getting in the way" and as being "an absolutely awful human being" but says that in comedy if an awful person makes people laugh they unaccountably feel affectionate towards him. Indeed, he is not entirely unsympathetic. The "Hotel Inspectors" and "Gourmet Night" episodes feature guests who are shown to be deeply annoying, with constant and unreasonable demands. In "Gourmet Night" the chef gets drunk and is unable to cook dinner, leaving Basil to scramble in an attempt to salvage the evening. Much of the time, Basil is an unfortunate victim of circumstance.
Sybil Fawlty, played by Prunella Scales, is Basil's wife. Energetic and petite, she prefers a working wardrobe of tight skirt-suits in shiny fabrics and sports a tower of permed hair augmented with hairpieces and wigs and necessitating the use of overnight curlers. She often is a more effective manager of the hotel, making sure Basil gets certain jobs done or stays out of the way when she is handling difficult guests. Typically when Basil is on the verge of meltdown due to a crisis (usually of his own making), it is Sybil who steps in to clear up the mess and bring some sense to the situation. Despite this, she rarely participates directly in the running of the hotel. During busy check-in sessions or meal times, while everyone else is busy working, Sybil is frequently talking on the phone to one of her friends with her phrase "Oohhh, I knoooooooow" or chatting to customers. She has a distinctive conversational tone and braying laugh, which Basil compares to "someone machine-gunning a seal". Being his wife, she is the only regular character who refers to Basil by his first name. When she barks his name at him, he flinchingly freezes in his tracks.
Basil refers to her by a number of epithets, occasionally to her face, including "that golfing puff-adder", "the dragon", "toxic midget", "the sabre-toothed tart", "my little kommandant", "my little piranha fish", "my little nest of vipers" and "you rancorous, coiffured old sow". Despite these nasty nicknames, Basil is terrified of her. The 1979 episode "The Psychiatrist" contains the only time he loses patience and snaps at her (Basil: "Shut up, I'm fed up." Sybil: "Oh, you've done it now.").
Prunella Scales speculated in an interview for "The Complete Fawlty Towers" DVD box set that Sybil married Basil because his origins were of a higher social class than hers.
Polly Sherman, played by Connie Booth, is a waitress and general assistant at the hotel with artistic aspirations. She is the most competent of the staff and the voice of sanity during chaotic moments, but is frequently embroiled in ridiculous masquerades as she loyally attempts to aid Basil in trying to cover a mistake or keep something from Sybil.
In "The Anniversary" she snaps and refuses to help Basil out when he wants her to impersonate Sybil in the semi-darkness of her bedroom in front of the Fawltys' friends, Basil having dug himself into a hole by claiming Sybil was ill instead of admitting she had stormed out earlier in annoyance with him. Polly finally agrees, but only on condition that Basil lends her money to purchase a car, which he has previously refused to do.
Polly generally is good-natured but sometimes shows her frustration, and has odd moments of malice. In "The Kipper and the Corpse", the pampered shih-tzu dog of an elderly guest bites Polly and Manuel. As revenge, Polly laces the dog's sausages with black pepper and Tabasco sauce ("bangers à la bang"), making it ill.
Despite her part-time employment (during meal times), Polly frequently is saddled with many other duties, including as manager in "The Germans" when Sybil and Basil are incapacitated. In the first series, Polly is said to be an art student who, according to Basil, has spent three years at college. In "Gourmet Night", she is seen to draw a sketch (presumably of Manuel), which everyone but Basil immediately recognises and she sells to the chef for 50p. Polly is not referred to as a student in the second series, although in both series she is shown to have a flair for languages, displaying ability in both Spanish and German. In "The Germans", Basil alludes to Polly's polyglot inclination by saying that she does her work "while learning two Oriental languages". Like Manuel, she has a room of her own at the hotel.
Manuel, a waiter played by Andrew Sachs, is a well-meaning but disorganised and confused Spaniard from Barcelona with a poor grasp of the English language and customs. He is verbally and physically abused by his boss. When told what to do, he often responds, "¿Qué?" ("What?"). Manuel's character is used to demonstrate Basil's instinctive lack of sensitivity and tolerance. Every episode involves Basil becoming enraged at Manuel's confusion at his boss's bizarre demands and even basic requests. Manuel is afraid of Fawlty's quick temper and violent assaults, yet often expresses his appreciation for being given employment. He is relentlessly enthusiastic and is proud of what little English he knows.
During the series, Sachs was seriously injured twice. Cleese describes using a real metal pan to knock Manuel unconscious in "The Wedding Party", although he would have preferred to use a rubber one. The original producer and director, John Howard Davies, said that he made Basil use a metal one and that he was responsible for most of the violence on the show, which he felt was essential to the type of comical farce they were creating. Later, when Sachs's clothes were treated to give off smoke after he escapes the burning kitchen in "The Germans", the corrosive chemicals ate through them and gave Sachs severe burns.
Manuel's exaggerated Spanish accent is part of the humour of the show. In fact, Sachs's original language was German; he emigrated to Britain as a child.
The character's nationality was switched to Italian (and the name to Paolo) for the Spanish dub of the show, while in Catalonia and France, Manuel is a Mexican.
The first episode of "Fawlty Towers" was recorded as a pilot on 24 December 1974, with the rest of the series recorded later in 1975. The series was first broadcast on 19 September 1975. The 12th and final episode was first shown on 25 October 1979. The first series was directed by John Howard Davies, the second by Bob Spiers. Both had their premieres on BBC2.
When originally transmitted, the individual episodes had no on-screen titles. The ones in common currency were first used for the VHS release of the series in the 1980s. There were working titles, such as "USA" for "Waldorf Salad", "Death" for "The Kipper and the Corpse" and "Rat" for "Basil the Rat", which have been printed in some programme guides. In addition, some of the early BBC audio releases of episodes on vinyl and cassette included other variations, such as "Mrs. Richards" and "The Rat" for "Communication Problems" and "Basil the Rat" respectively.
It has long been rumoured that a 13th episode of the series was written and filmed, but never progressed further than a rough cut. Lars Holger Holm, author of the book "Fawlty Towers: A Worshipper's Companion," has made detailed claims about the episode's content, but he provides no concrete evidence of its existence.
On the subject of whether more episodes would be produced, Cleese said (in an interview for the complete DVD box set, which was republished in the book "Fawlty Towers Fully Booked") that he once had the genesis of a feature-length special – possibly sometime during the mid-1990s. The plot, never fleshed out beyond his initial idea, would have revolved around the chaos that a now-retired Basil typically caused as he and Sybil flew to Barcelona to visit their former employee Manuel and his family. Of the idea, Cleese said:
We had an idea for a plot which I loved. Basil was finally invited to Spain to meet Manuel's family. He gets to Heathrow and then spends about 14 frustrating hours waiting for the flight. Finally, on the plane, a terrorist pulls a gun and tries to hijack the thing. Basil is so angry he overcomes the terrorist, and when the pilot says, 'We have to fly back to Heathrow' Basil says, 'No, fly us to Spain or I'll shoot you.' He arrives in Spain, is immediately arrested, and spends the entire holiday in a Spanish jail. He is released just in time to go back on the plane with Sybil.
It was very funny, but I couldn't do it at the time. Making 'Fawlty Towers' work at 90 minutes was a very difficult proposition. You can build up the comedy for 30 minutes, but at that length there has to be a trough and another peak. It doesn't interest me. I don't want to do it.
Cleese also may have been reluctant because of Connie Booth's unwillingness to be involved. She had practically retreated from public life after the show finished (and had initially been unwilling to collaborate on a second series, which explains the four-year gap between productions).
The decision by Cleese and Booth to quit before a third series has often been lauded as it ensured the show's successful status would not be weakened with later, lower-quality work. Subsequently, it has inspired the makers of other shows to do likewise. Ricky Gervais and Stephen Merchant refused to make a third series of either "The Office" or "Extras" (both also limited to 12 episodes), citing "Fawlty Towers' " short lifespan. Rik Mayall, Ben Elton and Lise Mayer, the writers behind "The Young Ones", which also ran for only two series (each with six episodes), used this explanation as well. Victoria Wood also indicated this influenced her decision to limit "Dinnerladies" to 16 episodes over two series.
The origins, background and eventual cancellation of the series were later humorously referenced in 1987's "The Secret Policeman's Third Ball", in a sketch in which Hugh Laurie and Stephen Fry present Cleese (whom they comically misname "Jim Cleese") with a Dick Emery Lifetime Achievement Award (the "Silver Dick") for his contributions to comedy. They then launch into a comical series of innocently ridiculing questions regarding the show, including Cleese's marriage to and divorce from Booth, reducing him to tears, to the point at which he gets on his knees and crawls off the stage while crying.
The second series was transmitted three-and-a-half years later, with the first episode being broadcast on 19 February 1979. Due to an industrial dispute at the BBC, which resulted in a strike, the final episode was not completed until well after the others, being finally shown as a one-off instalment on 25 October 1979. The episode cancelled on 19 March was replaced with a repeat of "Gourmet Night" from series 1. In the second series the anagrams were created by Ian McClane, Bob Spiers' assistant floor manager.
At first the series was not held in particularly high esteem. The "Daily Mirror" review of the show in 1975 ran under the headline "Long John Short On Jokes". Eventually, though, as the series began to gain popularity, critical acclaim followed. Clive James, writing in "The Observer", said the second episode had him "retching with laughter".
One critic of the show was Richard Ingrams, then television reviewer for "The Spectator", who wrote a caustic condemning piece on the programme. Cleese got his revenge by naming one of the guests in the second series "Mr. Ingrams", who is caught in his room with a blow-up doll.
In an interview for the "TV Characters" edition of Channel 4's 'talking heads' strand 100 Greatest (in which Basil placed second, between Homer Simpson and Edmund Blackadder), TV critic A. A. Gill theorised that the initially muted response may have been caused by Cleese seemingly ditching his label as a comic revolutionary – earned through his years with Monty Python – to do something more traditional.
In a list of the 100 Greatest British Television Programmes drawn up by the British Film Institute in 2000, voted for by industry professionals, "Fawlty Towers" was placed first. It was also voted fifth in the "Britain's Best Sitcom" poll in 2004, and second only to "Frasier" in The Ultimate Sitcom poll of comedy writers in January 2006. Basil Fawlty came top of the "Britain's Funniest Comedy Character" poll, held by Five on 14 May 2006. In 1997, "The Germans" was ranked No. 12 on TV Guide's 100 Greatest Episodes of All Time. The series was also named in "Empire" magazine's 2016 list of the greatest TV shows of all time.
Three British Academy Television Awards (BAFTAs) were awarded to people for their involvement with the series. Both of the series were awarded the BAFTA in the category Best Scripted Comedy, the first being won by John Howard Davies in 1976, and the second by Douglas Argent and Bob Spiers in 1980. In 1980, Cleese received the BAFTA for Best Entertainment Performance.
John Lennon was a fan of the show. He said in 1980: "I love "Fawlty Towers". I'd like to be in that. [It's] the greatest show I've seen in years... what a masterpiece, a beautiful thing." Filmmaker Martin Scorsese has remarked he is a great fan of "Fawlty Towers" and named "The Germans" as his favourite episode. He described the scene with Basil impersonating Hitler as "so tasteless, it's hilarious".
Three attempted remakes of "Fawlty Towers" were started for the American market, with two making it into production. The first, "Chateau Snavely" starring Harvey Korman and Betty White, was produced by ABC as a pilot in 1978, but the transfer from coastal hotel to highway motel proved too much and the show never went to series. The second, also by ABC, was "Amanda's," starring Bea Arthur, notable for switching the sexes of its Basil and Sybil equivalents. It also failed to pick up a major audience and was dropped after ten episodes had been aired, although 13 episodes were shot. A third remake, called "Payne" (produced by and starring John Larroquette), was produced in 1999, but was cancelled shortly afterwards. Nine episodes were produced, of which eight aired on American television (though the complete run was broadcast overseas). A German pilot based on the sitcom was made in 2001, named "Zum letzten Kliff," but further episodes were not made.
The popular sitcoms "3rd Rock from the Sun" and "Cheers" (in both of which Cleese made guest appearances) have cited "Fawlty Towers" as an inspiration, especially regarding its depiction of a dysfunctional workplace "family". Arthur Mathews and Graham Linehan have cited "Fawlty Towers" as a major influence on their sitcom "Father Ted." "Guest House" on Pakistan's PTV also resembled the series.
Several of the characters have made other appearances, as spinoffs or in small cameo roles. In 1981, in character as Manuel, Andrew Sachs recorded his own version of the Joe Dolce cod-Italian song "Shaddap You Face" (with the B-side "Waiter, There's a Spanish Flea in My Soup") but the record was not released because Joe Dolce took out an injunction: he was about to issue his version in Britain. Sachs also portrayed a Manuel-like character in a series of British TV advertisements for life insurance. Gilly Flower and Renee Roberts, who played the elderly ladies Miss Tibbs and Miss Gatsby in the series, reprised their roles in a 1983 episode of "Only Fools and Horses." In 2006, Cleese played Basil Fawlty for the first time in 27 years, for an unofficial England 2006 World Cup song, "Don't Mention the World Cup", taking its name from the phrase, "Don't mention the war," which Basil used in the episode "The Germans". In 2007, Cleese and Sachs reprised their roles for a six-episode corporate business video for the Norwegian oil company Statoil. In the video, Fawlty is running a restaurant called "Basil's Brasserie" while Manuel owns a Michelin-starred restaurant in London. In the 2008 gala performance "We Are Most Amused," Cleese breaks into character as Basil for a cameo appearance by Sachs as an elderly Manuel.
In November 2007, Prunella Scales returned to the role of Sybil Fawlty in a series of sketches for the BBC's annual "Children in Need" charity telethon. The character was seen taking over the management of the eponymous hotel from the BBC drama series "Hotel Babylon," interacting with characters from that programme as well as other 1970s sitcom characters. The character of Sybil was used by permission of John Cleese.
In 2007, the Los Angeles Film School produced seven episodes of "Fawlty Tower Oxnard" starring Robert Romanus as Basil Fawlty.
In 2016, Cleese reprised his role as Basil in a series of TV adverts for High Street optician chain Specsavers. The same year, Cleese and Booth reunited to create and co-write the official theatrical adaptation of "Fawlty Towers", which premiered in Melbourne at the Comedy Theatre. It was critically well received, subsequently embarking on a successful tour of Australia. Cleese was intimately involved in the creation of the stage version from the beginning, including in the casting. He visited Australia to promote the adaptation and oversee the production. Melbourne was chosen to premiere the adaptation due to "Fawlty Towers' " enduring popularity in Australia, and because the city has become a popular international test market for large-scale theatrical productions, having recently premiered the revised "Love Never Dies" and the new "King Kong". Cleese also noted he did not believe the London press would give the adaptation fair, unbiased reviews, so he deliberately chose to premiere it elsewhere.
In 2009, Tiger Aspect Productions produced a two-part documentary for the digital comedy channel Gold, called "Fawlty Towers: Re-Opened." The documentary features interviews with all four main cast members, including Connie Booth, who had refused to talk about the series for 30 years. John Cleese confirmed at the 30-year reunion in May 2009 that they will never make another episode of the comedy because they are "too old and tired" and expectations would be too high. In a television interview (shown in Australia on Seven Network and the Australian Broadcasting Corporation) on 7 May 2009, Cleese also commented that he and Booth took six weeks to write each episode.
In 1977 and 1978 alone, the original TV show was sold to 45 stations in 17 countries and was the BBC's best-selling overseas programme in those years. "Fawlty Towers" became a huge success in almost all countries in which it aired. Although it initially was a flop in Spain, largely because of the portrayal of the Spanish waiter Manuel, it was successfully resold with the Manuel character's nationality changed to Italian, except in Spain's Catalan region, where Manuel was Mexican. To show how badly it translated, Clive James showed a clip containing Manuel's "¿Qué?" catchphrase on "Clive James on Television" in 1982. The series also briefly was broadcast in Italy in the 1990s on the satellite channel Canal Jimmy, in the original English with Italian subtitles.
In Australia, the show originally was broadcast on ABC Television, the first series in 1976 and the second series in 1980. The show then was sold to the Seven Network where it has been repeated numerous times.
Two record albums were released by BBC Records. The first album, simply titled "Fawlty Towers", was released in 1979 and contained the audio from "Communication Problems" (as "Mrs Richards") and "Hotel Inspectors". The second album, titled "Second Sitting", was released in 1981 and contained audio from "Basil the Rat" (as "The Rat") and "The Builders".
"Fawlty Towers" was originally released by BBC Video in 1984, with three episodes on each of four tapes. Each tape was edited with the credits from all three episodes put at the end of the tape. A Laserdisc containing all episodes spliced together as a continuous episode was released in the U.S. on 23 June 1993. It was re-released in 1994, unedited but digitally remastered. It also was re-released in 1998 with a special interview with John Cleese. "Fawlty Towers – The Complete Series" was released on DVD on 16 October 2001, available in regions 1, 2 and 4. A "Collector's Edition" is available in region 2.
Series one of the show was released on UMD Video for PSP. In July 2009, BBC America announced a DVD re-release of the "Fawlty Towers" series. The DVD set was released on 20 October 2009. The reissue, titled "Fawlty Towers Remastered: Special Edition," contains commentary by John Cleese on every episode as well as remastered video and audio. All episodes are available as streamed video-on-demand via Britbox, Netflix and Amazon Prime Video. Additionally, both series are available for download on iTunes.
A "Fawlty Towers" game was released on PC in 2000 and featured a number of interactive games, desktop-customizing content and clips from the show. | https://en.wikipedia.org/wiki?curid=11673 |
False friend
In linguistics, false friends are words in different languages that look or sound similar but differ significantly in meaning. Examples include the English "embarrassed" and the Spanish "embarazada" (which means "pregnant"), the English "parents" and the Portuguese "parentes" and Italian "parenti" (which mean "relatives"), and the word "sensible", which means "reasonable" in English but "sensitive" in French, German, Italian and Spanish.
The term originates from a 1928 book by French linguists describing the phenomenon, whose title translates as "false friend of a translator".
As well as producing completely false friends, the use of loanwords often results in the use of a word in a restricted context, which may then develop new meanings not found in the original language. For example, "angst" means "fear" in a general sense (as well as "anxiety") in German, but when it was borrowed into English in the context of psychology, its meaning was restricted to a particular type of fear described as "a neurotic feeling of anxiety and depression". Also, "gymnasium" meant both 'a place of education' and 'a place for exercise' in Latin, but its meaning was restricted to the former in German and to the latter in English, making the expressions into false friends in those languages as well as in Greek, where it started out as 'a place for naked exercise'.
False friends, or bilingual homophones, are words in two or more languages that look or sound similar but differ significantly in meaning.
The origin of the term is as a shortened version of the expression "false friend of a translator", the English translation of a French expression introduced by linguists Maxime Kœssler and Jules Derocquigny in their 1928 book, which was followed by a sequel, "Autres Mots anglais perfides".
From the etymological point of view, false friends can be created in several ways.
If language A borrowed a word from language B, or both borrowed the word from a third language or inherited it from a common ancestor, and later the word shifted in meaning or acquired additional meanings in at least one of these languages, a native speaker of one language will face a false friend when learning the other. Sometimes, presumably both senses were present in the common ancestor language, but the cognate words got different restricted senses in Language A and Language B.
"Actual", which in English is usually a synonym of "real", has a different meaning in other European languages, in which it means 'current' or 'up-to-date', and has the logical derivative as a verb, meaning 'to make current' or 'to update'. "Actualise" (or 'actualize') in English means 'to make a reality of'.
The word "friend" itself has cognates in the other Germanic languages; but the Scandinavian ones (like Swedish "frände", Danish "frænde") predominantly mean 'relative'. The original Proto-Germanic word meant simply 'someone whom one cares for' and could therefore refer to both a friend and a relative, but lost various degrees of the 'friend' sense in Scandinavian languages, while it mostly lost the sense of 'relative' in English. (The plural "friends" is still rarely used for "kinsfolk", as in the Scottish proverb "Friends agree best at a distance", quoted in 1721.)
The Estonian and Finnish languages are closely related, which gives rise to false friends:
For example, Estonian "vaimu" 'spirit; ghost' and Finnish "vaimo" 'wife'; or Estonian "huvitav" 'interesting' and Finnish "huvittava" 'amusing'.
A high level of lexical similarity exists between German and Dutch, but shifts in meaning of words with a shared etymology have in some instances resulted in 'bi-directional false friends'.
In Belgium, similarities between Dutch and French words often lead to confusion when several different meanings of the same word are mixed up and thus mistranslated. The satirical sketch comedy "Sois Belge et tais-toi!", performed by Joël Riguelle in 2009, plays on several such examples.
The Italian word "confetti" ‘sugared almonds’ has acquired a new meaning in English, French and Dutch; in Italian, the corresponding word is "coriandoli".
English and Spanish, both of which have borrowed from Greek and Latin, have multiple false friends, such as:
- Sp. "darse cuenta" - Engl. "realize" / Sp. "realizar" - Engl. "carry out"
- Sp. "realmente" - Engl. "actually" / Sp. "actualmente" - Engl. "currently"
- Sp. "publicidad" - Engl. "advertisement" / Sp. "advertencia" - Engl. "warning"
- Sp. "extraño" - Engl. "bizarre" / Sp. "bizarro" - Engl. "brave"
English and Japanese also have diverse false friends, many of them being "wasei-eigo" and "gairaigo" words.
In Swedish, the word "rolig" means 'fun': "ett roligt skämt" ("a funny joke"), while in the closely related languages Danish and Norwegian it means 'calm' (as in "he was calm despite all the commotion around him"). However, the Swedish original meaning of 'calm' is retained in some related words such as "ro", 'calmness', and "orolig", 'worrisome, anxious', literally 'un-calm'. The Danish and Norwegian word "semester" means term (as in school term), but the Swedish word "semester" means holiday. The Danish word "frokost" means lunch, the Norwegian word "frokost" means breakfast.
In French, the word "hure" refers to the head of a boar, while in German "Hure" means "prostitute".
Pseudo-anglicisms are new words formed from English morphemes independently from an analogous English construct and with a different intended meaning.
Japanese is replete with pseudo-anglicisms, known as "wasei-eigo" ("Japan-made English").
In bilingual situations, false friends often result in a semantic change—a real new meaning that is then commonly used in a language. For example, the Portuguese "humoroso" ('capricious') changed its referent in American Portuguese to 'humorous', owing to the English surface-cognate "humorous."
"Corn" was originally the dominant type of grain in a region (indeed "corn" and "grain" are themselves cognates from the same Indo-European root). In the British Isles it usually means cereals in general, but in North America it has come to mean exclusively maize.
The American Italian "fattoria" lost its original meaning 'farm' in favor of 'factory' owing to the phonetically similar surface-cognate English "factory" (cf. Standard Italian "fabbrica" 'factory'). Instead of the original "fattoria", the phonetic adaptation American Italian "farma" became the new signifier for 'farm' (Weinreich 1963: 49; see "one-to-one correlation between signifiers and referents").
This phenomenon is analyzed by Ghil'ad Zuckermann as "(incestuous) phono-semantic matching". | https://en.wikipedia.org/wiki?curid=11675 |
Fundamental analysis
Fundamental analysis, in accounting and finance, is the analysis of a business's financial statements (usually to analyze the business's assets, liabilities, and earnings); health; and competitors and markets. It also considers the overall state of the economy and factors including interest rates, production, earnings, employment, GDP, housing, manufacturing and management. There are two basic approaches that can be used: bottom-up analysis and top-down analysis. These terms are used to distinguish such analysis from other types of investment analysis, such as quantitative and technical.
Fundamental analysis is performed on historical and present data, but with the goal of making financial forecasts. There are several possible objectives:
There are two basic methodologies investors rely upon when the objective of the analysis is to determine what stock to buy and at what price:
Investors can use one or both of these complementary methods for stock picking. For example, many fundamental investors use technical indicators for deciding entry and exit points. Similarly, a large proportion of technical investors use fundamental indicators to limit their pool of possible stocks to "good" companies.
The choice of stock analysis is determined by the investor's belief in the different paradigms for "how the stock market works". For explanations of these paradigms, see the discussions at efficient-market hypothesis, random walk hypothesis, capital asset pricing model, the Fed model of equity valuation, market-based valuation, and behavioral finance.
Fundamental analysis includes:
The intrinsic value of the shares is determined based upon these three analyses; it is this value that is considered the true value of the share. If the intrinsic value is higher than the market price, buying the share is recommended; if it is equal to the market price, holding the share is recommended; and if it is less than the market price, selling is recommended.
Investors may also use fundamental analysis within different portfolio management styles.
Investors using fundamental analysis can use either a top-down or bottom-up approach.
The analysis of a business's health starts with a financial statement analysis that includes financial ratios. It looks at dividends paid, operating cash flow, new equity issues and capital financing. The earnings estimates and growth rate projections published widely by Thomson Reuters and others can be considered either "fundamental" (they are facts) or "technical" (they are investor sentiment) based on perception of their validity.
Determined growth rates (of income and cash) and risk levels (to determine the discount rate) are used in various valuation models. The foremost is the discounted cash flow model, which calculates the present value of the future:
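The discounted cash flow idea can be sketched numerically. The following is a minimal illustration, not a full valuation model; the cash-flow figures and the 10% discount rate are hypothetical.

```python
# Minimal sketch of discounting projected future cash flows to a
# present value. All inputs are hypothetical illustration values.

def present_value(cash_flows, discount_rate):
    """Discount a sequence of yearly cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Five years of projected cash flows growing roughly 10% a year,
# discounted at a 10% rate determined from the assessed risk level:
pv = present_value([100, 110, 121, 133, 146], 0.10)
```

In the full model described above, the discounted quantities would be projected dividends, earnings, or cash flows, typically with a terminal value added for years beyond the forecast horizon.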
The amount of debt a company possesses is also a major consideration in determining its health. It can be quickly assessed using the debt-to-equity ratio and the "current ratio" (current assets/current liabilities).
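Both ratios are simple arithmetic on balance-sheet figures; the numbers below are hypothetical:

```python
def debt_to_equity(total_debt, shareholder_equity):
    """Leverage measure: lower generally means less financial risk."""
    return total_debt / shareholder_equity

def current_ratio(current_assets, current_liabilities):
    """Liquidity measure: above 1 suggests short-term obligations are covered."""
    return current_assets / current_liabilities

# Hypothetical balance-sheet figures (in millions)
print(debt_to_equity(50, 100))  # → 0.5
print(current_ratio(80, 40))    # → 2.0
```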
A simple model commonly used is the P/E ratio (price-to-earnings ratio). Implicit in this model of a perpetual annuity (time value of money) is that the inverse, or the E/P rate, is the discount rate appropriate to the risk of the business. This model does not incorporate earnings growth.
Growth estimates are incorporated into the PEG ratio. Its validity depends on the length of time analysts believe the growth will continue. IGAR models can be used to impute expected changes in growth from current P/E and historical growth rates for the stocks relative to a comparison index.
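To make the two ratios concrete, a small sketch; the price, earnings-per-share, and growth figures are invented, and reading a PEG near 1 as "fairly valued" is a common rule of thumb rather than a guarantee:

```python
def pe_ratio(price, earnings_per_share):
    """Price paid per dollar of annual earnings."""
    return price / earnings_per_share

def peg_ratio(price, earnings_per_share, growth_rate_pct):
    """PEG = P/E divided by the expected earnings growth rate (in percent)."""
    return pe_ratio(price, earnings_per_share) / growth_rate_pct

# Hypothetical stock: $30 price, $2 EPS, 15% expected growth
pe = pe_ratio(30, 2)        # 15.0; the implied E/P "earnings yield" is 1/15 ≈ 6.7%
peg = peg_ratio(30, 2, 15)  # 1.0
print(pe, peg)
```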
The process of fundamental analysis has become significantly less difficult over the past decade. Since computers became household products, developers have built software designed to make the investor's life easier. Fundamental analysis is one of the most time-consuming forms of analysis, and with the fast-paced trading style of the 21st century, in which markets are dominated by HFT firms and day traders, it is difficult to keep up with the market in a timely fashion. One way to cut down analysis time is to subscribe to free or paid screening services, which allow investors to search the entire market for stocks that match specified quantitative criteria. Such software returns results automatically, cutting down on time spent sifting through SEC filings.
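The screening idea reduces to filtering a universe of stocks on quantitative fields. A toy sketch, with a made-up universe and arbitrary thresholds (real screeners query live filings data):

```python
# Each record is a hypothetical (ticker, P/E, debt-to-equity) triple
universe = [
    ("AAA", 12.0, 0.4),
    ("BBB", 35.0, 0.2),
    ("CCC", 9.0, 1.8),
    ("DDD", 14.0, 0.6),
]

def screen(stocks, max_pe=15.0, max_de=1.0):
    """Keep only the stocks whose quantitative fields pass every filter."""
    return [t for t, pe, de in stocks if pe <= max_pe and de <= max_de]

print(screen(universe))  # → ['AAA', 'DDD']
```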
See fundamental analysis software for more information on such tools.
Economists such as Burton Malkiel suggest that neither fundamental analysis nor technical analysis is useful in outperforming the markets. This is especially true of low liquidity markets or securities.
Frasier
Frasier () is an American sitcom television series that was broadcast on NBC for 11 seasons, premiering on September 16, 1993, and concluding on May 13, 2004. The program was created and produced by David Angell, Peter Casey, and David Lee (as Grub Street Productions) in association with Grammnet (2004) and Paramount Network Television. The series was created as a spin-off of "Cheers", continuing the story of psychiatrist Frasier Crane as he returned to his hometown of Seattle and started building a new life as a radio advice show host while reconnecting with his father and brother and making new friends. "Frasier" starred Kelsey Grammer, Jane Leeves, David Hyde Pierce, Peri Gilpin, and John Mahoney. The show was critically acclaimed, with the show itself and the cast winning thirty-seven Primetime Emmy Awards, a record at the time for a scripted series. It also won the Primetime Emmy Award for Outstanding Comedy Series for five consecutive years. As of 2019, the possibility of a revival was being discussed, to air in 2020.
Harvard-trained psychiatrist Dr. Frasier Crane (Grammer) returns to his hometown of Seattle, Washington, following the end of his marriage and his life in Boston (as seen in "Cheers"). His plans for a new life as a bachelor are challenged when he is obliged to take in his father, Martin (Mahoney), a retired Seattle Police Department detective, who has mobility problems after being shot in the line of duty during a robbery.
Frasier hires Daphne Moon (Leeves) as Martin's live-in physical therapist and caregiver, and tolerates Martin's dog Eddie. Frasier frequently spends time with his younger brother Niles (Pierce), a fellow psychiatrist. Niles becomes attracted to, and eventually falls in love with, Daphne (notwithstanding his own marriage), but does not confess his feelings to her until the final episode of the seventh season.
Frasier hosts "The Dr. Frasier Crane Show", a call-in psychiatry show on talk radio station KACL. His producer Roz Doyle (Gilpin) is very different from Frasier in many ways. She is earthy, direct, and, at least early in the series, has superficial relationships with many men. However, Roz and Frasier share a professional respect and a wry sense of humour, and over time, they become best friends. Frasier and the others often visit the local coffee shop, Café Nervosa.
The Crane brothers, who have expensive tastes, intellectual interests, and high opinions of themselves, frequently clash with their blue-collar, average Joe father. The brothers' close relationship is often tense, and their sibling rivalry intermittently results in chaos. For a pair who make a living solving others' problems, they are often comically inept at dealing with each other's myriad hangups. Other recurring themes include Niles's relationship with his never-seen wife (later ex-wife) Maris, Frasier's search for love, Martin's new life after retirement, and the various attempts by the two brothers to gain acceptance into le tout Seattle.
The main cast remained unchanged for all 11 years. When the series ended in 2004, Grammer had portrayed the character of Frasier Crane for a total of 20 years, including his nine seasons on "Cheers" plus a one-time performance as the character on the series "Wings" which earned Grammer an Emmy nomination; at the time, he tied James Arness' portrayal of Matt Dillon on "Gunsmoke" for the longest-running character on primetime television. The record has since been surpassed in animation by the voice cast of "The Simpsons", and in live action by Richard Belzer's portrayal of John Munch on the series "Homicide: Life on the Street" and "Law & Order: Special Victims Unit". Grammer was briefly the highest-paid television actor in the United States for his portrayal of Frasier, while Jane Leeves was the highest-paid British actress.
In addition to those of the ensemble, additional story lines included characters from Frasier's former incarnation on "Cheers", such as his ex-wife Lilith Sternin, played by Bebe Neuwirth, and their son Frederick, played by Trevor Einhorn.
Grammer had been the voice of Sideshow Bob on "The Simpsons" since 1990. In a 1997 episode (while "Frasier" was still in production), the character's brother, Cecil Terwilliger, was introduced, played by Pierce, as referenced in the episode title, "Brother from Another Series". The episode contained numerous "Frasier" references, including a "Frasier"-style version of "The Simpsons" theme used for a transition, along with a matching version of the show's iconic title card. Pierce returned as Cecil for the second time (the first since "Frasier" had concluded) alongside Grammer in the 2007 episode "Funeral for a Fiend". The episode introduced the brothers' father, Dr. Robert Terwilliger, who was portrayed by Mahoney.
Cast reunions also occurred on four episodes of "Hot in Cleveland", which featured Leeves in the main cast along with Wendie Malick (who played Martin's girlfriend towards the end of "Frasier"). In the season-two episode "Unseparated at Birth" and season-three episode "Funeral Crashers", Mahoney guest-starred as a waiter smitten with Betty White's character. Gilpin appeared in the episode "I Love Lucci (Part 1)", and Tom McGowan (who played Kenny Daly) appeared in "Love Thy Neighbor" as a casting director. "Hot in Cleveland" was created and produced by Suzanne Martin, who wrote multiple episodes of "Frasier".
During the eighth season of "Cheers", Grammer made a deal with former "Cheers" producers David Angell, Peter Casey, and David Lee (who were moving on to produce "Wings") that they would do a new series together once "Cheers" ended. Once it became clear during the 10th season that the 11th would be the last, the group began working on their next series together.
Grammer did not originally want to continue playing Frasier Crane, and Angell, Casey, and Lee did not want the new show to be compared to "Cheers", which they had worked on before "Wings". The three proposed that the actor play a wealthy, Malcolm Forbes-like paraplegic publisher who operated his business from his apartment. The proposed show would have featured a "street-smart" Hispanic live-in nurse who would clash with the main character. While Grammer liked the concept, Paramount Television disliked it, and suggested that the best route would be to spin off the Frasier Crane character. Grammer ultimately agreed to star in a "Cheers" spin-off, but the producers set the new show as far from Boston as possible to prevent NBC from demanding that other characters from the old show make guest appearances during its first season. After first choosing Denver, Angell, Casey, and Lee ultimately chose Seattle as the setting.
The creators did not want Frasier in private practice, which would make the show resemble "The Bob Newhart Show". From an unused idea they had for a "Cheers" episode, they conceived the concept of the psychiatrist working in a radio station surrounded by "wacky, yet loveable" characters. After realizing that such a setting was reminiscent of "WKRP in Cincinnati", the creators decided to emphasize Frasier's home life, which "Cheers" had rarely explored. Lee considered his own experience with "the relationship between an aging father and the grown-up son he never understood" and thought it would be a good theme for "Frasier". Although Frasier had mentioned on "Cheers" (in two episodes) that his father, a research scientist, had died, Angell, Casey, and Lee did not realize this was the case, as they were not working on "Cheers" during the season those two episodes were filmed. The creative team was already well into the development process when Grammer pointed out the discontinuity; they decided to overlook it, initially retconning the character's backstory. In a second-season episode, the discrepancy was resolved, as Frasier revealed he had lied to the "Cheers" gang about his father.
One element of the original concept that was carried over was the live-in health-care provider for Frasier's father. Grammer points out that very little of the Frasier Crane of "Cheers" carried over to "Frasier", as his family history was changed (though this was later adjusted); the setting, his job, and even the character himself changed from the "Cheers" predecessor, having to be more grounded as the central character of the show so the other supporting characters could be more eccentric.
Martin Crane was based on creator Casey's father, who spent 34 years with the San Francisco Police Department. The creators suggested to NBC that they would like to cast someone like Mahoney, to which NBC told them if they could get Mahoney, they could hire him without auditions. Both Grammer and the producers contacted Mahoney, with the producers flying to Chicago to show Mahoney the pilot script over dinner. Upon reading it, Mahoney accepted. Grammer, who had lost his father as a child, and the childless Mahoney immediately built a close father-son relationship.
In discussing Martin's nurse, Warren Littlefield of NBC suggested she be English instead of Hispanic and suggested Leeves for the role. Grammer was initially reluctant, as he thought the casting made the show resemble "Nanny and the Professor", but approved Leeves after a meeting and read-through with her. Mahoney and Leeves quickly bonded over their shared English heritage; Mahoney was originally from Manchester where Leeves's character is from.
The character of Niles was not part of the original concept for the show. Frasier had told his bar friends on "Cheers" that he was an only child. However, Sheila Guthrie, the assistant casting director on "Wings", brought the producers a photo of Pierce (whom she knew from his work on "The Powers That Be") and noted his resemblance to Grammer when he first appeared on "Cheers". She recommended him should they ever want Frasier to have a brother. The creators were "blown away" both by his resemblance to Grammer and by his acting ability. They decided to ignore Frasier's statement on "Cheers" and created the role for Pierce. Pierce accepted the role before realizing he had not read a script. Once he was given a script, he was initially concerned that his character was essentially a duplicate of Frasier, thinking that it would not work.
The first table reading of the pilot script was notable because the producers had never heard either Pierce or Mahoney read lines because they were cast without auditions.
The only main role that required an audition was Roz Doyle, who was named in memory of a producer of "Wings". The producers auditioned around 300 actresses with no particular direction in mind. Women of all ethnicities were considered. Lisa Kudrow was originally cast in the role, but during rehearsals, the producers decided they needed someone who could appear more assertive in her job and take control over Frasier at KACL, and Kudrow did not fit that role. The creators quickly hired Gilpin, their second choice.
The original focus of the series was intended to be the relationship between Frasier and Martin, and it was the focus of most of the first-season episodes. Once the show began airing, Niles became a breakout character, and from the second season more focus was placed on the brothers' relationship and on other plots centering on Niles. The producers initially did not want to make Niles's wife Maris an unseen character because they did not want to draw parallels to Vera, Norm's wife on "Cheers". They originally intended that she would appear after several episodes, but they so enjoyed writing excuses for her absence that they eventually decided she would remain unseen; after the increasingly eccentric characteristics ascribed to her, no real actress could plausibly portray her.
Frasier's apartment was designed to be ultra-modern in an eclectic style (as Frasier himself points out in the pilot). The apartment's design became one of the show's signature elements, with features such as a slightly split-level layout, doors with triangular wooden inlays, numerous pieces of well-known high-end furniture (such as a replica of Coco Chanel's sofa, and both Eames and Wassily chairs), and a notable view from the terrace that was frequently complimented by visitors. The main set consisted of the open-concept living area, with a sitting/TV space and dining area on the lower level and a piano and an exit to the terrace on the rear upper level. The set also included the kitchen, reached through an open archway. A small section of the building corridor and elevator doors was built, as was a powder room near the front entrance. Two corridors off the living area ostensibly led to the apartment's three bedrooms. Sets for each of these rooms were built separately on an as-needed basis.
No building or apartment in Seattle really has the view from Frasier's residence. It was created so the Space Needle, the most iconic landmark of Seattle, would appear more prominently. According to the season-one DVD bonus features, the photograph used on the set was taken from atop a cliff, possibly the ledge at Kerry Park, a frequent photography location. Despite this, "Frasier" has been said to have contributed to the emergence of an upscale urban lifestyle in 1990s Seattle, with buyers seeking properties in locations resembling that depicted in the show, in search of "that cosmopolitan feel of Frasier".
Another of the primary sets was the radio studio at KACL from which Frasier broadcasts his show. The studio itself consists of two rooms: the broadcast booth and the control room. A section of the corridor outside of the booth was also built (visible through the windows at the back of the studio) and could be shot from the side to view the corridor itself. The set was designed based on ABC's then-brand-new radio studios in Los Angeles which the production designer visited. Technical elements such as the microphones were regularly updated to conform with the latest technology. Although the studio set lacked a "front" wall (the fourth wall), one was built for occasional use in episodes with certain moments shot from behind the broadcast desk, rather than in front of it as usual.
The producers wanted to have a gathering place outside of home and work where the characters could meet. After a trip to Seattle, and seeing the many burgeoning coffee shops, the production designer suggested to producers that they use a coffee shop. Unlike many of the relatively modern coffee shop designs prevalent in Seattle, the production designer opted for a more warm and inviting style which would appear more established and traditional. Stools were specifically omitted to avoid any similarity to the bar on "Cheers". Several Los Angeles coffee shops were used for reference. A bookcase was added on the back wall, suggesting patrons could grab a book and read while they enjoyed their coffee. The show used three versions of the interior set depending on how much space other sets for each episode required. If space for the full set was not available, a smaller version that omitted the tables closest to the audience could be used. If space for that set was lacking, a small section of the back of the café at the top of the steps could be set up under the audience bleachers. A set was also used on occasion for the exterior patio.
The cast had an unusual amount of freedom to suggest changes to the script. Grammer used an acting method he called "requisite disrespect" and did not rehearse with the others, instead learning and rehearsing his lines once just before filming each scene in front of a live studio audience. Although effective, the system often caused panic among guest stars.
In 1996, Grammer's recurrent alcoholism led to a car accident. The cast and crew performed an intervention that persuaded him to enter the Betty Ford Clinic, delaying production for a month.
Only one episode, "The 1000th Show", was filmed in Seattle. As with "Cheers", most episodes were filmed on Stage 25, Paramount Studios, or at various locations in and around Los Angeles.
The KACL callers' lines were read by anonymous voice-over actors during filming in front of a live audience, and during postproduction, the lines were replaced by celebrities, who actually phoned in their parts without having to come into the studio. The end credits of season finales show greyscale headshots of celebrities who had "called in" that season. Celebrities providing voices as callers include Gillian Anderson, Kevin Bacon, Halle Berry, Mel Brooks, Cindy Crawford, Billy Crystal, Phil Donahue, David Duchovny, Hilary Duff, Olympia Dukakis, Carrie Fisher, Jodie Foster, Art Garfunkel, Linda Hamilton, Daryl Hannah, Ron Howard, Eric Idle, Jay Leno, Laura Linney, John Lithgow, Yo-Yo Ma, William H. Macy, Henry Mancini, Reba McEntire, Helen Mirren, Estelle Parsons, Freddie Prinze, Jr., Christopher Reeve, Carly Simon, Gary Sinise, Mary Steenburgen, Ben Stiller, Marlo Thomas, Lily Tomlin, and Eddie Van Halen. Some "callers" also guest-starred, such as Parsons and Linney, who played Frasier's final love interest in the last season.
The show's theme song, "Tossed Salads and Scrambled Eggs", is sung by Grammer and is played over the closing credits of each episode. Composer Bruce Miller, who had also composed for "Wings", was asked to avoid explicitly mentioning any subjects related to the show such as radio or psychiatry. After Miller finished the music, lyricist Darryl Phinnesse suggested the title as they were things that were, like Frasier Crane's patients, "mixed up". The lyrics indirectly refer to Crane's radio show; "I hear the blues a-callin", for example, refers to troubled listeners who call the show. Grammer recorded several variations of the final spoken line of the theme, which were rotated for each of the episodes. Other than season finales, a short, silent scene, often revisiting a small subplot aside from the central story of the episode, appears with the credits and song, which the actors performed without written dialogue based on the scriptwriter's suggestion.
The title card at the start of each episode shows a white line being drawn in the shape of the Seattle skyline on a black background above the show's title. In most episodes, once the skyline and title appear, the skyline is augmented in some way, such as windows lighting up or a helicopter lifting off. The color of the title text changed for each season (respectively: blue, red, green, purple, gold, orange, yellow, light green, light orange, silver, and metallic gold). Over the title card, one of about 25 brief musical cues evoking the closing theme is played.
Talks of a revival began in 2016, but were initially denied by Grammer, though they resurfaced in mid- to late 2018, with Grammer confirming they were looking into it. In February and March 2019, he said in several interviews that a reboot was likely. Grammer has said the revival will be a "third act" for the Frasier Crane character and is likely to be in a new setting other than Seattle. He has also indicated a new series will pay tribute to John Mahoney. In late 2019, Grammer said "we've hatched a plan for "Frasier" reboot", and it was reported that it would air in 2020.
With the exception of Rebecca Howe (Kirstie Alley), all the surviving main regular cast members of "Cheers" made appearances on "Frasier". Lilith Sternin (Bebe Neuwirth) was the only one to become a recurring character, appearing in a total of twelve episodes.
In the eighth-season "Cheers" episode "Two Girls for Every Boyd", Frasier tells Sam Malone (played by Ted Danson) that his father, a research scientist, had died. In the "Frasier" season-two episode "The Show Where Sam Shows Up", when Sam meets Martin, Frasier explains that at the time, he was angry after an argument with his father on the phone. However, in "The Show Where Woody Shows Up", when meeting Martin, Woody says he remembers hearing about him.
In the ninth-season episode of "Frasier", "Cheerful Goodbyes" in 2002, Frasier returns to Boston to give a speech and Niles, Daphne, and Martin come along to see the city. Frasier runs into Cliff Clavin (played by John Ratzenberger) at the airport and learns that Cliff is retiring and moving to Florida. Frasier and company attend Cliff's retirement party, where Frasier reunites with the rest of the gang from "Cheers" (minus Sam, Woody, and Rebecca), including bar regular Norm Peterson (played by George Wendt), waitress Carla Tortelli (played by Rhea Perlman), barflies Paul Krapence (played by Paul Willson) and Phil (played by Philip Perlman), and Cliff's old post-office nemesis Walt Twitchell (played by Raye Birk).
In the 11th-season episode of "Frasier", "Caught in the Act", Frasier's married ex-wife, children's entertainer Nanny G, comes to town and invites him backstage for a rendezvous. Nanny G appeared on the "Cheers" episode "One Hugs, The Other Doesn't" (1992) and was portrayed by Emma Thompson. In this episode of "Frasier", she is portrayed by Laurie Metcalf. A younger version of the character (this time played by Dina Waters) appears in the second episode of season 9 of "Frasier", "Don Juan in Hell: Part 2", along with Neuwirth and Shelley Long reprising their roles of Lilith and Diane Chambers, respectively. In this episode, Rita Wilson also reprises her role as Frasier's mother, Hester, which she briefly debuted in the season 7 premiere, "Momma Mia"; in "Don Juan in Hell: Part 2", Diane also references the season 3 episode of "Cheers", "Diane Meets Mom", in which Hester (then portrayed by Nancy Marchand) threatens Diane's life. Diane (again portrayed by Long) plays a central role in "The Show Where Diane Comes Back" (season 3, episode 14) and had a brief cameo in the season 1 episode "Adventures in Paradise: Part 2".
Some cast members of "Frasier" had appeared previously in minor roles on "Cheers". In the episode "Do Not Forsake Me, O' My Postman" (1992), John Mahoney played Sy Flembeck, an over-the-hill jingle writer hired by Rebecca to write a jingle for the bar. In it, Grammer and Mahoney exchanged a few lines. Peri Gilpin appeared in a "Cheers" episode titled "Woody Gets an Election" playing a reporter who interviews Woody when he runs for office.
The set of "Frasier" was built over the set of "Cheers" on the same stage after it had finished filming.
Critics and commentators broadly held "Frasier" in high regard. Caroline Frost said that the series overall showed a high level of wit, but noted that many critics felt that the marriage of Daphne and Niles in season 10 had removed much of the show's comic tension. Ken Tucker felt that their marriage made the series seem desperate for storylines, while Robert Bianco felt that it was symptomatic of a show that had begun to dip in quality after so much time on the air. Kelsey Grammer acknowledged the creative lull, saying that over the course of two later seasons, the show "took itself too seriously". Commentators do, however, acknowledge that there was an improvement following the return of the writers Christopher Lloyd and Joe Keenan, although not necessarily to its earlier high standards.
Writing about the first season, John O'Connor described "Frasier" as being a relatively unoriginal concept, but said that it was generally a "splendid act", while Tucker thought that the second season benefited greatly from a mix of "high and low humor". Tucker's comment refers to what Grammer described as a rule of the series: that the show should not play down to its audience. Kevin Cherry believes that "Frasier" was able to stay fresh by not making any contemporary commentary, therefore allowing the show to be politically and socially neutral. Other commentators, such as Haydn Bush, disagree, believing the success of "Frasier" can be attributed to the comedic timing and the rapport between the characters. Joseph J. Darowski and Kate Darowski praise the overall message of the series, which across eleven seasons sees several lonely, broken individuals develop warm, caring relationships. While individual episodes vary in quality, the series as a whole carries with it a definitive theme and evolution from pilot to finale. The Economist devoted an article to the 25th anniversary of the show's premiere, stating, "it is clear that audiences still demand the sort of intelligent and heartfelt comedy that "Frasier" provided".
In spite of the criticisms of the later seasons, these critics were unanimous in praising at least the early seasons, with varied commentary on the series' demise ranging from believing, like Bianco, that the show had run its course to those like Dana Stevens, who bemoaned the end of "Frasier" as the "end of situation comedy for adults". Critics compared the farcical elements of the series, especially in later seasons, to the older sitcom "Three's Company". NBC News contributor Wendell Wittler described the moments of misunderstanding as "inspired by the classic comedies of manners as were the frequent deflations of Frasier’s pomposity."
"Frasier" is one of the most successful spin-off series in television history and one of the most critically acclaimed comedy series of all time. The series won a total of 37 Primetime Emmy Awards during its 11-year run, breaking the record long held by CBS' "The Mary Tyler Moore Show" (29). It held the record until 2016 when "Game of Thrones" won 38. Grammer and Pierce each won four, including one each for the fifth and eleventh seasons. The series is tied with ABC's "Modern Family" for the most consecutive wins for Outstanding Comedy Series, winning five from 1994 to 1998.
Grammer has been Emmy-nominated for playing Frasier Crane on "Cheers" and "Frasier", as well as a 1992 crossover appearance on "Wings", making him the only performer to be nominated for playing the same role on three different shows. The first year Grammer did not receive an Emmy nomination for "Frasier" was in 2003 for the 10th season. However, Pierce was nominated every year of the show's run, breaking the record for nominations in his category, with his eighth nomination in 2001; he was nominated a further three times after this.
In 1994, the episode "The Matchmaker" was ranked number 43 on "TV Guide"s 100 Greatest Episodes of All Time. In 2000, the series was named the greatest international programme of all time by a panel of 1,600 industry experts for the British Film Institute as part of BFI TV 100. In 2002, "Frasier" was ranked number 34 on "TV Guide"s 50 Greatest TV Shows of All Time. In a 2006 poll taken by Channel 4 of professionals in the sitcom industry, "Frasier" was voted the best sitcom of all time.
"Frasier" began airing in off-network syndication on September 15, 1997. It is available on Cozi TV, Hallmark Channel, Amazon Prime Video, Hulu, and Crave in Canada.
The show's popularity has resulted in several fan sites, podcasts, and publications. Podcasts that look primarily at the television show "Frasier" include "Frasierphiles," "The Frasier Analysis," and "Talk Salad and Scrambled Eggs" with Kevin Smith and Matt Mira. While the show was still in production, a cookbook, "Cafe Nervosa: The Connoisseur's Cookbook", was published that claimed to be authored by Frasier and Niles Crane. Similarly, a book titled "My Life as a Dog" was published as an autobiography of Moose, the dog that played Eddie in the first several seasons of the series. In 2001, a soundtrack to the series was released. Jefferson Graham published a behind-the-scenes look at the series, and several collections of scripts were published. In 2017, "Frasier: A Cultural History" was published by Rowman & Littlefield as part of their Cultural History of Television series. That volume was written by Joseph J. Darowski and Kate Darowski.
Paramount Home Entertainment and (from 2006 onward) CBS DVD have released all 11 seasons of "Frasier" on DVD in Region 1, 2 and 4. A 44-disc package containing the entire 11 seasons has also been released.
On April 7, 2015, CBS DVD released "Frasier: The Complete Series" on DVD in Region 1.
The first four seasons were also released on VHS along with a series of 'Best Of' tapes. These tapes consisted of four episodes taken from seasons 1–4. No more video releases have been announced.
One "Frasier" CD has been released featuring a number of songs taken from the show.
Several books about "Frasier" have been released.
Fantasy Games Unlimited
Fantasy Games Unlimited (FGU) is a publishing house for tabletop and role-playing games. The company has no in-house design teams and relies on submitted material from outside talent.
Founded in the summer of 1975 in Jericho, New York by Scott Bizar, the company's first publications were the wargames "Gladiators" and "Royal Armies of the Hyborean Age". Upon the sudden appearance and massive popularity of "Dungeons & Dragons" from TSR, the company turned its attentions to role-playing games, seeking out and producing systems created by amateurs and freelancers. Rather than focusing on any one line and supporting it with subsequent supplements, FGU produced a continuous stream of new games. Because of the disparate authors, the rules systems were mutually incompatible. FGU Incorporated published dozens of different role-playing games.
Fantasy Games Unlimited won the H.G. Wells Award for All Time Best Ancient Medieval Rules of 1979 at Origins 1980 for "Chivalry & Sorcery".
In 1991, Fantasy Games Unlimited Inc. was dissolved as a New York corporation. Bizar continues to publish in Arizona as a sole proprietorship called Fantasy Games Unlimited.
A new FGU website appeared in July 2006 offering the company's back catalog. It said that new products would be "coming soon". New "Aftermath!" products began to appear in 2008. By 2010 much of the company's back catalog was available. At that time FGU was seeking submissions to allow the publication of new adventures for some of their existing titles, primarily "Aftermath!", "Space Opera", and "Villains and Vigilantes".
Functional decomposition
In mathematics, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition.
This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Functional decomposition may also result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of "modularity" (i.e., independence or non-interaction).
Interactions between the components are critical to the function of the collection. Not all interactions may be observable, but they can possibly be deduced through repetitive perception, synthesis, validation and verification of composite behavior.
For a multivariate function f(x_1, …, x_n), functional decomposition generally refers to a process of identifying a set of functions g_1, …, g_m such that

f(x_1, …, x_n) = φ(g_1(x_1, …, x_n), …, g_m(x_1, …, x_n)),

where φ is some other function. Thus, we would say that the function f is decomposed into functions g_1, …, g_m. This process is intrinsically hierarchical in the sense that we can (and often do) seek to further decompose the functions g_i into a collection of constituent functions h_1, …, h_p such that

g_i(x_1, …, x_n) = ψ(h_1(x_1, …, x_n), …, h_p(x_1, …, x_n)),

where ψ is some other function. Decompositions of this kind are interesting and important for a wide variety of reasons. In general, functional decompositions are worthwhile when there is a certain "sparseness" in the dependency structure; that is, when constituent functions are found to depend on approximately disjoint sets of variables. Thus, for example, if we can obtain a decomposition of x_1 = f(x_2, x_3, x_4, x_5) into a hierarchical composition of functions g_1, g_2, g_3 such that x_1 = g_1(x_2), x_2 = g_2(x_3, x_4), x_4 = g_3(x_5), this would probably be considered a highly valuable decomposition.
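A hierarchy of this kind can be sketched in Python. The arithmetic bodies below are arbitrary stand-ins; the point is only the dependency structure, in which each constituent function reads a small set of arguments (x_1 = g_1(x_2), x_2 = g_2(x_3, x_4), x_4 = g_3(x_5)):

```python
def g3(x5):
    # x4 depends only on x5 (stand-in rule)
    return x5 % 4

def g2(x3, x4):
    # x2 depends only on x3 and x4 (stand-in rule)
    return (x3 + x4) % 4

def g1(x2):
    # x1 depends only on x2 (stand-in rule)
    return (3 * x2) % 4

def f(x3, x5):
    # The original function, recomposed from its parts by composition;
    # x3 and x5 are the free inputs of the hierarchy.
    return g1(g2(x3, g3(x5)))
```

Recomposition by function composition reproduces the original mapping: evaluating f is the same as chaining the constituents.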
A basic example of functional decomposition is expressing the four binary arithmetic operations of addition, subtraction, multiplication, and division in terms of the two binary operations of addition (a + b) and multiplication (a × b) and the two unary operations of additive inversion (a ↦ −a) and multiplicative inversion (a ↦ a⁻¹). Subtraction can then be realized as the composition of addition and additive inversion, a − b = a + (−b), and division can be realized as the composition of multiplication and multiplicative inversion, a ÷ b = a × b⁻¹. This simplifies the analysis of subtraction and division, and also makes it easier to axiomatize these operations in the notion of a field, as there are only two binary and two unary operations, rather than four binary operations.
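This decomposition is easy to make concrete. A minimal sketch, using exact rational arithmetic so that multiplicative inverses are exact:

```python
from fractions import Fraction

def add(a, b):            # binary operation 1
    return a + b

def mul(a, b):            # binary operation 2
    return a * b

def neg(a):               # unary: additive inverse
    return -a

def inv(a):               # unary: multiplicative inverse
    return Fraction(1) / a

def sub(a, b):            # subtraction = addition composed with additive inversion
    return add(a, neg(b))

def div(a, b):            # division = multiplication composed with multiplicative inversion
    return mul(a, inv(b))
```

Only two binary and two unary operations need to be axiomatized; subtraction and division come for free as compositions.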
Extending these primitive operations, there is a rich literature on the topic of polynomial decomposition.
As to "why" the decomposition is valuable, the reason is twofold. Firstly, decomposition of a function into non-interacting components generally permits more economical representations of the function. For example, on a set of five quaternary (i.e., 4-ary) variables, representing the full function requires storing one value for each element of the Cartesian product of the five variable domains, i.e., each of the 4^5 = 1024 possible combinations of (x_1, …, x_5). However, if the decomposition into g_1, g_2, g_3 given above is possible, then g_1 requires storing 4 values, g_2 requires storing 4^2 = 16 values, and g_3 again requires storing just 4 values. So in virtue of the decomposition, we need store only 4 + 16 + 4 = 24 values rather than 1024 values, a dramatic savings.
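The storage arithmetic can be checked directly (k = 4 states per variable; the argument counts for the constituent functions follow the decomposition described above):

```python
k = 4  # quaternary variables: 4 possible states each
n = 5  # five variables in total

# one stored value per combination of all n inputs
full_table = k ** n

# g1 takes 1 argument, g2 takes 2 arguments, g3 takes 1 argument
decomposed = k ** 1 + k ** 2 + k ** 1

print(full_table, decomposed)
```

The gap widens rapidly as the number of variables grows, since the full table is exponential in n while the decomposed form grows only with the sizes of the individual argument lists.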
Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable x_1 only depends directly on variable x_2, rather than depending on the "entire set" of variables. We would say that variable x_2 "screens off" variable x_1 from the rest of the world. Practical examples of this phenomenon surround us, as discussed in the "Philosophical Considerations" below, but let's just consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}. Now let's say it depends on two other variables, "weather" with values of {"sun", "rain", "snow"}, and "GW Bridge traffic" with values {"10 mph", "5 mph", "1 mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., low pressure system over Canada, butterfly flapping in Japan, etc.) and the Bridge traffic variable (e.g., an accident on I-95, presidential motorcade, etc.), all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables "screen off" West Side Highway traffic from all other potential influences. That is, all other influences act "through" them.
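The screening-off relation can be written as a function whose only arguments are the two direct influences; the rule itself is invented for the example, but its shape is the point: no secondary variable appears in the signature.

```python
def wsh_traffic(weather, gw_bridge_mph):
    # Hypothetical rule: highway traffic is fully determined by its two
    # direct influences; everything else acts only "through" them.
    if weather == "snow" or gw_bridge_mph <= 1:
        return "not moving at all"
    if weather == "rain" or gw_bridge_mph <= 5:
        return "moving deadly slow"
    return "moving slow"
```

A pressure system over Canada can only change the prediction by changing the value of the weather argument, never by bypassing it.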
Outside of purely mathematical considerations, perhaps the greatest value of functional decomposition is the insight it provides into the structure of the world. When a functional decomposition can be achieved, this provides ontological information about what structures actually exist in the world, and how they can be predicted and manipulated. For example, in the illustration above, if it is learned that the West Side Highway traffic depends directly only on the weather and the GW Bridge traffic, then for purposes of prediction it suffices to know only these two variables. Moreover, interventions to influence the West Side Highway traffic can be taken directly on these two variables, and nothing additional can be gained by intervening on the secondary variables, since these only act through the weather and the Bridge traffic in any case.
The philosophical antecedents and ramifications of functional decomposition are quite broad, as functional decomposition in one guise or another underlies all of modern science. Here we review just a few of these philosophical considerations.
One of the major distinctions that is often drawn between Eastern philosophy and Western philosophy is that the Eastern philosophers tended to espouse ideas favoring holism, while the Western thinkers tended to espouse ideas favoring reductionism. This distinction between East and West is akin to other philosophical distinctions (such as realism vs. anti-realism).
The Western tradition, from its origins among the Greek philosophers, preferred a position in which drawing correct distinctions, divisions, and contrasts was considered the very pinnacle of insight. In the Aristotelian/Porphyrian worldview, to be able to distinguish (via strict proof) which qualities of a thing represent its essence vs. property vs. accident vs. definition, and by virtue of this formal description to segregate that entity into its proper place in the taxonomy of nature — this was to achieve the very height of wisdom.
In natural or artificial systems that require components to be integrated in some fashion, but where the number of components exceeds what could reasonably be fully interconnected (due to the quadratic growth in the number of pairwise connections: n(n − 1)/2 for n components), one often finds that some degree of hierarchy must be employed in the solution. The general advantages of sparse hierarchical systems over densely connected systems, along with quantitative estimates of these advantages, have been presented in the literature. In prosaic terms, a hierarchy is "a collection of elements that combine lawfully into complex wholes which depend for their properties upon those of their constituent parts," and wherein novelty is "fundamentally combinatorial, iterative, and transparent".
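The growth rates involved are easy to check. A small sketch comparing a fully connected system with a two-level hierarchy (the grouping scheme here is an assumption chosen for illustration):

```python
def full_connections(n):
    # every element connected to every other: n choose 2
    return n * (n - 1) // 2

def two_level_connections(n, m):
    # n elements split into m fully connected modules of n//m elements,
    # plus a fully connected layer of the m module hubs
    # (assumes m divides n evenly)
    per_module = full_connections(n // m)
    return m * per_module + full_connections(m)

print(full_connections(100))           # dense: thousands of links
print(two_level_connections(100, 10))  # hierarchical: an order of magnitude fewer
```

With 100 components, full interconnection needs 4950 links; ten modules of ten, joined through their hubs, need only 495.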
An important notion that always arises in connection with hierarchies is modularity, which is effectively implied by the sparseness of connections in hierarchical topologies. In physical systems, a module is generally a set of interacting components that relates to the external world via a very limited interface, thus concealing most aspects of its internal structure. As a result, modifications made to the internals of a module (for example, to improve efficiency) do not necessarily create a ripple effect through the rest of the system. This feature makes the effective use of modularity a centerpiece of all good software and hardware engineering.
There are many compelling arguments regarding the prevalence and necessity of hierarchy/modularity in nature. Simon points out that among evolving systems, only those that can manage to obtain and then reuse stable subassemblies (modules) are likely to be able to search through the fitness landscape at a reasonably quick pace; thus, Simon submits that "among possible complex forms, hierarchies are the ones that have the time to evolve." This line of thinking has led to the even stronger claim that although "we do not know what forms of life have evolved on other planets in the universe, ... we can safely assume that 'wherever there is life, it must be hierarchically organized'". This would be a fortunate state of affairs, since the existence of simple and isolable subsystems is thought to be a precondition for successful science. In any case, experience certainly seems to indicate that much of the world possesses hierarchical structure.
It has been proposed that perception itself is a process of hierarchical decomposition, and that phenomena which are not essentially hierarchical in nature may not even be "theoretically intelligible" to the human mind.
Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems.
Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition. Reviews of other applications and methods of function decomposition, including approaches based on information theory and graph theory, can be found in the literature.
Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold "approximately". Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks.
See database normalization.
In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations.
However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution.
As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms.
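The storage efficiency behind such factorizations can be illustrated with a simple parameter count. This sketch assumes the simplest possible network, a plain Markov chain X1 → X2 → … → Xn, rather than a general Bayesian network:

```python
def joint_size(n, k):
    # the full joint distribution: one entry per configuration
    # of n variables with k states each
    return k ** n

def chain_factored_size(n, k):
    # factorization P(X1) * P(X2|X1) * ... * P(Xn|X_{n-1}):
    # P(X1) has k entries; each conditional table has k*k entries
    return k + (n - 1) * k * k

print(joint_size(10, 2))           # full joint over ten binary variables
print(chain_factored_size(10, 2))  # the same distribution, factored
```

Ten binary variables need 1024 joint entries but only 38 factored entries; the savings grow exponentially with chain length.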
Functional Decomposition is a design method intended to produce a non-implementation, architectural description of a computer program. Rather than conjecturing Objects and adding methods to them (OOP), with each Object intended to capture some service of the program, the software architect first establishes a series of functions and types that accomplish the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives Modules from this activity.
For example, the design of the editor Emacs can initially be thought about in terms of functions such as:

f : Keystroke → Command
g : Command → Buffer
h : Buffer → Screen

And a possible function decomposition of "f":

lex : Keystroke → Token
parse : Token → Expr
fromExpr : Expr → Command
This leads one to the plausible Module, Service, or Object, of an interpreter (containing the function "fromExpr"). Function Decomposition arguably yields insights about re-usability: if, during the course of analysis, two functions produce the same type, it is likely that a common function/cross-cutting concern resides in both. To contrast, in OOP, it is a common practice to conjecture Modules prior to considering such a decomposition. This arguably results in costly refactoring later. FD mitigates that risk to some extent. Further, arguably, what separates FD from other design methods is that it provides a concise high-level medium of architectural discourse that is end-to-end, revealing flaws in upstream requirements and beneficially exposing more design decisions in advance. And lastly, FD is known to prioritize development. Arguably, if the FD is correct, the most re-usable and cost-determined parts of the program are identified far earlier in the development cycle.
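A hypothetical rendering of such a decomposition in Python, with invented lex/parse stages feeding an interpreter function "fromExpr" (all names and representations here are illustrative assumptions, not Emacs internals):

```python
def lex(keystroke: str) -> list:
    # tokenizer: e.g. "C-x" -> ["C", "x"]
    return keystroke.split("-")

def parse(tokens: list) -> tuple:
    # a toy expression tree
    return ("chord",) + tuple(tokens)

def fromExpr(expr: tuple) -> str:
    # the interpreter: turns an expression into an editor command
    return "command:" + "-".join(expr[1:])

def f(keystroke: str) -> str:
    # the top-level function, recomposed from its constituent parts
    return fromExpr(parse(lex(keystroke)))

print(f("C-x"))
```

Because parse and fromExpr meet at the Expr type, they are natural candidates for grouping into an interpreter module, which is exactly the re-usability insight described above.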
Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, x(t). Then x(t) can be decomposed into a linear combination of other functions, called component signals:

x(t) = a_1·x_1(t) + a_2·x_2(t) + … + a_n·x_n(t)

Here, x_1(t), …, x_n(t) are the component signals. Note that a_1, …, a_n are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let T{·} represent the effect of the system, then the output signal is y(t) = T{x(t)}, which can be expressed as:

y(t) = T{a_1·x_1(t) + … + a_n·x_n(t)} = a_1·T{x_1(t)} + … + a_n·T{x_n(t)}
In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform.
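This superposition property can be demonstrated with a concrete LTI system, here discrete convolution with a short impulse response (the signals and coefficients are chosen arbitrarily for illustration):

```python
def T(x, h=(1.0, 0.5, 0.25)):
    # A concrete LTI system: discrete convolution of x with
    # a fixed impulse response h.
    n, m = len(x), len(h)
    return [sum(x[k] * h[i - k] for k in range(n) if 0 <= i - k < m)
            for i in range(n + m - 1)]

x1 = [1.0, 2.0, 3.0]
x2 = [0.0, -1.0, 1.0]
a1, a2 = 2.0, -3.0

# T applied to the combined input signal ...
combined = T([a1 * u + a2 * v for u, v in zip(x1, x2)])
# ... equals the same combination of T applied to each component.
separate = [a1 * u + a2 * v for u, v in zip(T(x1), T(x2))]
```

The two lists agree (up to floating-point rounding), confirming that the system acts separately on each component of the input.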
Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher-level system functions. The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description.
This exercise forces each part of the system to have a pure function. When the parts of a system are designed as pure functions, they can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function.
For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD, it can probably fit the same interfaces.
Franz Boas
Franz Uri Boas (1858–1942) was a German-born American anthropologist and a pioneer of modern anthropology who has been called the "Father of American Anthropology". His work is associated with the movements known as Historical Particularism and Cultural Relativism.
Studying in Germany, Boas was awarded a doctorate in 1881 in physics while also studying geography. He then participated in a geographical expedition to northern Canada, where he became fascinated with the culture and language of the Baffin Island Inuit. He went on to do field work with the indigenous cultures and languages of the Pacific Northwest. In 1887 he emigrated to the United States, where he first worked as a museum curator at the Smithsonian, and in 1899 became a professor of anthropology at Columbia University, where he remained for the rest of his career. Through his students, many of whom went on to found anthropology departments and research programmes inspired by their mentor, Boas profoundly influenced the development of American anthropology. Among his most significant students were A. L. Kroeber, Ruth Benedict, Edward Sapir, Margaret Mead, Zora Neale Hurston, Gilberto Freyre and many others.
Boas was one of the most prominent opponents of the then-popular ideologies of scientific racism, the idea that race is a biological concept and that human behavior is best understood through the typology of biological characteristics. In a series of groundbreaking studies of skeletal anatomy he showed that cranial shape and size was highly malleable depending on environmental factors such as health and nutrition, in contrast to the claims by racial anthropologists of the day that held head shape to be a stable racial trait. Boas also worked to demonstrate that differences in human behavior are not primarily determined by innate biological dispositions but are largely the result of cultural differences acquired through social learning. In this way, Boas introduced culture as the primary concept for describing differences in behavior between human groups, and as the central analytical concept of anthropology.
Among Boas's main contributions to anthropological thought was his rejection of the then-popular evolutionary approaches to the study of culture, which saw all societies progressing through a set of hierarchic technological and cultural stages, with Western European culture at the summit. Boas argued that culture developed historically through the interactions of groups of people and the diffusion of ideas and that consequently there was no process towards continuously "higher" cultural forms. This insight led Boas to reject the "stage"-based organization of ethnological museums, instead preferring to order items on display based on the affinity and proximity of the cultural groups in question.
Boas also introduced the ideology of cultural relativism, which holds that cultures cannot be objectively ranked as higher or lower, or better or more correct, but that all humans see the world through the lens of their own culture, and judge it according to their own culturally acquired norms. For Boas, the object of anthropology was to understand the way in which culture conditioned people to understand and interact with the world in different ways and to do this it was necessary to gain an understanding of the language and cultural practices of the people studied. By uniting the disciplines of archaeology, the study of material culture and history, and physical anthropology, the study of variation in human anatomy, with ethnology, the study of cultural variation of customs, and descriptive linguistics, the study of unwritten indigenous languages, Boas created the four-field subdivision of anthropology which became prominent in American anthropology in the 20th century.
Franz Boas was born on July 9, 1858, in Minden, Westphalia, the son of Sophie Meyer and Meier Boas. Although his grandparents were observant Jews, his parents embraced Enlightenment values, including their assimilation into modern German society. Boas's parents were educated, well-to-do, and liberal; they did not like dogma of any kind. Due to this, Boas was granted the independence to think for himself and pursue his own interests. Early in life, he displayed a penchant for both nature and natural sciences. Boas vocally opposed antisemitism and refused to convert to Christianity, but he did not identify himself as a Jew. This is disputed, however, by Ruth Bunzel, a protégée of Boas, who called him "the essential protestant; he valued autonomy above all things." According to his biographer, "He was an 'ethnic' German, preserving and promoting German culture and values in America." In an autobiographical sketch, Boas wrote:
The background of my early thinking was a German home in which the ideals of the revolution of 1848 were a living force. My father, liberal, but not active in public affairs; my mother, idealistic, with a lively interest in public matters; the founder about 1854 of the kindergarten in my hometown, devoted to science. My parents had broken through the shackles of dogma. My father had retained an emotional affection for the ceremonial of his parental home, without allowing it to influence his intellectual freedom.
From kindergarten on, Boas was educated in natural history, a subject he enjoyed. In gymnasium, he was most proud of his research on the geographic distribution of plants.
When he started his university studies, Boas first attended Heidelberg University for a semester followed by four terms at Bonn University, studying physics, geography, and mathematics at these schools. In 1879, he hoped to transfer to Berlin University to study physics under Hermann von Helmholtz, but ended up transferring to the University of Kiel instead due to family reasons. At Kiel, Boas studied under Theobald Fischer and received a doctorate in physics in 1881 for his dissertation entitled "Contributions to the Understanding of the Color of Water", which examined the absorption, reflection, and the polarization of light in seawater. Although technically Boas' doctorate degree was in physics, his advisor Fischer, a student of Carl Ritter, was primarily a geographer and thus some biographers view Boas as more of a geographer than a physicist at this stage. The combination of physics and geography also may have been accomplished through a major in physics and a minor in geography. For his part Boas self-identified as a geographer by this time, prompting his sister, Toni, to write in 1883 "After long years of infidelity, my brother was re-conquered by geography, the first love of his boyhood."
In his dissertation research, Boas' methodology included investigating how different intensities of light created different colors when interacting with different types of water. However, he encountered difficulty in objectively perceiving slight differences in the color of water, and as a result became intrigued by this problem of perception and its influence on quantitative measurements. Boas, due to tone deafness, encountered difficulties studying tonal languages such as Laguna. Boas had already been interested in Kantian philosophy since taking a course on aesthetics with Kuno Fischer at Heidelberg. These factors led Boas to consider pursuing research in psychophysics, which explores the relationship between the psychological and the physical, after completing his doctorate, but he had no training in psychology. Boas did publish six articles on psychophysics during his year of military service (1882–1883), but ultimately he decided to focus on geography, primarily so he could receive sponsorship for his planned Baffin Island expedition.
Boas took up geography as a way to explore his growing interest in the relationship between subjective experience and the objective world. At the time, German geographers were divided over the causes of cultural variation. Many argued that the physical environment was the principal determining factor, but others (notably Friedrich Ratzel) argued that the diffusion of ideas through human migration is more important. In 1883, encouraged by Theobald Fischer, Boas went to Baffin Island to conduct geographic research on the impact of the physical environment on native Inuit migrations. The first of many ethnographic field trips, Boas culled his notes to write his first monograph titled "The Central Eskimo", which was published in 1888 in the 6th Annual Report from the Bureau of American Ethnology. Boas lived and worked closely with the Inuit peoples on Baffin Island, and he developed an abiding interest in the way people lived.
In the perpetual darkness of the Arctic winter, Boas reported, he and his traveling companion became lost and were forced to keep sledding for twenty-six hours through ice, soft snow, and temperatures that dropped below −46 °C. The following day, Boas penciled a reflection in his diary.
Boas went on to explain in the same entry that "all service, therefore, which a man can perform for humanity must serve to promote truth." Boas was forced to depend on various Inuit groups for everything from directions and food to shelter and companionship. It was a difficult year filled with tremendous hardships that included frequent bouts of disease, mistrust, pestilence, and danger. Boas successfully searched for areas not yet surveyed and found unique ethnographic objects, but the long winter and the lonely treks across perilous terrain forced him to search his soul to find a direction for his life as a scientist and a citizen.
Boas's interest in indigenous communities grew as he worked at the Royal Ethnological Museum in Berlin, where he was introduced to members of the Nuxalk Nation of British Columbia, which sparked a lifelong relationship with the First Nations of the Pacific Northwest.
He returned to Berlin to complete his studies. In 1886, Boas defended (with Helmholtz's support) his habilitation thesis, "Baffin Land", and was named "privatdozent" in geography.
While on Baffin Island he began to develop his interest in studying non-Western cultures (resulting in his book, "The Central Eskimo", published in 1888). In 1885, Boas went to work with physical anthropologist Rudolf Virchow and ethnologist Adolf Bastian at the Royal Ethnological Museum in Berlin. Boas had studied anatomy with Virchow two years earlier while preparing for the Baffin Island expedition. At the time, Virchow was involved in a vociferous debate over evolution with his former student, Ernst Haeckel. Haeckel had abandoned his medical practice to study comparative anatomy after reading Charles Darwin's "The Origin of Species", and vigorously promoted Darwin's ideas in Germany. However, like most other natural scientists prior to the rediscovery of Mendelian genetics in 1900 and the development of the modern synthesis, Virchow felt that Darwin's theories were weak because they lacked a theory of cellular mutability. Accordingly, Virchow favored Lamarckian models of evolution. This debate resonated with debates among geographers. Lamarckians believed that environmental forces could precipitate rapid and enduring changes in organisms that had no inherited source; thus, Lamarckians and environmental determinists often found themselves on the same side of debates.
But Boas worked more closely with Bastian, who was noted for his antipathy to environmental determinism. Instead, he argued for the "psychic unity of mankind", a belief that all humans had the same intellectual capacity, and that all cultures were based on the same basic mental principles. Variations in custom and belief, he argued, were the products of historical accidents. This view resonated with Boas's experiences on Baffin Island and drew him towards anthropology.
While at the Royal Ethnological Museum Boas became interested in the Native Americans in the Pacific Northwest, and after defending his habilitation thesis, he left for a three-month trip to British Columbia via New York. In January 1887, he was offered a job as assistant editor of the journal "Science". Alienated by growing antisemitism and nationalism as well as the very limited academic opportunities for a geographer in Germany, Boas decided to stay in the United States. Possibly he received additional motivation for this decision from his romance with Marie Krackowizer, whom he married in the same year.
Aside from his editorial work at "Science", Boas secured an appointment as "docent" in anthropology at Clark University, in 1888. Boas was concerned about university president G. Stanley Hall's interference in his research, yet in 1889 he was appointed as the head of a newly created department of anthropology at Clark University. In the early 1890s, he went on a series of expeditions which were referred to as the Morris K. Jesup Expedition. The primary goal of these expeditions was to illuminate Asiatic-American relations. In 1892 Boas, along with another member of the Clark faculty, resigned in protest of the alleged infringement by Hall on academic freedom.
Anthropologist Frederic Ward Putnam, director and curator of the Peabody Museum at Harvard University, who had been appointed as head of the Department of Ethnology and Archeology for the Chicago Fair in 1892, chose Boas as his first assistant at Chicago to prepare for the 1893 World's Columbian Exposition or Chicago World's Fair, the 400th anniversary of Christopher Columbus's arrival in the Americas. Boas had a chance to apply his approach to exhibits. Boas directed a team of about one hundred assistants, mandated to create anthropology and ethnology exhibits on the Indians of North America and South America that were living at the time Christopher Columbus arrived in America while searching for India. Putnam intended the World's Columbian Exposition to be a celebration of Columbus' voyage. Putnam argued that showing late nineteenth century Inuit and First Nations (then called Eskimo and Indians) "in their natural conditions of life" would provide a contrast and celebrate the four centuries of Western accomplishments since 1493.
Franz Boas traveled north to gather ethnographic material for the Exposition. Boas had intended public science in creating exhibitions for the Exposition where visitors to the Midway could learn about other cultures. Boas arranged for fourteen Kwakwaka'wakw aboriginals from British Columbia to come and reside in a mock Kwakwaka'wakw village, where they could perform their daily tasks in context. Inuit were there with 12-foot-long whips made of sealskin, wearing sealskin clothing and showing how adept they were in sealskin kayaks. His experience with the Exposition provided the first of a series of shocks to Franz Boas' faith in public anthropology. The visitors were not there to be educated. By 1916, Boas had come to recognize with a certain resignation that "the number of people in our country who are willing and able to enter into the modes of thought of other nations is altogether too small ... The American who is cognizant only of his own standpoint sets himself up as arbiter of the world."
After the exposition, the ethnographic material collected formed the basis of the newly created Field Museum in Chicago with Boas as the curator of anthropology. He worked there until 1894, when he was replaced (against his will) by BAE archeologist William Henry Holmes.
In 1896, Boas was appointed Assistant Curator of Ethnology and Somatology of the American Museum of Natural History under Putnam. In 1897, he organized the Jesup North Pacific Expedition, a five-year-long field-study of the natives of the Pacific Northwest, whose ancestors had migrated across the Bering Strait from Siberia. He attempted to organize exhibits along contextual, rather than evolutionary, lines. He also developed a research program in line with his curatorial goals: describing his instructions to his students in terms of widening contexts of interpretation within a society, he explained that "... they get the specimens; they get explanations of the specimens; they get connected texts that partly refer to the specimens and partly to abstract things concerning the people; and they get grammatical information". These widening contexts of interpretation were abstracted into one context, the context in which the specimens, or assemblages of specimens, would be displayed: "... we want a collection arranged according to tribes, in order to teach the particular style of each group". His approach, however, brought him into conflict with the President of the Museum, Morris Jesup, and its director, Hermon Bumpus. By 1900 Boas had begun to retreat from American museum anthropology as a tool of education or reform (Hinsley 1992: 361). He resigned in 1905, never to work for a museum again.
Some scholars, like Boas's student Alfred Kroeber, believed that Boas used his research in physics as a model for his work in anthropology. Many others, however—including Boas's student Alexander Lesser, and later researchers such as Marian W. Smith, Herbert S. Lewis, and Matti Bunzl—have pointed out that Boas explicitly rejected physics in favor of history as a model for his anthropological research.
This distinction between science and history has its origins in 19th-century German academe, which distinguished between "Naturwissenschaften" (the sciences) and "Geisteswissenschaften" (the humanities), or between "Gesetzwissenschaften" (the law-giving sciences) and "Geschichtswissenschaften" (history). Generally, "Naturwissenschaften" and "Gesetzwissenschaften" refer to the study of phenomena that are governed by objective natural laws, while the latter terms in the two oppositions refer to those phenomena that have meaning only in terms of human perception or experience.
In 1884, Kantian philosopher Wilhelm Windelband coined the terms nomothetic and idiographic to describe these two divergent approaches. He observed that most scientists employ some mix of both, but in differing proportions; he considered physics a perfect example of a nomothetic science, and history, an idiographic science. Moreover, he argued that each approach has its origin in one of the two "interests" of reason Kant had identified in the "Critique of Judgement"—one "generalizing", the other "specifying". (Windelband's student Heinrich Rickert elaborated on this distinction in "The Limits of Concept Formation in Natural Science: A Logical Introduction to the Historical Sciences"; Boas's students Alfred Kroeber and Edward Sapir relied extensively on this work in defining their own approach to anthropology.)
Although Kant considered these two interests of reason to be objective and universal, the distinction between the natural and human sciences was institutionalized in Germany, through the organization of scholarly research and teaching, following the Enlightenment. In Germany, the Enlightenment was dominated by Kant himself, who sought to establish principles based on universal rationality. In reaction to Kant, German scholars such as Johann Gottfried Herder (an influence on Boas) argued that human creativity, which necessarily takes unpredictable and highly diverse forms, is as important as human rationality. In 1795, the great linguist and philosopher Wilhelm von Humboldt called for an anthropology that would synthesize Kant's and Herder's interests. Humboldt founded the University of Berlin in 1809, and his work in geography, history, and psychology provided the milieu in which Boas's intellectual orientation matured.
Historians working in the Humboldtian tradition developed ideas that would become central in Boasian anthropology. Leopold von Ranke defined the task of the historian as "merely to show how it actually was", which is a cornerstone of Boas's empiricism. Wilhelm Dilthey emphasized the centrality of "understanding" to human knowledge, and argued that the lived experience of a historian could provide a basis for an empathic understanding of the situation of a historical actor. For Boas, both values were well-expressed in a quote from Goethe: "A single action or event is interesting, not because it is explainable, but because it is true."
The influence of these ideas on Boas is apparent in his 1887 essay, "The Study of Geography", in which he distinguished between physical science, which seeks to discover the laws governing phenomena, and historical science, which seeks a thorough understanding of phenomena on their own terms. Boas argued that geography is and must be historical in this sense. In 1887, after his Baffin Island expedition, Boas wrote "The Principles of Ethnological Classification", in which he developed this argument in application to anthropology:
This formulation echoes Ratzel's focus on historical processes of human migration and culture contact and Bastian's rejection of environmental determinism. It also emphasizes culture as a context ("surroundings"), and the importance of history. These are the hallmarks of Boasian anthropology (which Marvin Harris would later call "historical particularism"); they would guide Boas's research over the next decade, as well as his instructions to future students. (See Lewis 2001b for an alternative view to Harris'.)
Although context and history were essential elements to Boas's understanding of anthropology as "Geisteswissenschaften" and "Geschichtswissenschaften", there is one essential element that Boasian anthropology shares with "Naturwissenschaften": empiricism. In 1949, Boas's student Alfred Kroeber summed up the three principles of empiricism that define Boasian anthropology as a science:
One of the greatest accomplishments of Boas and his students was their critique of theories of physical, social, and cultural evolution current at that time. This critique is central to Boas's work in museums, as well as his work in all four fields of anthropology. As historian George Stocking noted, however, Boas's main project was to distinguish between biological and cultural heredity, and to focus on the cultural processes that he believed had the greatest influence over social life. In fact, Boas supported Darwinian theory, although he did not assume that it automatically applied to cultural and historical phenomena (and indeed was a lifelong opponent of 19th-century theories of cultural evolution, such as those of Lewis H. Morgan and Edward Burnett Tylor). The notion of evolution that the Boasians ridiculed and rejected was the then dominant belief in orthogenesis—a determinate or teleological process of evolution in which change occurs progressively regardless of natural selection. Boas rejected the prevalent theories of social evolution developed by Edward Burnett Tylor, Lewis Henry Morgan, and Herbert Spencer not because he rejected the notion of "evolution" per se, but because he rejected orthogenetic notions of evolution in favor of Darwinian evolution.
The difference between these prevailing theories of cultural evolution and Darwinian theory cannot be overstated: the orthogeneticists argued that all societies progress through the same stages in the same sequence. Thus, although the Inuit with whom Boas worked at Baffin Island, and the Germans with whom he studied as a graduate student, were contemporaries of one another, evolutionists argued that the Inuit were at an earlier stage in their evolution, and Germans at a later stage.
Boasians argued that virtually every claim made by cultural evolutionists was contradicted by the data, or reflected a profound misinterpretation of the data. As Boas's student Robert Lowie remarked, "Contrary to some misleading statements on the subject, there have been no responsible opponents of evolution as 'scientifically proved', though there has been determined hostility to an evolutionary metaphysics that falsifies the established facts". In an unpublished lecture, Boas characterized his debt to Darwin thus:
Although the idea does not appear quite definitely expressed in Darwin's discussion of the development of mental powers, it seems quite clear that his main object has been to express his conviction that the mental faculties developed essentially without a purposive end, but they originated as variations, and were continued by natural selection. This idea was also brought out very clearly by Wallace, who emphasized that apparently reasonable activities of man might very well have developed without an actual application of reasoning.
Thus, Boas suggested that what appear to be patterns or structures in a culture were not a product of conscious design, but rather the outcome of diverse mechanisms that produce cultural variation (such as diffusion and independent invention), shaped by the social environment in which people live and act. Boas concluded his lecture by acknowledging the importance of Darwin's work: "I hope I may have succeeded in presenting to you, however imperfectly, the currents of thought due to the work of the immortal Darwin which have helped to make anthropology what it is at the present time."
In the late 19th century anthropology in the United States was dominated by the Bureau of American Ethnology, directed by John Wesley Powell, a geologist who favored Lewis Henry Morgan's theory of cultural evolution. The BAE was housed at the Smithsonian Institution in Washington, and the Smithsonian's curator for ethnology, Otis T. Mason, shared Powell's commitment to cultural evolution. (The Peabody Museum at Harvard University was an important, though lesser, center of anthropological research.)
It was while working on museum collections and exhibitions that Boas formulated his basic approach to culture, which led him to break with museums and seek to establish anthropology as an academic discipline.
During this period Boas made five more trips to the Pacific Northwest. His continuing field research led him to think of culture as a local context for human action. His emphasis on local context and history led him to oppose the dominant model at the time, cultural evolution.
Boas initially broke with evolutionary theory over the issue of kinship. Lewis Henry Morgan had argued that all human societies move from an initial form of matrilineal organization to patrilineal organization. First Nations groups on the northern coast of British Columbia, like the Tsimshian and the Tlingit, were organized into matrilineal clans. First Nations on the southern coast, like the Nootka and the Salish, however, were organized into patrilineal groups. Boas focused on the Kwakiutl, who lived between the two clusters. The Kwakiutl seemed to have a mix of features. Prior to marriage, a man would assume his wife's father's name and crest. His children took on these names and crests as well, although his sons would lose them when they got married. Names and crests thus stayed in the mother's line. At first, Boas—like Morgan before him—suggested that the Kwakiutl had been matrilineal like their neighbors to the north, but that they were beginning to evolve patrilineal groups. In 1897, however, he reversed himself, arguing that the Kwakiutl were changing from a prior patrilineal organization to a matrilineal one, as they learned about matrilineal principles from their northern neighbors.
Boas's rejection of Morgan's theories led him, in an 1887 article, to challenge Mason's principles of museum display. At stake, however, were more basic issues of causality and classification. The evolutionary approach to material culture led museum curators to organize objects on display according to function or level of technological development. Curators assumed that changes in the forms of artifacts reflect some natural process of progressive evolution. Boas, however, felt that the form an artifact took reflected the circumstances under which it was produced and used. Arguing that "[t]hough like causes have like effects like effects have not like causes", Boas realized that even artifacts that were similar in form might have developed in very different contexts, for different reasons. Mason's museum displays, organized along evolutionary lines, mistakenly juxtaposed like effects; those organized along contextual lines would reveal like causes.
In his capacity as Assistant Curator at the American Museum of Natural History, Franz Boas requested that Arctic explorer Robert E. Peary bring one Inuk from Greenland to New York. Peary obliged and brought six Inuit to New York in 1897 who lived in the basement of the American Museum of Natural History. Four of them died from tuberculosis within a year of arriving in New York, one returned to Greenland, and a young boy, Minik Wallace, remained living in the museum. Boas staged a funeral for the father of the boy and had the remains dissected and placed in the museum. Boas has been widely critiqued for his role in bringing the Inuit to New York and his disinterest in them once they had served their purpose at the museum.
Boas was appointed a lecturer in physical anthropology at Columbia University in 1896, and promoted to professor of anthropology in 1899. However, the various anthropologists teaching at Columbia had been assigned to different departments. When Boas left the Museum of Natural History, he negotiated with Columbia University to consolidate the various professors into one department, of which Boas would take charge. Boas's program at Columbia was the first Doctor of Philosophy (PhD) program in anthropology in America.
During this time Boas played a key role in organizing the American Anthropological Association (AAA) as an umbrella organization for the emerging field. Boas originally wanted the AAA to be limited to professional anthropologists, but W. J. McGee (another geologist who had joined the BAE under Powell's leadership) argued that the organization should have an open membership. McGee's position prevailed and he was elected the organization's first president in 1902; Boas was elected a vice-president, along with Putnam, Powell, and Holmes.
At both Columbia and the AAA, Boas encouraged the "four-field" concept of anthropology; he personally contributed to physical anthropology, linguistics, archaeology, as well as cultural anthropology. His work in these fields was pioneering: in physical anthropology he led scholars away from static taxonomical classifications of race, to an emphasis on human biology and evolution; in linguistics he broke through the limitations of classic philology and established some of the central problems in modern linguistics and cognitive anthropology; in cultural anthropology he (along with the Polish-English anthropologist Bronisław Malinowski) established the contextualist approach to culture, cultural relativism, and the participant observation method of fieldwork.
The four-field approach, understood not merely as bringing together different kinds of anthropologists into one department, but as reconceiving anthropology through the integration of different objects of anthropological research into one overarching object, was one of Boas's fundamental contributions to the discipline, and came to characterize American anthropology against that of England, France, or Germany. This approach defines as its object the human species as a totality. This focus did not lead Boas to seek to reduce all forms of humanity and human activity to some lowest common denominator; rather, he understood the essence of the human species to be the tremendous variation in human form and activity (an approach that parallels Charles Darwin's approach to species in general).
In his 1907 essay, "Anthropology", Boas identified two basic questions for anthropologists: "Why are the tribes and nations of the world different, and how have the present differences developed?" Amplifying these questions, he explained the object of anthropological study thus:
We do not discuss the anatomical, physiological, and mental characteristics of a man considered as an individual; but we are interested in the diversity of these traits in groups of men found in different geographical areas and in different social classes. It is our task to inquire into the causes that have brought about the observed differentiation and to investigate the sequence of events that have led to the establishment of the multifarious forms of human life. In other words, we are interested in the anatomical and mental characteristics of men living under the same biological, geographical, and social environment, and as determined by their past.
These questions signal a marked break from then-current ideas about human diversity, which assumed that some people have a history, evident in a historical (or written) record, while other people, lacking writing, also lack history. For some, this distinction between two different kinds of societies explained the difference between history, sociology, economics and other disciplines that focus on people with writing, and anthropology, which was supposed to focus on people without writing. Boas rejected this distinction between kinds of societies, and this division of labor in the academy. He understood all societies to have a history, and all societies to be proper objects of anthropological study. In order to approach literate and non-literate societies the same way, he emphasized the importance of studying human history through the analysis of other things besides written texts. Thus, in his 1904 article, "The History of Anthropology", Boas wrote that
The historical development of the work of anthropologists seems to single out clearly a domain of knowledge that heretofore has not been treated by any other science. It is the biological history of mankind in all its varieties; linguistics applied to people without written languages; the ethnology of people without historical records; and prehistoric archeology.
Historians and social theorists in the 18th and 19th centuries had speculated as to the causes of this differentiation, but Boas dismissed these theories, especially the dominant theories of social evolution and cultural evolution, as speculative. He endeavored to establish a discipline that would base its claims on a rigorous empirical study.
One of Boas's most important books, "The Mind of Primitive Man" (1911), integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology for the next fifteen years. In this study, he established that in any given population, biology, language, material, and symbolic culture, are autonomous; that each is an equally important dimension of human nature, but that no one of these dimensions is reducible to another. In other words, he established that culture does not depend on any independent variables. He emphasized that the biological, linguistic, and cultural traits of any group of people are the product of historical developments involving both cultural and non-cultural forces. He established that cultural plurality is a fundamental feature of humankind and that the specific cultural environment structures much individual behavior.
Boas also presented himself as a role model for the citizen-scientist, who understands that even when truth is pursued as an end in itself, all knowledge has moral consequences. "The Mind of Primitive Man" ends with an appeal to humanism:
I hope the discussions outlined in these pages have shown that the data of anthropology teach us a greater tolerance of forms of civilization different from our own, that we should learn to look on foreign races with greater sympathy and with a conviction that, as all races have contributed in the past to cultural progress in one way or another, so they will be capable of advancing the interests of mankind if we are only willing to give them a fair opportunity.
Boas's work in physical anthropology brought together his interest in Darwinian evolution with his interest in migration as a cause of change. His most important research in this field was his study of changes in bodily form among children of immigrants in New York. Other researchers had already noted differences in height, cranial measurements, and other physical features between Americans and people from different parts of Europe. Many used these differences to argue that there is an innate biological difference between races. Boas's primary interest—in symbolic and material culture and in language—was the study of processes of change; he, therefore, set out to determine whether bodily forms are also subject to processes of change. Boas studied 17,821 people, divided into seven ethno-national groups. Boas found that average measures of the cranial size of immigrants were significantly different from those of members of these groups who were born in the United States. Moreover, he discovered that average measures of the cranial size of children born within ten years of their mothers' arrival were significantly different from those of children born more than ten years after their mothers' arrival. Boas did not deny that physical features such as height or cranial size were inherited; he did, however, argue that the environment has an influence on these features, which is expressed through change over time. This work was central to his influential argument that differences between races were not immutable. Boas observed:
The head form, which has always been one of the most stable and permanent characteristics of human races, undergoes far-reaching changes due to the transfer of European races to American soil. The East European Hebrew, who has a round head, becomes more long-headed; the South Italian, who in Italy has an exceedingly long head, becomes more short-headed; so that both approach a uniform type in this country, so far as the head is concerned.
These findings were radical at the time and continue to be debated. In 2002, the anthropologists Corey S. Sparks and Richard L. Jantz claimed that differences between children born to the same parents in Europe and America were very small and insignificant and that there was no detectable effect of exposure to the American environment on the cranial index in children. They argued that their results contradicted Boas's original findings and demonstrated that they may no longer be used to support arguments of plasticity in cranial morphology. However, Jonathan Marks—a well-known physical anthropologist and former president of the General Anthropology section of the American Anthropological Association—has remarked that this revisionist study of Boas's work "has the ring of desperation to it (if not obfuscation), and has been quickly rebutted by more mainstream biological anthropology". In 2003 anthropologists Clarence C. Gravlee, H. Russell Bernard, and William R. Leonard reanalyzed Boas's data and concluded that most of Boas's original findings were correct. Moreover, they applied new statistical, computer-assisted methods to Boas's data and discovered more evidence for cranial plasticity. In a later publication, Gravlee, Bernard and Leonard reviewed Sparks and Jantz's analysis. They argue that Sparks and Jantz misrepresented Boas's claims and that Sparks's and Jantz's data actually support Boas. For example, they point out that Sparks and Jantz look at changes in cranial size in relation to how long an individual has been in the United States in order to test the influence of the environment. Boas, however, looked at changes in cranial size in relation to how long the mother had been in the United States. They argue that Boas's method is more useful because the prenatal environment is a crucial developmental factor.
A further publication by Jantz based on Gravlee et al. claims that Boas had cherry picked two groups of immigrants (Sicilians and Hebrews) which had varied most towards the same mean, and discarded other groups which had varied in the opposite direction. He commented, "Using the recent reanalysis by Gravlee et al. (2003), we can observe in Figure 2 that the maximum difference in the cranial index due to immigration (in Hebrews) is much smaller than the maximum ethnic difference, between Sicilians and Bohemians. It shows that long-headed parents produce long headed offspring and vice versa. To make the argument that children of immigrants converge onto an "American type" required Boas to use the two groups that changed the most."
Although some sociobiologists and evolutionary psychologists have suggested that Boas was opposed to Darwinian evolution, Boas, in fact, was a committed proponent of Darwinian evolutionary thought. In 1888, he declared that "the development of ethnology is largely due to the general recognition of the principle of biological evolution"; since Boas's times, physical anthropologists have established that the human capacity for culture is a product of human evolution. In fact, Boas's research on changes in body form played an important role in the rise of Darwinian theory. Boas was trained at a time when biologists had no understanding of genetics; Mendelian genetics became widely known only after 1900. Prior to that time biologists relied on the measurement of physical traits as empirical data for any theory of evolution. Boas's biometric studies, however, led him to question the use of this method and kind of data. In a speech to anthropologists in Berlin in 1912, Boas argued that at best such statistics could only raise biological questions, and not answer them. It was in this context that anthropologists began turning to genetics as a basis for any understanding of biological variation.
Boas also contributed greatly to the foundation of linguistics as a science in the United States. He published many descriptive studies of Native American languages, wrote on theoretical difficulties in classifying languages, and laid out a research program for studying the relations between language and culture which his students such as Edward Sapir, Paul Rivet, and Alfred Kroeber followed.
His 1889 article "On Alternating Sounds", however, made a singular contribution to the methodology of both linguistics and cultural anthropology. It is a response to a paper presented in 1888 by Daniel Garrison Brinton, at the time a professor of American linguistics and archeology at the University of Pennsylvania. Brinton observed that in the spoken languages of many Native Americans, certain sounds regularly alternated. Brinton argued that this pervasive inconsistency was a sign of linguistic and evolutionary inferiority.
Boas had heard similar phonetic shifts during his research in Baffin Island and in the Pacific Northwest. Nevertheless, he argued that "alternating sounds" is not at all a feature of Native American languages—indeed, he argued, they do not really exist. Rather than take alternating sounds as objective proof of different stages in cultural evolution, Boas considered them in terms of his longstanding interest in the subjective perception of objective physical phenomena. He also considered his earlier critique of evolutionary museum displays. There, he pointed out that two things (artifacts of material culture) that appear to be similar may, in fact, be quite different. In this article, he raises the possibility that two things (sounds) that appear to be different may, in fact, be the same.
In short, he shifted attention to the "perception" of different sounds. Boas begins by raising an empirical question: when people describe one sound in different ways, is it because they cannot perceive the difference, or might there be another reason? He immediately establishes that he is not concerned with cases involving perceptual deficit—the aural equivalent of color-blindness. He points out that the question of people who describe one sound in different ways is comparable to that of people who describe different sounds in one way. This is crucial for research in descriptive linguistics: when studying a new language, how are we to note the pronunciation of different words? (On this point, Boas anticipates and lays the groundwork for the distinction between phonemics and phonetics.) People may pronounce a word in a variety of ways and still recognize that they are using the same word. The issue, then, is not "that such sensations are not recognized in their individuality" (in other words, people recognize differences in pronunciations); rather, it is that sounds "are classified according to their similarity" (in other words, that people classify a variety of perceived sounds into one category). A comparable visual example would involve words for colors. The English word "green" can be used to refer to a variety of shades, hues, and tints. But there are some languages that have no word for "green". In such cases, people might classify what we would call "green" as either "yellow" or "blue". This is not an example of color-blindness—people can perceive differences in color, but they categorize similar colors in a different way than English speakers.
Boas applied these principles to his studies of Inuit languages. Researchers had reported a variety of spellings for a given word, and had interpreted this data in a number of ways—it could indicate local variations in the pronunciation of a word, or it could indicate different dialects. Boas argues an alternative explanation: that the difference is not in how Inuit pronounce the word, but rather in how English-speaking scholars perceive the pronunciation of the word. It is not that English speakers are physically incapable of perceiving the sound in question; rather, the phonetic system of English cannot accommodate the perceived sound.
Although Boas was making a very specific contribution to the methods of descriptive linguistics, his ultimate point is far reaching: observer bias need not be personal, it can be cultural. In other words, the perceptual categories of Western researchers may systematically cause a Westerner to misperceive or to fail to perceive entirely a meaningful element in another culture. As in his critique of Otis Mason's museum displays, Boas demonstrated that what appeared to be evidence of cultural evolution was really the consequence of unscientific methods and a reflection of Westerners' beliefs about their own cultural superiority. This point provides the methodological foundation for Boas's cultural relativism: elements of a culture are meaningful in that culture's terms, even if they may be meaningless (or take on a radically different meaning) in another culture.
The essence of Boas's approach to ethnography is found in his early essay on "The Study of Geography". There he argued for an approach that
This orientation led Boas to promote a cultural anthropology characterized by a strong commitment to
Boas argued that in order to understand "what is"—in cultural anthropology, the specific cultural traits (behaviors, beliefs, and symbols)—one had to examine them in their local context. He also understood that as people migrate from one place to another, and as the cultural context changes over time, the elements of a culture, and their meanings, will change, which led him to emphasize the importance of local histories for an analysis of cultures.
Although other anthropologists at the time, such as Bronisław Malinowski and Alfred Reginald Radcliffe-Brown, focused on the study of societies, which they understood to be clearly bounded, Boas's attention to history, which reveals the extent to which traits diffuse from one place to another, led him to view cultural boundaries as multiple and overlapping, and as highly permeable. Thus, Boas's student Robert Lowie once described culture as a thing of "shreds and patches". Boas and his students understood that as people try to make sense of their world they seek to integrate its disparate elements, with the result that different cultures could be characterized as having different configurations or patterns. But Boasians also understood that such integration was always in tension with diffusion, and any appearance of a stable configuration is contingent (see Bashkow 2004: 445).
During Boas's lifetime, as today, many Westerners saw a fundamental difference between modern societies, which are characterized by dynamism and individualism, and traditional societies which are stable and homogeneous. Boas's empirical field research, however, led him to argue against this comparison. For example, his 1903 essay, "Decorative Designs of Alaskan Needlecases: A History of Conventional Designs, Based on Materials in a U.S. Museum", provides another example of how Boas made broad theoretical claims based on a detailed analysis of empirical data. After establishing formal similarities among the needlecases, Boas shows how certain formal features provide a vocabulary out of which individual artisans could create variations in design. Thus, his emphasis on culture as a context for meaningful action made him sensitive to individual variation within a society (William Henry Holmes suggested a similar point in an 1886 paper, "Origin and development of form and ornament in ceramic art", although unlike Boas he did not develop the ethnographic and theoretical implications).
In a programmatic essay in 1920, "The Methods of Ethnology", Boas argued that instead of "the systematic enumeration of standardized beliefs and customs of a tribe", anthropology needs to document "the way in which the individual reacts to his whole social environment, and to the difference of opinion and of mode of action that occur in primitive society and which are the causes of far-reaching changes". Boas argued that attention to individual agency reveals that "the activities of the individual are determined to a great extent by his social environment, but in turn, his own activities influence the society in which he lives and may bring about modifications in a form". Consequently, Boas thought of culture as fundamentally dynamic: "As soon as these methods are applied, primitive society loses the appearance of absolute stability ... All cultural forms rather appear in a constant state of flux ..." (see Lewis 2001b)
Having argued against the relevance of the distinction between literate and non-literate societies as a way of defining anthropology's object of study, Boas argued that non-literate and literate societies should be analyzed in the same way. Nineteenth-century historians had been applying the techniques of philology to reconstruct the histories of, and relationships between, literate societies. In order to apply these methods to non-literate societies, Boas argued that the task of fieldworkers is to produce and collect texts in non-literate societies. This took the form not only of compiling lexicons and grammars of the local language, but of recording myths, folktales, beliefs about social relationships and institutions, and even recipes for local cuisine. In order to do this, Boas relied heavily on the collaboration of literate native ethnographers (among the Kwakiutl, most often George Hunt), and he urged his students to consider such people valuable partners, inferior in their standing in Western society, but superior in their understanding of their own culture. (see Bunzl 2004: 438–439)
Using these methods, Boas published another article in 1920, in which he revisited his earlier research on Kwakiutl kinship. In the late 1890s, Boas had tried to reconstruct transformation in the organization of Kwakiutl clans, by comparing them to the organization of clans in other societies neighboring the Kwakiutl to the north and south. Now, however, he argued against translating the Kwakiutl principle of kin groups into an English word. Instead of trying to fit the Kwakiutl into some larger model, he tried to understand their beliefs and practices in their own terms. For example, whereas he had earlier translated the Kwakiutl word "numaym" as "clan", he now argued that the word is best understood as referring to a bundle of privileges, for which there is no English word. Men secured claims to these privileges through their parents or wives, and there were a variety of ways these privileges could be acquired, used, and transmitted from one generation to the next. As in his work on alternating sounds, Boas had come to realize that different ethnological interpretations of Kwakiutl kinship were the result of the limitations of Western categories. As in his work on Alaskan needlecases, he now saw variation among Kwakiutl practices as the result of the play between social norms and individual creativity.
Before his death in 1942, he appointed Helen Codere to edit and publish his manuscripts about the culture of the Kwakiutl people.
Franz Boas was an immensely influential figure throughout the development of folklore as a discipline. At first glance, it might seem that his only concern was for the discipline of anthropology—after all, he fought for most of his life to keep folklore as a part of anthropology. Yet Boas was motivated by his desire to see both anthropology and folklore become more professional and well-respected. Boas feared that if folklore were allowed to become its own discipline, the standards for folklore scholarship would be lowered. This, combined with the scholarship of "amateurs", would, Boas believed, lead folklore to be completely discredited.
In order to further professionalize folklore, Boas introduced the strict scientific methods which he had learned in college to the discipline. Boas championed the use of exhaustive research, fieldwork, and strict scientific guidelines in folklore scholarship. Boas believed that a true theory could only be formed from thorough research, and that even an established theory should be treated as a "work in progress" unless it could be proved beyond doubt. This rigid scientific methodology was eventually accepted as one of the major tenets of folklore scholarship, and Boas's methods remain in use even today. Boas also nurtured many budding folklorists during his time as a professor, and some of his students are counted among the most notable minds in folklore scholarship.
Boas was passionate about the collection of folklore and believed that the similarity of folktales amongst different folk groups was due to dissemination. Boas strove to prove this theory, and his efforts produced a method for breaking a folktale into parts and then analyzing these parts. His creation of "catch-words" allowed for categorization of these parts, and the ability to analyze them in relation to other similar tales. Boas also fought to prove that not all cultures progressed along the same path, and that non-European cultures, in particular, were not primitive, but different.
Boas remained active in the development and scholarship of folklore throughout his life. He became the editor of the "Journal of American Folklore" in 1908 and regularly wrote and published articles on folklore, often in that journal. He helped to elect Louise Pound as president of the American Folklore Society in 1925.
Boas was known for passionately defending what he believed to be right. During his lifetime (and often through his work), Boas combated racism, berated anthropologists and folklorists who used their work as a cover for espionage, worked to protect German and Austrian scientists who fled the Nazi regime, and openly protested Hitlerism.
Many social scientists in other disciplines agonize over the legitimacy of their work as "science" and consequently emphasize the importance of detachment, objectivity, abstraction, and quantifiability in their work. Perhaps because Boas, like other early anthropologists, was originally trained in the natural sciences, he and his students never expressed such anxiety. Moreover, he did not believe that detachment, objectivity, and quantifiability were required to make anthropology scientific. Since the object of study of anthropologists is different from the object of study of physicists, he assumed that anthropologists would have to employ different methods and different criteria for evaluating their research. Thus, Boas used statistical studies to demonstrate the extent to which variation in data is context-dependent, and argued that the context-dependent nature of human variation rendered many abstractions and generalizations that had been passing as scientific understandings of humankind (especially theories of social evolution popular at the time) in fact unscientific. His understanding of ethnographic fieldwork began with the fact that the objects of ethnographic study (e.g., the Inuit of Baffin Island) were not just objects, but subjects, and his research called attention to their creativity and agency. More importantly, he viewed the Inuit as his teachers, thus reversing the typical hierarchical relationship between scientist and object of study.
This emphasis on the relationship between anthropologists and those they study—the point that, while astronomers and stars, chemists and elements, botanists and plants are fundamentally different, anthropologists and those they study are equally human—implied that anthropologists themselves could be objects of anthropological study. Although Boas did not pursue this reversal systematically, his article on alternating sounds illustrates his awareness that scientists should not be confident about their objectivity, because they too see the world through the prism of their culture.
This emphasis also led Boas to conclude that anthropologists have an obligation to speak out on social issues. Boas was especially concerned with racial inequality, which his research had indicated is not biological in origin, but rather social. Boas is credited as the first scientist to publish the idea that all people—including white and African Americans—are equal. He often emphasized his abhorrence of racism, and used his work to show that there was no scientific basis for such a bias. An early example of this concern is evident in his 1906 commencement address to Atlanta University, at the invitation of W. E. B. Du Bois. Boas began by remarking that "If you did accept the view that the present weakness of the American Negro, his uncontrollable emotions, his lack of energy, are racially inherent, your work would still be a noble one". He then went on, however, to argue against this view. To the claim that European and Asian civilizations were, at the time, more advanced than African societies, Boas objected that against the total history of humankind, the past two thousand years is but a brief span. Moreover, although the technological advances of our early ancestors (such as taming fire and inventing stone tools) might seem insignificant when compared to the invention of the steam engine or control over electricity, we should consider that they might actually be even greater accomplishments. Boas then went on to catalogue advances in Africa, such as smelting iron, cultivating millet, and domesticating chickens and cattle, that occurred in Africa well before they spread to Europe and Asia (evidence now suggests that chickens were first domesticated in Asia; the original domestication of cattle is under debate). He then described the activities of African kings, diplomats, merchants, and artists as evidence of cultural achievement. From this, he concluded, any social inferiority of Negroes in the United States cannot be explained by their African origins:
If therefore, it is claimed that your race is doomed to economic inferiority, you may confidently look to the home of your ancestors and say, that you have set out to recover for the colored people the strength that was their own before they set foot on the shores of this continent. You may say that you go to work with bright hopes and that you will not be discouraged by the slowness of your progress; for you have to recover not only what has been lost in transplanting the Negro race from its native soil to this continent, but you must reach higher levels than your ancestors ever had attained.
Boas proceeds to discuss the arguments for the inferiority of the "Negro race", and calls attention to the fact that they were brought to the Americas through force. For Boas, this is just one example of the many times conquest or colonialism has brought different peoples into an unequal relation, and he mentions "the conquest of England by the Normans, the Teutonic invasion of Italy, [and] the Manchu conquest of China" as resulting in similar conditions. But the best example, for Boas, of this phenomenon is that of the Jews in Europe:
Even now there lingers in the consciousness of the old, sharper divisions which the ages had not been able to efface, and which is strong enough to find—not only here and there—expression as antipathy to the Jewish type. In France, that let down the barriers more than a hundred years ago, the feeling of antipathy is still strong enough to sustain an anti-Jewish political party.
Boas's closing advice is that African Americans should not look to whites for approval or encouragement because people in power usually take a very long time to learn to sympathize with people out of power. "Remember that in every single case in history the process of adaptation has been one of exceeding slowness. Do not look for the impossible, but do not let your path deviate from the quiet and steadfast insistence on full opportunities for your powers."
Despite Boas's caveat about the intractability of white prejudice, he also considered it the scientist's responsibility to argue against white myths of racial purity and racial superiority and to use the evidence of his research to fight racism.
Boas was also critical of one nation imposing its power over others. In 1916, Boas wrote a letter to "The New York Times" which was published under the headline, "Why German-Americans Blame America". Although Boas did begin the letter by protesting bitter attacks against German Americans at the time of the war in Europe, most of his letter was a critique of American nationalism. "In my youth, I had been taught in school and at home not only to love the good of my own country, but also to seek to understand and to respect the individualities of other nations. For this reason, one-sided nationalism, that is so often found nowadays, is to me unendurable." He writes of his love for American ideals of freedom, and of his growing discomfort with American beliefs about its own superiority over others.
I have always been of the opinion that we have no right to impose our ideals upon other nations, no matter how strange it may seem to us that they enjoy the kind of life they lead, how slow they may be in utilizing the resources of their countries, or how much opposed their ideas may be to ours ... Our intolerant attitude is most pronounced in regard to what we like to call "our free institutions." Modern democracy was no doubt the most wholesome and needed reaction against the abuses of absolutism and of a selfish, often corrupt, bureaucracy. That the wishes and thoughts of the people should find expression, and that the form of government should conform to these wishes is an axiom that has pervaded the whole Western world, and that is even taking root in the Far East. It is a quite different question, however, in how far the particular machinery of democratic government is identical with democratic institutions ... To claim as we often do, that our solution is the only democratic and the ideal one is a one-sided expression of Americanism. I see no reason why we should not allow the Germans, Austrians, and Russians, or whoever else it may be, to solve their problems in their own ways, instead of demanding that they bestow upon themselves the benefactions of our regime.
Although Boas felt that scientists have a responsibility to speak out on social and political problems, he was appalled that they might involve themselves in disingenuous and deceitful ways. Thus, in 1919, when he discovered that four anthropologists, in the course of their research in other countries, were serving as spies for the American government, he wrote an angry letter to "The Nation". It is perhaps in this letter that he most clearly expresses his understanding of his commitment to science:
A soldier whose business is murder as a fine art, a diplomat whose calling is based on deception and secretiveness, a politician whose very life consists in compromises with his conscience, a businessman whose aim is personal profit within the limits allowed by a lenient law—such may be excused if they set patriotic deception above common everyday decency and perform services as spies. They merely accept the code of morality to which modern society still conforms. Not so the scientist. The very essence of his life is the service of truth. We all know scientists who in private life do not come up to the standard of truthfulness, but who, nevertheless, would not consciously falsify the results of their researches. It is bad enough if we have to put up with these because they reveal a lack of strength of character that is liable to distort the results of their work. A person, however, who uses science as a cover for political spying, who demeans himself to pose before a foreign government as an investigator and asks for assistance in his alleged researches in order to carry on, under this cloak, his political machinations, prostitutes science in an unpardonable way and forfeits the right to be classed as a scientist.
Although Boas did not name the spies in question, he was referring to a group led by Sylvanus G. Morley, who was affiliated with Harvard University's Peabody Museum. While conducting research in Mexico, Morley and his colleagues looked for evidence of German submarine bases, and collected intelligence on Mexican political figures and German immigrants in Mexico.
Boas's stance against spying took place in the context of his struggle to establish a new model for academic anthropology at Columbia University. Previously, American anthropology was based at the Smithsonian Institution in Washington and the Peabody Museum at Harvard, and these anthropologists competed with Boas's students for control over the American Anthropological Association (and its flagship publication "American Anthropologist"). When the National Academy of Sciences established the National Research Council in 1916 as a means by which scientists could assist the United States government to prepare for entry into the war in Europe, competition between the two groups intensified. Boas's rival, W. H. Holmes (who had gotten the job of Director at the Field Museum for which Boas had been passed over 26 years earlier), was appointed to head the NRC; Morley was a protégé of Holmes.
When Boas's letter was published, Holmes wrote to a friend complaining about "the Prussian control of anthropology in this country" and the need to end Boas's "Hun regime". Opinion was influenced by anti-German and probably also by anti-Jewish sentiment. The Anthropological Society of Washington passed a resolution condemning Boas's letter for unjustly criticizing President Wilson; attacking the principles of American democracy; and endangering anthropologists abroad, who would now be suspected of being spies (a charge that was especially insulting, given that his concerns about this very issue were what had prompted Boas to write his letter in the first place). This resolution was passed on to the American Anthropological Association (AAA) and the National Research Council. Members of the American Anthropological Association (among whom Boas was a founding member in 1902), meeting at the Peabody Museum of Archaeology and Ethnology at Harvard (with which Morley, Lothrop, and Spinden were affiliated), voted by 20 to 10 to censure Boas. As a result, Boas resigned as the AAA's representative to the NRC, although he remained an active member of the AAA. The AAA's censure of Boas was not rescinded until 2005.
Boas continued to speak out against racism and for intellectual freedom. When the Nazi Party in Germany denounced "Jewish Science" (which included not only Boasian anthropology but Freudian psychoanalysis and Einsteinian physics), Boas responded with a public statement signed by over 8,000 other scientists, declaring that there is only one science, to which race and religion are irrelevant. After World War I, Boas created the Emergency Society for German and Austrian Science. This organization was originally dedicated to fostering friendly relations between American, German, and Austrian scientists, to providing research funding to German scientists who had been adversely affected by the war, and to helping scientists who had been interned. With the rise of Nazi Germany, Boas assisted German scientists in fleeing the Nazi regime. Boas helped these scientists not only to escape but to secure positions once they arrived. Additionally, Boas addressed an open letter to Paul von Hindenburg in protest against Hitlerism. He also wrote an article in The American Mercury arguing that there were no differences between Aryans and non-Aryans, and that the German government should not base its policies on such a false premise.
Boas and his students, such as Melville J. Herskovits, opposed the racist pseudoscience developed at the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics under its director Eugen Fischer. Herskovits pointed out that the health problems and social prejudices encountered by these children (the so-called "Rhineland Bastards") and their parents explained that what Germans viewed as racial inferiority was not due to racial heredity. This "provoked polemic invective against the latter [Boas] from Fischer. 'The views of Mr. Boas are in part quite ingenious, but in the field of heredity Mr. Boas is by no means competent' even though 'a great number of research projects at the KWI-A which had picked up on Boas' studies about immigrants in New York had confirmed his findings—including the study by Walter Dornfeldt about Eastern European Jews in Berlin. Fischer resorted to polemic simply because he had no arguments to counter the Boasians' critique.'"
Franz Boas died suddenly at the Columbia University Faculty Club on December 21, 1942, in the arms of Claude Lévi-Strauss. By that time he had become one of the most influential and respected scientists of his generation.
Between 1901 and 1911, Columbia University produced seven PhDs in anthropology. Although by today's standards this is a very small number, at the time it was sufficient to establish Boas's Anthropology Department at Columbia as the preeminent anthropology program in the country. Moreover, many of Boas's students went on to establish anthropology programs at other major universities.
Boas's first doctoral student at Columbia was Alfred L. Kroeber (1901), who, along with fellow Boas student Robert Lowie (1908), started the anthropology program at the University of California, Berkeley. He also trained William Jones (1904), one of the first Native American anthropologists (a member of the Fox nation), who was killed while conducting research in the Philippines in 1909, and Albert B. Lewis (1907). Boas also trained a number of other students who were influential in the development of academic anthropology: Frank Speck (1908), who trained with Boas but received his PhD from the University of Pennsylvania and immediately proceeded to found the anthropology department there; Edward Sapir (1909) and Fay-Cooper Cole (1914), who developed the anthropology program at the University of Chicago; Alexander Goldenweiser (1910), who, with Elsie Clews Parsons (who received her doctorate in sociology from Columbia in 1899, but then studied ethnology with Boas), started the anthropology program at the New School for Social Research; Leslie Spier (1920), who started the anthropology program at the University of Washington together with his wife Erna Gunther, also one of Boas's students; and Melville Herskovits (1923), who started the anthropology program at Northwestern University. He also trained John R. Swanton (who studied with Boas at Columbia for two years before receiving his doctorate from Harvard in 1900), Paul Radin (1911), Ruth Benedict (1923), Gladys Reichard (1925), who had begun teaching at Barnard College in 1921 and was later promoted to the rank of professor, Ruth Bunzel (1929), Alexander Lesser (1929), Margaret Mead (1929), Gene Weltfish (who defended her dissertation in 1929, although she did not officially graduate until 1950 when Columbia reduced the expenses required to graduate), E. Adamson Hoebel (1934), Jules Henry (1935), George Herzog (1938), and Ashley Montagu (1938).
His students at Columbia also included Mexican anthropologist Manuel Gamio, who earned his Master of Arts degree after studying with Boas from 1909 to 1911, and became the founding director of Mexico's Bureau of Anthropology in 1917; Clark Wissler, who received his doctorate in psychology from Columbia University in 1901, but proceeded to study anthropology with Boas before turning to research on Native Americans; Esther Schiff (later Goldfrank), who worked with Boas in the summers of 1920 to 1922 conducting research among the Cochiti and Laguna Pueblo Indians in New Mexico; Gilberto Freyre, who shaped the concept of "racial democracy" in Brazil; Viola Garfield, who carried forth Boas's Tsimshian work; Frederica de Laguna, who worked on the Inuit and the Tlingit; and anthropologist, folklorist, and novelist Zora Neale Hurston, who graduated from Barnard College, the women's college associated with Columbia, in 1928, and who studied African American and Afro-Caribbean folklore.
Boas and his students were also an influence on Claude Lévi-Strauss, who interacted with Boas and the Boasians during his stay in New York in the 1940s.
Several of Boas's students went on to serve as editors of the American Anthropological Association's flagship journal, "American Anthropologist": John R. Swanton (1911, 1921–1923), Robert Lowie (1924–1933), Leslie Spier (1934–1938), and Melville Herskovits (1950–1952). Edward Sapir's student John Alden Mason was editor from 1945 to 1949, and Alfred Kroeber and Robert Lowie's student, Walter Goldschmidt, was editor from 1956 to 1959.
Most of Boas's students shared his concern for careful, historical reconstruction, and his antipathy towards speculative, evolutionary models. Moreover, Boas encouraged his students, by example, to criticize themselves as much as others. For example, Boas originally defended the cephalic index (systematic variations in head form) as a method for describing hereditary traits, but came to reject his earlier research after further study; he similarly came to criticize his own early work in Kwakiutl (Pacific Northwest) language and mythology.
Encouraged by this drive to self-criticism, as well as the Boasian commitment to learn from one's informants and to let the findings of one's research shape one's agenda, Boas's students quickly diverged from his own research agenda. Several of his students soon attempted to develop theories of the grand sort that Boas typically rejected. Kroeber called his colleagues' attention to Sigmund Freud and the potential of a union between cultural anthropology and psychoanalysis. Ruth Benedict developed theories of "culture and personality" and "national cultures", and Kroeber's student Julian Steward developed theories of "cultural ecology" and "multilineal evolution".
Nevertheless, Boas has had an enduring influence on anthropology. Virtually all anthropologists today accept Boas's commitment to empiricism and his methodological cultural relativism. Moreover, virtually all cultural anthropologists today share Boas's commitment to field research involving extended residence, learning the local language, and developing social relationships with informants. Finally, anthropologists continue to honor his critique of racial ideologies. In his 1963 book, "Race: The History of an Idea in America", Thomas Gossett wrote that "It is possible that Boas did more to combat race prejudice than any other person in history."
Franz Bopp
Franz Bopp (; 14 September 1791 – 23 October 1867) was a German linguist known for extensive and pioneering comparative work on Indo-European languages.
Bopp was born in Mainz, but the political disarray in the Republic of Mainz caused his parents' removal to Aschaffenburg, the second seat of the Archbishop of Mainz. There he received a liberal education at the Lyceum and Karl Joseph Hieronymus Windischmann drew his attention to the languages and literature of the East. (Windischmann, along with Georg Friedrich Creuzer, Joseph Görres, and the brothers Schlegel, expressed great enthusiasm for Indian wisdom and philosophy.) Moreover, Karl Wilhelm Friedrich von Schlegel's book, "Über die Sprache und Weisheit der Indier" ("On the Speech and Wisdom of the Indians", Heidelberg, 1808), had just begun to exert a powerful influence on the minds of German philosophers and historians, and stimulated Bopp's interest in the sacred language of the Hindus.
In 1812, he went to Paris at the expense of the Bavarian government, with a view to devoting himself vigorously to the study of Sanskrit. There he enjoyed the society of such eminent men as Antoine-Léonard de Chézy (his primary instructor), Silvestre de Sacy, Louis Mathieu Langlès, and, above all, Alexander Hamilton (1762–1824), cousin of the American statesman of the same name, who had acquired an acquaintance with Sanskrit when in India and had brought out, along with Langlès, a descriptive catalogue of the Sanskrit manuscripts of the Imperial Library.
In the library, Bopp had access not only to the rich collection of Sanskrit manuscripts (mostly brought from India by Jean François Pons in the early 18th century), but also to the Sanskrit books that had been issued from the Calcutta and Serampore presses. He spent five years of laborious study, almost living in the libraries of Paris and unmoved by the turmoils that agitated the world around him, including Napoleon's escape, the Waterloo campaign and the Restoration.
The first paper from his years of study in Paris appeared in Frankfurt am Main in 1816, under the title of "Über das Konjugationssystem der Sanskritsprache in Vergleichung mit jenem der griechischen, lateinischen, persischen und germanischen Sprache (On the Conjugation System of Sanskrit in comparison with that of Greek, Latin, Persian and Germanic)", to which Windischmann contributed a preface. In this first book, Bopp entered at once the path on which he would focus the philological researches of his whole subsequent life. His task was not to point out the similarity of Sanskrit with Persian, Greek, Latin or German, for previous scholars had long established that; rather, he aimed to trace the postulated common origin of the languages' grammatical forms, and of their inflections as arising from composition. This was something no predecessor had attempted. By a historical analysis of those forms, as applied to the verb, he furnished the first trustworthy materials for a history of the languages compared.
After a brief sojourn in Germany, Bopp travelled to London where he made the acquaintance of Sir Charles Wilkins and Henry Thomas Colebrooke. He also became friends with Wilhelm von Humboldt, the Prussian ambassador at the Court of St. James's, to whom he taught Sanskrit. He brought out, in the "Annals of Oriental Literature" (London, 1820), an essay entitled "Analytical Comparison of the Sanskrit, Greek, Latin and Teutonic Languages" in which he extended to all parts of grammar what he had done in his first book for the verb alone. He had previously published a critical edition, with a Latin translation and notes, of the story of Nala and Damayanti (London, 1819), the most beautiful episode of the "Mahabharata". Other episodes of the "Mahabharata", "Indralokâgama", and three others (Berlin, 1824); "Diluvium", and three others (Berlin, 1829); a new edition of Nala (Berlin, 1832) followed in due course, all of which, with August Wilhelm von Schlegel's edition of the "Bhagavad Gita" (1823), proved excellent aids in initiating the early student into the reading of Sanskrit texts. On the publication, in Calcutta, of the whole "Mahabharata", Bopp discontinued editing Sanskrit texts and confined himself thenceforth exclusively to grammatical investigations.
After a short residence at Göttingen, Bopp gained, on the recommendation of Humboldt, appointment to the chair of Sanskrit and comparative grammar at Berlin in 1821, which he occupied for the rest of his life. He also became a member of the Royal Prussian Academy the following year.
In 1827, he published his "Ausführliches Lehrgebäude der Sanskritsprache" ("Detailed System of the Sanskrit Language"), on which he had worked since 1821. He began work on a new edition, in Latin, the following year, completing it in 1832; a shorter grammar appeared in 1834. At the same time he compiled a "Sanskrit and Latin Glossary" (1830), in which, more especially in the second and third editions (1847 and 1868–71), he also took account of the cognate languages. His chief activity, however, centered on the elaboration of his "Comparative Grammar", which appeared in six parts at considerable intervals (Berlin, 1833, 1835, 1842, 1847, 1849, 1852), under the title "Vergleichende Grammatik des Sanskrit, Zend, Griechischen, Lateinischen, Litthauischen, Altslawischen, Gotischen und Deutschen" ("Comparative Grammar of Sanskrit, Zend [Avestan], Greek, Latin, Lithuanian, Old Slavonic, Gothic and German").
How carefully Bopp matured this work emerges from the series of monographs printed in the "Transactions of the Berlin Academy" (1824–1831), which preceded it. They bear the general title "Vergleichende Zergliederung des Sanskrits und der mit ihm verwandten Sprachen" ("Comparative Analysis of Sanskrit and its related Languages"). Two other essays (on the "Numerals", 1835) followed the publication of the first part of the "Comparative Grammar". Old Slavonic began to take its stand among the languages compared from the second part onwards. E. B. Eastwick translated the work into English in 1845. A second German edition, thoroughly revised (1856–1861), also covered Old Armenian.
In his "Comparative Grammar" Bopp set himself a threefold task:
The first and second points remained dependent upon the third. As Bopp based his research on the best available sources and incorporated every new item of information that came to light, his work continued to widen and deepen in the making, as can be witnessed from his monographs on the vowel system in the Teutonic languages (1836), on the Celtic languages (1839), on the Old Prussian (1853) and Albanian languages ("Über das Albanesische in seinen verwandtschaftlichen Beziehungen", Vienna, 1854), on the accent in Sanskrit and Greek (1854), on the relationship of the Malayo-Polynesian to the Indo-European languages (1840), and on the Caucasian languages (1846). In the last two, the impetus of his genius led him on a wrong track. He was the first philologist to prove Albanian to be a separate branch of Indo-European. Bopp was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1855.
Critics have charged Bopp with neglecting the study of the native Sanskrit grammars, but in those early days of Sanskrit studies, the great libraries of Europe did not hold the requisite materials; if they had, those materials would have demanded his full attention for years, and such grammars as those of Charles Wilkins and Henry Thomas Colebrooke, from which Bopp derived his grammatical knowledge, had all used native grammars as a basis. The further charge that Bopp, in his "Comparative Grammar", gave undue prominence to Sanskrit is disproved by his own words; for, as early as 1820, he gave it as his opinion that frequently, the cognate languages serve to elucidate grammatical forms lost in Sanskrit ("Annals of Or. Lit." i. 3), which he further developed in all his subsequent writings.
The "Encyclopædia Britannica" (11th edition of 1911) assesses Bopp and his work as follows:
English scholar Russell Martineau, who had studied under Bopp, gave the following tribute:
Martineau also wrote:
| https://en.wikipedia.org/wiki?curid=11699 |
Full Metal Jacket
Full Metal Jacket is a 1987 war film directed, co-written, and produced by Stanley Kubrick and starring Matthew Modine, R. Lee Ermey, Vincent D'Onofrio and Adam Baldwin. The screenplay by Kubrick, Michael Herr, and Gustav Hasford was based on Hasford's novel "The Short-Timers" (1979). The storyline follows a platoon of U.S. Marines through their boot camp training at Marine Corps Recruit Depot Parris Island, South Carolina, primarily focusing on two privates, Joker and Pyle, who struggle under their abusive drill instructor, Gunnery Sergeant Hartman, and the experiences of two of the platoon's Marines in the Vietnamese cities of Da Nang and Huế during the Tet Offensive of the Vietnam War. The film's title refers to the full metal jacket bullet used by military servicemen. The film was released in the United States on June 26, 1987.
"Full Metal Jacket" received critical acclaim and an Oscar nomination for Best Adapted Screenplay for Kubrick, Herr, and Hasford. In 2001, the American Film Institute placed it at No. 95 in their "AFI's 100 Years...100 Thrills" poll.
During the United States' involvement in the Vietnam War, a group of boot camp recruits arrive at Parris Island. The ruthless drill instructor, Hartman, employs forceful methods to turn the recruits into combat-ready Marines. Among the recruits is the overweight and dim-witted Leonard Lawrence, whom Hartman nicknames "Gomer Pyle", as well as the wisecracking J.T. Davis, who receives the name "Joker" after interrupting Hartman's speech with an impression of John Wayne.
When Pyle shows ineptitude in basic training, Hartman pairs him with Joker. Under Joker's supervision, Pyle starts to improve, but Hartman discovers a contraband jelly doughnut in Pyle's unlocked foot locker. Blaming the platoon for Pyle's infractions, Hartman adopts a collective punishment policy: he will punish the entire platoon, except for Pyle, for every mistake he makes. One night, the recruits haze Pyle with a blanket party in which Joker reluctantly participates. Following this, Pyle seems to reinvent himself as a model recruit, showing particular expertise in marksmanship. This impresses Hartman but worries Joker, who notices Pyle talking to his rifle and believes he may be suffering a mental breakdown.
The recruits graduate and receive their Military Occupational Specialty assignments. Joker is assigned to Military Journalism, while most of the others – including Pyle – are assigned to Infantry. During the platoon's final night on Parris Island, Joker discovers Pyle in the head loading his rifle and executing drill commands, loudly reciting the Rifleman's Creed. This awakens the platoon and Hartman, who confronts Pyle and orders him to surrender the rifle. Pyle shoots Hartman dead and then commits suicide, while Joker helplessly watches in horror.
In January 1968, Joker – now a sergeant – is a war correspondent in Da Nang, South Vietnam for "Stars and Stripes" with Private First Class Rafterman, a combat photographer. Rafterman wants to go into combat, as Joker claims he has. At the Marine base, Joker is mocked for his lack of the thousand-yard stare, indicating his lack of war experience. They are interrupted by the start of the Tet Offensive as the North Vietnamese Army unsuccessfully attempts to overrun the base.
The following day, the journalism staff is briefed about enemy attacks throughout South Vietnam. Joker is sent to Phu Bai, accompanied by Rafterman. They meet the Lusthog Squad, where Joker is reunited with Cowboy, with whom he had gone through basic training. Joker accompanies the squad during the Battle of Huế, where platoon commander "Touchdown" is killed by the enemy. After the Marines declare the area secure, a team of American news journalists and reporters enters Huế to interview various Marines about their experiences in Vietnam and their opinions about the war.
While patrolling Huế, Crazy Earl, the squad leader, is killed by a booby trap, leaving Cowboy in command. The squad becomes lost, and Cowboy orders Eightball to scout the area. A Viet Cong sniper wounds Eightball and Doc Jay, the squad Corpsman. Believing that the sniper is drawing the squad into an ambush, Cowboy attempts to radio in tank support to no avail. The squad's machine gunner, Animal Mother, disobeys Cowboy's orders to retreat and attempts to save his comrades. He discovers there is only one sniper, but Doc Jay and Eightball are killed when Doc Jay attempts to indicate the sniper's location. While radioing for support, Cowboy is fatally shot through a gap in a building.
Animal Mother assumes command of the squad and leads an attack on the sniper. Joker discovers the sniper, a teenage girl, and attempts to shoot her, but his rifle jams and alerts her to his presence. Rafterman shoots the sniper, mortally wounding her. As the squad converges, the sniper begs the squad to shoot her, prompting an argument about whether to kill her or leave her to suffer. Animal Mother decides to allow a mercy killing only if Joker performs it. After some hesitation, Joker shoots her. The Marines congratulate him on his kill as Joker stares into the distance. The Marines march toward their camp, singing the "Mickey Mouse March". Joker states in narration that despite being "in a world of shit", he is glad to be alive and is no longer afraid.
Additional characters include Ed O'Ross as Lieutenant Walter J. "Touchdown" Schinoski, the first platoon leader of the Lusthog Squad, John Terry as Lieutenant Lockhart, the editor of "Stars and Stripes", Bruce Boa as Poge Colonel, the colonel who dresses down Joker for wearing a peace symbol on his lapel. Stanley Kubrick and his daughter Vivian make uncredited appearances as two photographers at a Vietnam massacre site.
Kubrick contacted Michael Herr, author of the Vietnam War memoir "Dispatches" (1977), in the spring of 1980 to discuss working on a film about the Holocaust, but he eventually discarded that in favor of a film about the Vietnam War. They met in England, and the director told Herr that he wanted to do a war film but had yet to find a story to adapt. Kubrick discovered Gustav Hasford's novel "The Short-Timers" (1979) while reading the "Virginia Kirkus Review." Herr received it in bound galleys and thought that it was a masterpiece. In 1982, Kubrick read the novel twice, concluding that it "was a unique, absolutely wonderful book", and decided, along with Herr, to adapt it for his next film. According to Kubrick, he was drawn to the book's dialogue, finding it "almost poetic in its carved-out, stark quality". In 1983, Kubrick began conducting research for the film, watching past footage and documentaries, reading Vietnamese newspapers on microfilm from the Library of Congress, and studying hundreds of photographs from the era. Initially, Herr was not interested in revisiting his Vietnam War experiences, and Kubrick spent three years persuading him to participate in what the author describes as "a single phone call lasting three years, with interruptions".
In 1985, Kubrick contacted Hasford to work on the screenplay with him and Herr, and often talked to Hasford on the phone three to four times a week, for hours at a time. Kubrick had already written a detailed treatment, and Kubrick and Herr got together at Kubrick's home every day, breaking down the treatment into scenes. From that, Herr wrote the first draft. The filmmaker worried that the book's title might be misread by audiences as referring to people who only did half a day's work and changed it to "Full Metal Jacket" after discovering the phrase while going through a gun catalogue. After the first draft was completed, Kubrick phoned his orders to Hasford and Herr, and Hasford and Herr mailed their submissions to him. Kubrick read and edited them, and then the team repeated the process. Neither Hasford nor Herr knew how much he had contributed to the screenplay, which led to a dispute over the final credits. Hasford remembers, "We were like guys on an assembly line in the car factory. I was putting on one widget and Michael was putting on another widget and Stanley was the only one who knew that this was going to end up being a car." Herr says the director was not interested in making an anti-war film, but "he wanted to show what war is like".
At some point, Kubrick wanted to meet Hasford in person, but Herr advised against this, describing "The Short-Timers" author as a "scary man" and believing he and Kubrick would not "get on". Nonetheless, Kubrick insisted, and they all met at Kubrick's house in England for dinner. It did not go well, and Hasford did not meet with Kubrick again.
Through Warner Bros., Kubrick advertised a national casting search in the United States and Canada. The director used videotape to audition actors and received over 3,000 submissions. His staff screened all of the tapes, leaving 800 of them for Kubrick to review personally.
Former U.S. Marine Drill Instructor Ermey, originally hired as a technical advisor, asked Kubrick if he could audition for the role of Hartman. Kubrick had seen Ermey's portrayal of drill instructor Staff Sergeant Loyce in "The Boys in Company C" (1978) and told the Marine that he was not vicious enough to play the character. Ermey improvised insulting dialogue against a group of Royal Marines who were being considered for the part of background Marines, to demonstrate his ability to play the character, as well as to show how a drill instructor goes about breaking down the individuality of new recruits. Upon viewing the videotape of these sessions, Kubrick gave Ermey the role, realizing he "was a genius for this part". Kubrick also incorporated the 250-page transcript of Ermey's rants into the script. Ermey's experience as a drill instructor during the Vietnam era proved invaluable. Kubrick estimated that Ermey wrote 50% of his own dialogue, especially the insults.
While Ermey practiced his lines in a rehearsal room, Kubrick's assistant Leon Vitali would throw tennis balls and oranges at him. Ermey had to catch the ball and throw it back as quickly as possible, while at the same time saying his lines as fast as he could. Any hesitation, slur, or missed line would necessitate starting over. Twenty error-free runs were required. "[He] was my drill instructor", Ermey said of Vitali.
The original casting plan envisaged Anthony Michael Hall starring as Private Joker. After eight months of negotiations, a deal between Kubrick and Hall fell through. Kubrick offered Bruce Willis a role, but the actor had to turn it down because he was to start filming the first six episodes of his TV series "Moonlighting".
Kubrick shot the film in England: in Cambridgeshire, on the Norfolk Broads, and at the former Millennium Mills, Beckton Gas Works, Newham (east London) and the Isle of Dogs. A former Royal Air Force station and then British Army base, Bassingbourn Barracks doubled as the Parris Island Marine boot camp. A British Army rifle range near Barton, outside Cambridge, was used in the scene where Hartman congratulates Private Pyle for his shooting skills. Kubrick worked from still photographs of Huế, taken in 1968, and found an area owned by British Gas that closely resembled it and was scheduled to be demolished. The disused Beckton Gas Works, a few miles from central London, were filmed to represent Huế after attacks. Kubrick had buildings blown up, and the film's art director used a wrecking ball to knock specific holes in certain buildings over the course of two months. Originally, Kubrick had a plastic replica jungle flown in from California, but once he looked at it he was reported to have said, "I don't like it. Get rid of it". The open country was filmed in the Cliffe marshes, and along the River Thames, supplemented with 200 imported Spanish palm trees and 100,000 plastic tropical plants from Hong Kong.
Kubrick acquired four M41 tanks from a Belgian army colonel who was an admirer of the director's work, and Westland Wessex helicopters painted Marine green to represent Marine Corps Sikorsky H-34 Choctaw helicopters. Although the Wessex was a licensed derivative of the Sikorsky H-34, the Wessex substituted two gas turbine engines for the H-34's radial (piston) engine. This resulted in a much longer and less rounded nose than that of the Vietnam era H-34. Kubrick also obtained a selection of rifles, M79 grenade launchers, and M60 machine guns from a licensed weapons dealer.
Modine described the shoot as difficult: Beckton Gas Works was a toxic and environmental nightmare for the entire film crew. Asbestos and hundreds of other chemicals poisoned the ground and air. Modine documents details of shooting at Beckton in his book, "Full Metal Jacket Diary" (2005). During the boot camp sequence of the film, Modine and the other recruits had to endure the rigors of Marine Corps training, including having Ermey yelling at them for 10 hours a day during the shooting of the Parris Island scenes. To ensure the actors' reactions to Ermey were as authentic and fresh as possible, Ermey and the recruits did not rehearse together. For film continuity, each recruit had to have his head shaved once a week.
At one point during filming, Ermey had a car accident, broke all of his ribs on one side, and was out for four-and-a-half months.
Cowboy's death scene shows a building in the background that resembles the famous alien monolith in Kubrick's "2001: A Space Odyssey" (1968). Kubrick described the resemblance as an "extraordinary accident".
During filming, Hasford contemplated taking legal action over the writing credits. Originally, the filmmakers intended for Hasford to receive an "additional dialogue" credit, but he fought for and eventually received full credit. The writer took two friends and visited the set dressed as extras, only to be mistaken by a crew member for Herr when Hasford identified himself as the writer whose work the film was based on.
Kubrick's daughter Vivian—who appears uncredited as a news-camera operator at the mass grave—shadowed the filming of "Full Metal Jacket." She shot 18 hours of behind-the-scenes footage for a potential "making-of" documentary similar to her earlier film documentary on Kubrick's "The Shining" (1980), but in this case did not make the film. Snippets of her work can be seen in the documentary "Stanley Kubrick's Boxes" (2008).
Compared to Kubrick's other works, the themes of "Full Metal Jacket" have received little attention from critics and reviewers. Michael Pursell's essay ""Full Metal Jacket": The Unravelling of Patriarchy" (1988) was an early, in-depth consideration of the film's two-part structure and its criticism of masculinity, arguing that the film shows "war and pornography as facets of the same system".
Most reviews have focused on military brainwashing themes in the boot camp training section of the film, while seeing the latter half of the film as more confusing and disjointed in content. Rita Kempley of "The Washington Post" wrote, "it's as if they borrowed bits of every war movie to make this eclectic finale." Roger Ebert said, "The movie disintegrates into a series of self-contained set pieces, none of them quite satisfying." Julian Rice, in his book "Kubrick's Hope" (2008), sees the second part of the film as continuing the psychic journey of Joker in trying to come to grips with human evil.
Tony Lucia, in his July 5, 1987, review of "Full Metal Jacket" for the "Reading Eagle", looked at the themes of Kubrick's career, suggesting "the unifying element may be the ordinary man dwarfed by situations too vast and imposing to handle". Lucia specifically refers to the "military mentality" in this film. He said further that the theme covered "a man testing himself against his own limitations", and he concluded: ""Full Metal Jacket" is the latest chapter in an ongoing movie which is not merely a comment on our time or a time past, but on something that reaches beyond."
British critic Gilbert Adair wrote: "Kubrick's approach to language has always been reductive and uncompromisingly deterministic in nature. He appears to view it as the exclusive product of environmental conditioning, only very marginally influenced by concepts of subjectivity and interiority, by all the whims, shades and modulations of personal expression".
Michael Herr wrote of his work on the screenplay: "The substance was single-minded, the old and always serious problem of how you put into a film or a book the living, behaving presence of what Jung called The Shadow, the most accessible of archetypes, and the easiest to experience ... War is the ultimate field of Shadow-activity, where all of its other activities lead you. As they expressed it in Vietnam, 'Yea, though I walk through the Valley of the Shadow of Death, I will fear no Evil, for I am the Evil'."
In a 2009 review, Dan Schneider alleged that Kubrick took the cinematic idea of a recruit being broken down in boot camp and driven to suicide from the epic film series "The Human Condition" (1959–1961).
Kubrick's daughter Vivian Kubrick, under the alias "Abigail Mead", wrote the film's score. According to an interview in the January 1988 issue of "Keyboard", the film was scored mostly with a Fairlight CMI synthesizer (the then-current Series III edition) and a Synclavier. For the period music, Kubrick went through the "Billboard" Top 100 list for each year from 1962 to 1968 and tried many songs, but "sometimes the dynamic range of the music was too great, and we couldn't work in dialogue".
A single "Full Metal Jacket (I Wanna Be Your Drill Instructor)", credited to Mead and Nigel Goulding, was released to promote the film. It incorporates Ermey's drill cadences from the film. The single reached number two in the UK pop charts.
"Full Metal Jacket" received a limited release on June 26, 1987, in 215 theaters. Its opening weekend saw it accrue $2,217,307, an average of $10,313 per theater, ranking it the number 10 film for the June 26–28 weekend. It took a further $2,002,890 for a total of $5,655,225 before entering wide release on July 10, 1987, at 881 theaters—an increase of 666. The July 10–12 weekend saw the film gross $6,079,963, an average of $6,901 per theater, and rank as the number 2 grossing film. Over the next four weeks the film opened in a further 194 theaters to its widest release of 1,075 theaters before closing two weeks later with a total gross of $46,357,676, making it the number 23 highest-grossing film of 1987. In total, the film grossed $120 million worldwide.
The film was released on Blu-ray on October 23, 2007, in the US and other countries. Warner Home Video released the 25th anniversary edition on Blu-ray on August 7, 2012.
Review aggregation website Rotten Tomatoes retrospectively collected reviews to give the film a score of 91% based on reviews from 79 critics and an average rating of 8.31/10. The summary states, "Intense, tightly constructed, and darkly comic at times, Stanley Kubrick's "Full Metal Jacket" may not boast the most original of themes, but it is exceedingly effective at communicating them." Another aggregator, Metacritic, gave it a score of 76 out of 100, which indicates a "generally favorable" response, based on 19 reviews. Reviewers generally reacted favorably to the cast, Ermey in particular, and the film's first act in recruit training, but several reviews were critical of the latter part of the film set in Vietnam and what was considered a "muddled" moral message in the finale. It is also ranked on AFI's "100 Years...100 Thrills" list.
Richard Corliss of "Time" called the film a "technical knockout", praising "the dialogue's wild, desperate wit; the daring in choosing a desultory skirmish to make a point about war's pointlessness", and "the fine, large performances of almost every actor", believing, at the time, that Ermey and D'Onofrio would receive Oscar nominations. Corliss also appreciated "the Olympian elegance and precision of Kubrick's filmmaking". "Empire"'s Ian Nathan awarded the film 3 out of 5 stars, saying it is "inconsistent" and describing it as "both powerful and frustratingly unengaged". Nathan felt that after leaving the opening act following the recruit training, the film becomes "bereft of purpose", but he summarized his review by calling it a "hardy Kubrickian effort that warms on you with repeated viewings". Nathan also praised Ermey's "staggering performance". Vincent Canby of "The New York Times" called it "harrowing, beautiful and characteristically eccentric". Canby echoed praise for Ermey, calling him "the film's stunning surprise ... he's so good—so obsessed—that you might think he wrote his own lines". Canby also said D'Onofrio's performance should be admired, and he called Modine "one of the best, most adaptable young film actors of his generation". Canby concluded: "Full Metal Jacket" was "a film of immense and very rare imagination".
Jim Hall, writing for Film4 in 2010, awarded the film 5 out of 5 stars and added to the praise for Ermey, saying his "performance as the foul-mouthed Hartman is justly celebrated and it's difficult to imagine the film working anything like as effectively without him". The review also preferred the opening training to the later Vietnam sequence, calling it "far more striking than the second and longer section". Film4 commented that the film ends abruptly but felt "it demonstrates just how clear and precise the director's vision could be when he resisted a fatal tendency for indulgence". Film4 concluded: ""Full Metal Jacket" ranks with "Dr. Strangelove" as one of Kubrick's very best." Jonathan Rosenbaum of the "Chicago Reader" called it "Elliptical, full of subtle inner rhymes … and profoundly moving, this is the most tightly crafted Kubrick film since "Dr. Strangelove", as well as the most horrific." "Variety" called the film an "intense, schematic, superbly made" drama "loaded with vivid, outrageously vulgar military vernacular that contributes heavily to the film's power", but felt that it never develops "a particularly strong narrative." The cast performances were all labeled "exceptional" with Modine being singled out as "embodying both what it takes to survive in the war and a certain omniscience."
Not all reviews were positive. "Chicago Sun-Times" critic Roger Ebert held a dissenting view, calling the film "strangely shapeless" and awarding it 2.5 stars out of 4. Ebert called it "one of the best-looking war movies ever made on sets and stage" but felt this was not enough to compete with the "awesome reality of "Platoon", "Apocalypse Now" and "The Deer Hunter"." Ebert also criticized the film's second act set in Vietnam, saying the "movie disintegrates into a series of self-contained set pieces, none of them quite satisfying" and concluded that the film's message was "too little and too late", having been done by other Vietnam War films. However, Ebert also gave praise to Ermey and D'Onofrio, saying "these are the two best performances in the movie, which never recovers after they leave the scene." This review angered Gene Siskel on their television show "At The Movies"; he criticized Ebert for liking "Benji the Hunted" (which came out the same week) more than "Full Metal Jacket". Their difference in opinion was parodied on the television show "The Critic", where Siskel taunts Ebert with "coming from the guy who liked "Benji the Hunted"!" "Time Out London" also disliked the film, saying "Kubrick's direction is as steely cold and manipulative as the régime it depicts", and felt that the characters were underdeveloped, adding "we never really get to know, let alone care about, the hapless recruits on view."
British television channel Channel 4 voted it number 5 on its list of the greatest war films ever made. In 2008, "Empire" placed "Full Metal Jacket" number 457 on its list of "The 500 Greatest Movies of All Time".
"Full Metal Jacket" was nominated for eleven awards worldwide between 1987 and 1989 including an Academy Award for Best Adapted Screenplay, two BAFTA Awards for Best Sound and Best Special Effects, and a Golden Globe for Best Supporting Actor for Ermey. Ultimately it won five awards, three from organisations outside of the United States: Italy, Japan, and the United Kingdom. The film won Best Foreign Language Film from the Japanese Academy, Best Producer from the David di Donatello Awards, Director of the Year from the London Critics Circle Film Awards, and Best Director and Best Supporting Actor from the Boston Society of Film Critics Awards, for Kubrick and Ermey respectively. Of the five awards won, four were awarded to Kubrick.
Film scholar Greg Jenkins has done a detailed analysis of the adaptation of the novel as a screenplay. The novel is in three parts. The film greatly expands the relatively brief section in Part I, about the boot camp on Parris Island, and essentially discards Part III. This gives the film a twofold structure, telling two largely independent stories connected by the same characters acting in each. Jenkins believes this structure is a development of concepts that Kubrick has had since the 1960s. At that time, Kubrick talked about wanting to explode the usual conventions of narrative structure.
Sergeant Hartman (renamed from the book's Gerheim) has an expanded role in the film. In the film, Private Pyle's incompetence is presented as weighing negatively on the rest of the platoon and, unlike in the novel, he is the only under-performing recruit. The film omits Hartman's disclosure to other troops that he thinks Pyle might be mentally unstable, a "Section 8"; instead, Joker voices this suspicion in the scene where he mops the bathroom with Cowboy. In contrast, Hartman praises Pyle, saying that he is "born again hard". Jenkins says that the character of Hartman could not have been portrayed as having a warmer social relationship with the troops, as that would have upset the balance of the film, which depends on the spectacle of ordinary soldiers coming to grips with Hartman as a force of nature embodying a killer culture.
Various episodes in the book have been cut from the screenplay or conflated with others. For example, Cowboy's introduction of the "Lusthog Squad" has been both markedly shortened and supplemented by material from other sections of the book. Although the book's final, third section was largely dropped, elements from this section were inserted into other episodes of the film. For instance, the climactic episode with the sniper is a conflation of two episodes in the book, from Parts II and III. Jenkins thinks the film presents this passage more dramatically but in less gruesome detail than in the novel.
The film often has a more tragic tone than the book, which relies on callous humor. Joker in the film remains a model of humane thinking, as evidenced by his moral struggle in the sniper episode and elsewhere. He works to overcome his own meekness, rather than to compete with other Marines. The film omits the book's showing his eventual domination over Animal Mother.
The film also omits the death of the character Rafterman. Jenkins believed this allowed viewers to reflect on Rafterman's personal growth in the film and speculate on his future growth after the war. Jenkins also believed Rafterman's death would not have fit the plot of the screenplay.
The line of dialog "Me so horny. Me love you long time," uttered by the Da Nang street prostitute (played by Papillon Soo Soo) to Joker (Modine) became a catchphrase in popular culture after it was sampled by rap artists 2 Live Crew in their 1990 hit "Me So Horny" and by Sir Mix-A-Lot in "Baby Got Back" (1992). | https://en.wikipedia.org/wiki?curid=11701 |
Franklin J. Schaffner
Franklin James Schaffner (May 30, 1920July 2, 1989) was an American film, television, and stage director. He won the Academy Award for Best Director for "Patton" (1970), and is also known for the films "Planet of the Apes" (1968), "Nicholas and Alexandra" (1971), "Papillon" (1973), and "The Boys from Brazil" (1978). He served as President of the Directors Guild of America between 1987 and 1989.
Schaffner was born in Tokyo, Japan, the son of American missionaries Sarah Horting (née Swords) and Paul Franklin Schaffner, and was raised in Japan.
He returned to the United States and graduated from Franklin & Marshall College in Lancaster, Pennsylvania, where he was active in drama. He studied law at Columbia University in New York City, but his education was interrupted by service with the United States Navy in World War II, during which he served with American amphibious forces in Europe and North Africa. In the latter stages of the war he was sent to the Far East to serve with the United States Office of Strategic Services.
Schaffner returned to the United States after the war. He worked for a world peace organization, then as an assistant director for the documentary film series "The March of Time". He became a director in the news and public affairs department of CBS television, where his jobs included covering sports, beauty pageants, and public-service programs.
In 1950 he directed "The Traitor", the first episode of "Ford Theatre". He also directed adaptations of "Alice in Wonderland" and "Treasure Island".
He directed "Thunder on Sycamore Street" by Reginald Rose for "Studio One". He and Rose reunited on "Twelve Angry Men" which won Schaffner an Emmy for Best Director.
The following year Schaffner earned another Emmy for his work on the 1955 TV adaptation of the Broadway play, "The Caine Mutiny Court Martial", shown on the anthology series "Ford Star Jubilee".
Schaffner became one of three regular directors on the "Kaiser Aluminium Hour" the others being George Roy Hill and Fielder Cook. He was also a regular director on "Playhouse 90".
He was the original director on the series, "The Defenders", created by Rose. Schaffner's work earned him another Emmy.
In 1960, he directed Allen Drury's stage play "Advise and Consent". This earned him the Best Director recognition in the Variety Critics Poll.
In network television, Schaffner also received widespread critical acclaim in 1962 for his groundbreaking collaboration with First Lady of the United States Jacqueline Kennedy and CBS television's musical director Alfredo Antonini on "A Tour of the White House with Mrs. John F. Kennedy", a television special broadcast to over 80 million viewers worldwide.
Schaffner's contributions to this production earned him a 1963 nomination from the Directors Guild of America for its award for Outstanding Directorial Achievement in Television.
In January 1960 Schaffner signed a multi-picture deal with Columbia Pictures.
In May 1961 he signed to make "A Summer Place" at Fox with Fabian and Dolores Hart. The film was not made. Schaffner directed "The Good Years" (1962) for TV with Henry Fonda and Lucille Ball. Other TV work included "The Great American Robbery".
Instead, Schaffner's first motion picture was "The Stripper" (1963), made at Fox from a play by William Inge and starring Richard Beymer and Joanne Woodward. The film was well received critically, though not a large commercial success.
He continued to work for TV including "The Legend of Lylah Clare".
Schaffner later made "The Best Man" (1964) based on a play by Gore Vidal and "The War Lord" (1965), based on a play by Leslie Stevens, with Charlton Heston. In a 1966 interview he said "as you mature you learn that the story is the most important thing." He announced various films for Columbia - "The Day Lincoln was Shot", "The Whistle Blows for Victory" and "The Green Beret" - but they were not made.
He went to Britain to make "The Double Man" (1967) with Yul Brynner, a film Schaffner admitted he did for the money.
Schaffner had a huge critical and commercial hit in "Planet of the Apes" (1968) starring Heston at Fox.
In December 1968 Schaffner signed a non-exclusive three-picture deal with Columbia.
His next film was for 20th Century Fox, however: "Patton" (1970), a biopic of General Patton starring George C. Scott. It was a major success for which Schaffner won the Academy Award for Best Director and the Directors Guild of America Award for Best Director.
He made "Nicholas and Alexandra" (1971) for producer Sam Spiegel. It was an expensive box office failure. Schaffner followed it with "Papillon" (1973) a $14 million epic with Steve McQueen and Dustin Hoffman which was a considerable financial success. In 1971 he said his films "are almost always about people who are out of their time and place."
Schaffner intended to follow "Papillon" with "Dynasty of Western Outlaws", about outlaws over the years in Missouri from a script by John Gay, and an adaptation of "The French Lieutenant's Woman". He ended up making neither: "Dynasty" was never produced, and "The French Lieutenant's Woman" was filmed a decade later by another director.
Schaffner reunited with George C. Scott in "Islands in the Stream" (1977), based on the novel by Ernest Hemingway. He then did "The Boys from Brazil" (1978) based on a novel by Ira Levin with Gregory Peck.
His later films included "Sphinx" (1981), a $10 million thriller set in Egypt based on a novel by Robin Cook and produced by Stanley O'Toole, who had made "The Boys from Brazil" with Schaffner. It was a commercial failure, as was "Yes, Giorgio" (1982), a musical comedy starring Luciano Pavarotti.
Schaffner's last films were "Lionheart" (1987) and "Welcome Home" (1989).
Schaffner was President of the Directors Guild of America from 1987 until his death in 1989.
Jerry Goldsmith composed the music for seven of his films: "The Stripper", "Planet of the Apes", "Patton", "Papillon", "Islands in the Stream", "The Boys from Brazil" and "Lionheart". Four of them were nominated for the Academy Award for Best Original Score.
Schaffner worked twice each with actors Charlton Heston and Maurice Evans ("The War Lord"; "Planet of the Apes"), George C. Scott ("Patton"; "Islands in the Stream"), and Laurence Olivier ("Nicholas and Alexandra"; "The Boys from Brazil").
Schaffner married Helen Jane Gilchrist in 1948; she died in 2007. The couple had two children, Jennie and Kate.
Schaffner died on July 2, 1989, at the age of 69. He was released 10 days before his death from a hospital where he was being treated for lung cancer.
Screenwriter William Goldman identified Schaffner in 1981 as being one of the three best directors (then living) at handling "scope" (a gift for screen epics) in films. The other two were David Lean and Richard Attenborough.
In 1991 Schaffner's widow Jean established the Franklin J. Schaffner Alumni Medal (colloquially known as the Franklin J. Schaffner Award), which is awarded by the American Film Institute at its annual ceremony to an alumnus of either the AFI Conservatory or the AFI Conservatory Directing Workshop for Women who best embodies the qualities of the late director: talent, taste, dedication and commitment to quality filmmaking.
The moving image collection of Franklin J. Schaffner is held at the Academy Film Archive. | https://en.wikipedia.org/wiki?curid=11705 |
Finch
The true finches are small to medium-sized passerine birds in the family Fringillidae. Finches have stout conical bills adapted for eating seeds and nuts and often have colourful plumage. They occupy a great range of habitats where they are usually resident and do not migrate. They have a worldwide distribution except for Australia and the polar regions. The family Fringillidae contains more than two hundred species divided into fifty genera. It includes species known as siskins, canaries, redpolls, serins, grosbeaks and euphonias.
Many birds in other families are also commonly called "finches". These groups include: the estrildid finches (Estrildidae) of the Old World tropics and Australia; some members of the Old World bunting family (Emberizidae) and the American sparrow family (Passerellidae); and the Darwin's finches of the Galapagos islands, now considered members of the tanager family (Thraupidae).
Finches and canaries were used in the coal mining industry in the UK, Canada, and the USA to detect carbon monoxide, from the eighteenth to the twentieth century. This practice ceased in the UK in 1986.
Finches helped Charles Darwin understand the way that natural environments affect the evolution and adaptation of species. Darwin did not initially discern that the finches were closely related species, as they looked so different: some had adapted long, elegant beaks to reach the fruits of a plant, while others had strong, sturdy beaks for breaking nuts. This realization helped Darwin understand how different ecosystems shape species, contributing to the development of his theory of natural selection.
The taxonomy of the finch family, in particular the cardueline finches, has a long and complicated history. The study of the relationship between the taxa has been confounded by the recurrence of similar morphologies due to the convergence of species occupying similar niches. In 1968 the American ornithologist Raymond Andrew Paynter Jr. wrote:
Limits of the genera and relationships among the species are less understood – and subject to more controversy – in the carduelines than in any other species of passerines, with the possible exception of the estrildines [waxbills].
Beginning in around 1990 a series of phylogenetic studies based on mitochondrial and nuclear DNA sequences resulted in substantial revisions being made to the taxonomy. Several groups of birds that had previously been assigned to other families were found to be related to the finches. The Neotropical "Euphonia" and the "Chlorophonia" were formerly placed in the tanager family Thraupidae due to their similar appearance but analysis of mitochondrial DNA sequences revealed that both genera were more closely related to the finches. They are now placed in a separate subfamily Euphoniinae within the Fringillidae. The Hawaiian honeycreepers were at one time placed in their own family, Drepanididae but were found to be closely related to the "Carpodacus" rosefinches and are now placed within the Carduelinae subfamily. The three largest genera, "Carpodacus", "Carduelis" and "Serinus" were found to be polyphyletic. Each was split into monophyletic genera. The American rosefinches were moved from "Carpodacus" to "Haemorhous". "Carduelis" was split by moving the greenfinches to "Chloris" and a large clade into "Spinus" leaving just three species in the original genus. Thirty seven species were moved from "Serinus" to "Crithagra" leaving eight species in the original genus. Today the family Fringillidae is divided into three subfamilies, the Fringillinae containing a single genus with the chaffinches, the Carduelinae containing 183 species divided into 49 genera, and the Euphoniinae containing the "Euphonia" and the "Chlorophonia".
Although Przewalski's "rosefinch" ("Urocynchramus pylzowi") has ten primary flight feathers rather than the nine primaries of other finches, it was sometimes classified in the Carduelinae. It is now assigned to a distinct family, Urocynchramidae, monotypic as to genus and species, and with no particularly close relatives among the Passeroidea.
Fossil remains of true finches are rare, and those that are known can mostly be assigned to extant genera at least. Like the other Passeroidea families, the true finches seem to be of roughly Middle Miocene origin, around 20 to 10 million years ago (Ma). An unidentifiable finch fossil from the Messinian age, around 12 to 7.3 million years ago, during the Late Miocene subepoch, has been found at Polgárdi in Hungary.
The scientific name Fringillidae comes from the Latin word "fringilla" for the common chaffinch ("Fringilla coelebs"), a member of the family which is common in Europe. The name was coined (as Fringilladæ) by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1820.
The smallest "classical" true finches are the Andean siskin ("Spinus spinescens") at as little as 9.5 cm (3.8 in) and the lesser goldfinch ("Spinus psaltria") at as little as . The largest species is probably the collared grosbeak ("Mycerobas affinis") at up to and , although larger lengths, to in the pine grosbeak ("Pinicola enucleator"), and weights, to in the evening grosbeak ("Hesperiphona vespertinus"), have been recorded in species which are slightly smaller on average. They typically have strong, stubby beaks, which in some species can be quite large; however, Hawaiian honeycreepers are famous for the wide range of bill shapes and sizes brought about by adaptive radiation. All true finches have 9 primary remiges and 12 rectrices. The basic plumage colour is brownish, sometimes greenish; many have considerable amounts of black, while white plumage is generally absent except as wing-bars or other signalling marks. Bright yellow and red carotenoid pigments are commonplace in this family, and thus blue structural colours are rather rare, as the yellow pigments turn the blue color into green. Many, but by no means all true finches have strong sexual dichromatism, the females typically lacking the bright carotenoid markings of males.
The finches have a near-global distribution, being found across the Americas, Eurasia and Africa, as well as some island groups such as the Hawaiian islands. They are absent from Australasia, Antarctica, the Southern Pacific and the islands of the Indian Ocean, although some European species have been widely introduced in Australia and New Zealand.
Finches are typically inhabitants of well-wooded areas, but some can be found on mountains or even in deserts.
The finches are primarily granivorous, but euphoniines include considerable amounts of arthropods and berries in their diet, and Hawaiian honeycreepers evolved to utilize a wide range of food sources, including nectar. The diet of Fringillidae nestlings includes a varying amount of small arthropods. True finches have a bouncing flight like most small passerines, alternating bouts of flapping with gliding on closed wings. Most sing well and several are commonly seen cagebirds; foremost among these is the domesticated canary ("Serinus canaria domestica"). The nests are basket-shaped and usually built in trees, more rarely in bushes, between rocks or on similar substrate.
The family Fringillidae contains 228 species divided into 50 genera and three subfamilies. The subfamily Carduelinae includes 18 extinct Hawaiian honeycreepers and the extinct Bonin grosbeak. See List of Fringillidae species for further details.
Subfamily Fringillinae
Subfamily Carduelinae
Subfamily Euphoniinae | https://en.wikipedia.org/wiki?curid=11711 |
Facilitated diffusion
Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient, reflecting the diffusive nature of the process.
Facilitated diffusion is different from simple diffusion in several ways.
Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that make up the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Polar molecules and ions are instead transported by proteins that form transmembrane channels. These channels are gated, meaning that they open and close, and thus regulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids).
Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol-binding protein). Because no energy is required for facilitated diffusion, the metabolites are not altered; only the permease changes its shape in order to transport them. The form of transport through a cell membrane in which a metabolite is modified is called group translocation.
Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism.
Various attempts have been made by engineers to mimic the process of facilitated transport in synthetic (i.e., non-biological) membranes for use in industrial-scale gas and liquid separations, but these have met with limited success to date, most often for reasons related to poor carrier stability and/or dissociation of the carrier from the membrane.
In living organisms, the main physical and biochemical processes required for survival are regulated by diffusion. Facilitated diffusion is one form of diffusion and is important in several metabolic processes of living cells. One vital role of facilitated diffusion is as the main mechanism behind the binding of transcription factors (TFs) to designated target sites on the DNA molecule. The well-known in vitro model of facilitated diffusion, which takes place outside of a living cell, describes a 3-dimensional pattern of diffusion in the cytosol combined with 1-dimensional diffusion along the DNA contour. After extensive research on processes occurring outside the cell, this mechanism was generally accepted, but it remained to be verified that it could take place in vivo, inside living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF-DNA binding to occur. After analyzing how long it takes TFs to diffuse along the DNA contour and through the cytoplasm, they concluded that in vitro and in vivo behaviour are alike, in that the association and dissociation rates of TFs to and from the DNA are similar in both. Also, on the DNA contour, motion is slower and target sites are easy to localize, while in the cytoplasm, motion is faster but TFs are not sensitive to their targets, so binding is restricted.
Single-molecule imaging is an imaging technique which provides the resolution necessary to study the transcription-factor binding mechanism in living cells. In prokaryotic cells such as "E. coli", facilitated diffusion is required for regulatory proteins to locate and bind target sites on DNA base pairs. Two main steps are involved: the protein binds to a non-specific site on the DNA and then diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during sliding the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. the Lac repressor) closer to the DNA, increasing their attraction and enabling them to bind, as well as a steric effect which excludes the crowder proteins from this region (the Lac operator region). Blocker proteins participate in 1-D diffusion only, i.e. they bind to and diffuse along the DNA contour, not through the cytosol.
The in vivo model mentioned above explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. As in prokaryotic cells, facilitated diffusion in eukaryotes occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein as it is either bound to a chromatin thread or freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes accelerates the searching process and minimizes the search time by increasing the DNA-protein affinity.
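The alternating 3-D/1-D search described above is often summarized with a simple back-of-the-envelope model (a Berg-von Hippel-style estimate). The sketch below is illustrative only: the genome size is roughly that of "E. coli", and the diffusion constant and excursion time are assumed round numbers, not values from the studies cited.

```python
import math

# Toy estimate of the facilitated-search time described above: a TF
# alternates 1-D sliding along DNA with 3-D excursions through the
# cytoplasm. All parameter values are illustrative assumptions.

def search_time(genome_bp, slide_len, d1, tau_3d):
    """Mean search time: (genome/slide_len) rounds, each costing one
    1-D sliding phase (slide_len^2 / d1) plus one 3-D excursion."""
    rounds = genome_bp / slide_len
    tau_1d = slide_len ** 2 / d1
    return rounds * (tau_1d + tau_3d)

genome = 4.6e6   # roughly the E. coli genome, in base pairs
d1 = 1.0e6       # assumed 1-D diffusion constant, bp^2 per second
tau_3d = 1.0e-2  # assumed duration of one 3-D excursion, seconds

# Classic result: total time is minimized when the sliding length is
# sqrt(d1 * tau_3d), i.e. when the 1-D and 3-D phases take equal time.
opt = math.sqrt(d1 * tau_3d)
t_opt = search_time(genome, opt, d1, tau_3d)
t_short = search_time(genome, opt / 10, d1, tau_3d)
t_long = search_time(genome, opt * 10, d1, tau_3d)
assert t_opt < t_short and t_opt < t_long
print(f"optimal slide length ~{opt:.0f} bp, search time ~{t_opt:.0f} s")
```

The point of the toy model is qualitative: combining the two diffusion modes beats relying on either alone, matching the observation that sliding localizes targets while 3-D hops cover distance quickly.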
Oxygen binds to red blood cells in the bloodstream; the affinity of hemoglobin on red blood cell surfaces for oxygen enhances this binding. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand, oxygen, and the carrier, either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was first demonstrated by Wittenberg and Scholander, who carried out experiments testing the steady-state diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where oxygen pressure can be relatively controlled.
For oxygen diffusion to occur, there must be full saturation pressure on one side of the membrane and fully reduced pressure on the other side; that is, one side of the membrane must have a higher concentration. During facilitated diffusion, hemoglobin increases the rate of diffusion of oxygen, and facilitated diffusion occurs as oxyhemoglobin molecules are randomly displaced.
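As a rough illustration of this carrier effect, the steady-state flux of each diffusing species can be written with Fick's first law and summed. The numbers below are made up, in arbitrary consistent units, chosen only so that the carrier term dominates the way it does physiologically; this is a sketch, not measured data.

```python
# Fick's-law sketch of facilitated oxygen transport: the total flux
# across a layer is the free-O2 diffusion term plus a term from
# diffusing oxyhemoglobin. All values below are illustrative.

def fick_flux(diffusivity, conc_drop, thickness):
    """Fick's first law at steady state: J = D * (delta C) / L."""
    return diffusivity * conc_drop / thickness

free = fick_flux(diffusivity=2.0e-5, conc_drop=5.0e-8, thickness=1.0e-2)
carried = fick_flux(diffusivity=7.0e-7, conc_drop=9.0e-6, thickness=1.0e-2)

# Hemoglobin diffuses far more slowly than free O2, but dissolved O2 is
# so dilute that the carrier can still move most of the oxygen.
print(round(carried / free, 1))  # ~6.3x the free-diffusion flux
```

The design of the sketch mirrors the text: the gradient (concentration drop) drives both terms, and the carrier augments, rather than replaces, free diffusion.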
Carbon monoxide undergoes a facilitated diffusion process similar to that of oxygen; both make use of the high affinity of hemoglobin and myoglobin for the gas. Carbon monoxide likewise combines with hemoglobin and myoglobin via facilitated diffusion, but the rates at which the two gases react differ: carbon monoxide has a dissociation velocity 100 times less than that of oxygen, while its affinity is 40 times higher than oxygen's for myoglobin and 250 times higher for hemoglobin.
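The affinity figures above can be combined into a quick competitive-binding estimate using Haldane's relation, [HbCO]/[HbO2] = M × (pCO/pO2), where M is CO's affinity relative to oxygen. The helper below is a sketch under that assumption; the M values come from the text (about 250 for hemoglobin, about 40 for myoglobin), and the partial pressures are illustrative.

```python
# Haldane's relation for competitive CO/O2 binding to a carrier:
# ratio of CO-bound to O2-bound carrier = M * (pCO / pO2).
# M values are taken from the text; pressures are illustrative.

def carboxy_to_oxy_ratio(m_value, p_co, p_o2):
    """Ratio of CO-bound to O2-bound carrier at given partial pressures."""
    return m_value * (p_co / p_o2)

# With M = 250, carbon monoxide at only 1/250th of the oxygen pressure
# already ties up as much hemoglobin as oxygen does.
ratio = carboxy_to_oxy_ratio(250, p_co=100.0 / 250.0, p_o2=100.0)
print(round(ratio, 6))  # 1.0: equal amounts of HbCO and HbO2
```

This is why even trace CO is dangerous: the huge M value lets a tiny partial pressure out-compete oxygen for the shared carrier.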
Glucose is a six-carbon sugar that provides energy needed by cells. Since glucose is a large molecule, it is difficult to transport across the membrane through simple diffusion. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein in the membrane binds to the glucose and alters its shape such that the glucose can easily be transported from one side of the membrane to the other. Movement of glucose into the cell can be rapid or slow depending on the number of membrane-spanning carrier proteins. Glucose is transported against its concentration gradient by a sodium-dependent glucose symporter, which provides a driving force for other glucose molecules in the cells. Facilitated diffusion helps in the release of accumulated glucose into the extracellular space adjacent to the blood capillary. | https://en.wikipedia.org/wiki?curid=11712 |
McDonnell Douglas F-15 Eagle
The McDonnell Douglas F-15 Eagle is an American twin-engine, all-weather tactical fighter aircraft designed by McDonnell Douglas (now part of Boeing). Following reviews of proposals, the United States Air Force selected McDonnell Douglas's design in 1969 to meet the service's need for a dedicated air superiority fighter. The Eagle first flew in July 1972 and entered service in 1976. It is among the most successful modern fighters, with over 100 victories and no losses in aerial combat; the majority of the kills were scored by the Israeli Air Force.
The Eagle has been exported to Israel, Japan, and Saudi Arabia. The F-15 was originally envisioned as a pure air-superiority aircraft; its design included a secondary ground-attack capability that was largely unused. The design proved flexible enough that an improved all-weather strike derivative, the F-15E Strike Eagle, was later developed; it entered service in 1989 and has been exported to several nations. As of 2017, the aircraft is still being produced in different variants.
The F-15 can trace its origins to the early Vietnam War, when the U.S. Air Force and the U.S. Navy fought each other over future tactical aircraft. Defense Secretary Robert McNamara was pressing for both services to use as many common aircraft as possible, even if performance compromises were involved. As part of this policy, the USAF and Navy had embarked on the TFX (F-111) program, aiming to deliver a medium-range interdiction aircraft for the Air Force that would also serve as a long-range interceptor aircraft for the Navy.
In January 1965, Secretary McNamara asked the Air Force to consider a new low-cost tactical fighter design for short-range roles and close air support to replace several types like the F-100 Super Sabre and various light bombers then in service. Several existing designs could fill this role; the Navy favored the Douglas A-4 Skyhawk and LTV A-7 Corsair II, which were pure attack aircraft, while the Air Force was more interested in the Northrop F-5 fighter with a secondary attack capability. The A-4 and A-7 were more capable in the attack role, while the F-5 less so, but could defend itself. If the Air Force chose a pure attack design, maintaining air superiority would be a priority for a new airframe. The next month, a report on light tactical aircraft suggested the Air Force purchase the F-5 or A-7, and consider a new higher-performance aircraft to ensure its air superiority. This point was reinforced after the loss of two Republic F-105 Thunderchief aircraft to obsolete MiG-17s on 4 April 1965.
In April 1965, Harold Brown, at that time director of the Department of Defense Research and Engineering, stated the favored position was to consider the F-5 and begin studies of an "F-X". These early studies envisioned a production run of 800 to 1,000 aircraft and stressed maneuverability over speed; it also stated that the aircraft would not be considered without some level of ground-attack capability. On 1 August, Gabriel Disosway took command of Tactical Air Command and reiterated calls for the F-X, but lowered the required performance from Mach 3.0 to 2.5 to lower costs.
An official requirements document for an air superiority fighter was finalized in October 1965, and sent out as a request for proposals to 13 companies on 8 December. Meanwhile, the Air Force chose the A-7 over the F-5 for the support role on 5 November 1965, giving further impetus for an air superiority design as the A-7 lacked any credible air-to-air capability.
Eight companies responded with proposals. Following a downselect, four companies were asked to provide further developments. In total, they developed some 500 design concepts. Typical designs featured variable-sweep wings, weight over , a top speed of Mach 2.7, and a thrust-to-weight ratio of 0.75. When the proposals were studied in July 1966, the aircraft were roughly the size and weight of the TFX F-111 and, like that aircraft, could not be considered air-superiority fighters.
Through this period, studies of combat over Vietnam were producing worrying results. Theory had stressed long-range combat using missiles and optimized aircraft for this role. The result was highly loaded aircraft with large radar and excellent speed, but limited maneuverability and often lacking a gun. The canonical example was the McDonnell Douglas F-4 Phantom II, used by the USAF, USN, and U.S. Marine Corps to provide air superiority over Vietnam, the only fighter with enough power, range, and maneuverability to be given the primary task of dealing with the threat of Soviet fighters while flying with visual engagement rules.
In practice, due to policy and practical reasons, aircraft were closing to visual range and maneuvering, placing the larger US aircraft at a disadvantage to the much less expensive day fighters such as the MiG-21. Missiles proved to be much less reliable than predicted, especially at close range. Although improved training and the introduction of the M61 Vulcan cannon on the F-4 did much to address the disparity, these early outcomes led to considerable re-evaluation of the 1963 Project Forecast doctrine. This led to John Boyd's energy–maneuverability theory, which stressed that extra power and maneuverability were key aspects of a successful fighter design and these were more important than outright speed. Through tireless championing of the concepts and good timing with the "failure" of the initial F-X project, the "fighter mafia" pressed for a lightweight day fighter that could be built and operated in large numbers to ensure air superiority. In early 1967, they proposed that the ideal design had a thrust-to-weight ratio near 1:1, a maximum speed further reduced to Mach 2.3, a weight of , and a wing loading of .
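Boyd's energy-maneuverability theory can be stated compactly with two standard formulas: specific energy E_s = h + v^2/(2g), the aircraft's energy per unit weight, and specific excess power P_s = (T - D)v/W, the rate at which that energy can be gained. The sketch below applies these textbook formulas to hypothetical numbers; none of the figures describe any real aircraft.

```python
# Two standard energy-maneuverability quantities:
#   specific energy        E_s = h + v^2 / (2 g)
#   specific excess power  P_s = (T - D) * v / W
# All numeric inputs below are hypothetical, for illustration only.

G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(altitude_m, speed_ms):
    """Energy height: altitude plus kinetic energy per unit weight."""
    return altitude_m + speed_ms ** 2 / (2 * G)

def specific_excess_power(thrust_n, drag_n, speed_ms, weight_n):
    """Rate at which the aircraft can gain specific energy (m/s)."""
    return (thrust_n - drag_n) * speed_ms / weight_n

# A design with thrust-to-weight near 1:1, as the F-X studies sought,
# keeps a healthy P_s even when drag rises sharply in a hard turn.
ps = specific_excess_power(thrust_n=175_000.0, drag_n=120_000.0,
                           speed_ms=250.0, weight_n=180_000.0)
print(round(ps, 1), "m/s available for climb or acceleration")
```

The design insight the theory captures is exactly the one in the text: excess power, not top speed, is what lets a fighter regain energy after maneuvering, which is why the "fighter mafia" pushed thrust-to-weight toward 1:1.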
By this time, the Navy had decided the F-111 would not meet their requirements and began the development of a new dedicated fighter design, the VFAX program. In May 1966, McNamara again asked the forces to study the designs and see whether the VFAX would meet the Air Force's F-X needs. The resulting studies took 18 months and concluded that the desired features were too different; the Navy stressed loiter time and mission flexibility, while the Air Force was now looking primarily for maneuverability.
In 1967, the Soviet Union revealed the Mikoyan-Gurevich MiG-25 at the Domodedovo airfield near Moscow. The MiG-25 was designed as a high-speed, high-altitude interceptor aircraft, and made many performance tradeoffs to excel in this role. Among these was the requirement for very high speed, over Mach 2.8, which demanded the use of stainless steel instead of aluminum for many parts of the aircraft. The added weight demanded a much larger wing to allow the aircraft to operate at the required high altitudes. However, to observers, it appeared outwardly similar to the very large F-X studies, an aircraft with high speed and a large wing offering high maneuverability, leading to serious concerns throughout the Department of Defense and the various arms that the US was being outclassed. The MiG-23 was likewise a subject of concern, and it was generally believed to be a better aircraft than the F-4. The F-X would outclass the MiG-23, but now the MiG-25 appeared to be superior in speed, ceiling, and endurance to all existing US fighters, even the F-X. Thus, an effort to improve the F-X followed.
Both Headquarters USAF and TAC continued to call for a multipurpose aircraft, while both Disosway and Air Chief of Staff Bruce K. Holloway pressed for a pure air-superiority design that would be able to meet the expected performance of the MiG-25. During the same period, the Navy had ended its VFAX program and instead accepted a proposal from Grumman Aircraft for a smaller and more maneuverable design known as VFX, later becoming the Grumman F-14 Tomcat. VFX was considerably closer to the evolving F-X requirements. The Air Force in-fighting was eventually ended by the worry that the Navy's VFAX would be forced on them; in May 1968, it was stated that "We finally decided – and I hope there is no one who still disagrees – that this aircraft is going to be an air superiority fighter".
In September 1968, a request for proposals was released to major aerospace companies. The requirements called for a single-seat fighter having a maximum take-off weight of for the air-to-air role, with a maximum speed of Mach 2.5 and a thrust-to-weight ratio of nearly 1:1 at mission weight. They also called for a twin-engined arrangement, as this was believed to respond to throttle changes more rapidly and might offer commonality with the Navy's VFX program. Details of the avionics, however, were left largely undefined, as it was not yet clear whether to build a larger aircraft with a powerful radar that could detect the enemy at longer ranges, or a smaller aircraft that the enemy would have more difficulty detecting.
Four companies submitted proposals, with the Air Force eliminating General Dynamics and awarding contracts to Fairchild Republic, North American Rockwell, and McDonnell Douglas for the definition phase in December 1968. The companies submitted technical proposals by June 1969. The Air Force announced the selection of McDonnell Douglas on 23 December 1969. The winning design resembled the twin-tailed F-14, but with fixed wings; both designs were based on configurations studied in wind-tunnel testing by NASA.
The Eagle's initial versions were the F-15 single-seat variant and TF-15 twin-seat variant. (After the F-15C was first flown, the designations were changed to "F-15A" and "F-15B"). These versions would be powered by new Pratt & Whitney F100 engines to achieve a combat thrust-to-weight ratio in excess of 1:1. A proposed 25-mm Ford-Philco GAU-7 cannon with caseless ammunition suffered development problems. It was dropped in favor of the standard M61 Vulcan gun. The F-15 used conformal carriage of four Sparrow missiles like the Phantom. The fixed wing was put onto a flat, wide fuselage that also provided an effective lifting surface. The first F-15A flight was made on 27 July 1972, with the first flight of the two-seat F-15B following in July 1973.
The F-15 has a "look-down/shoot-down" radar that can distinguish low-flying moving targets from ground clutter. It would use computer technology with new controls and displays to lower pilot workload and require only one pilot to save weight. Unlike the F-14 or F-4, the F-15 has only a single canopy frame with clear vision forward. The USAF introduced the F-15 as "the first dedicated USAF air-superiority fighter since the North American F-86 Sabre".
The F-15 was favored by customers such as the Israeli and Japanese air arms. Criticism from the fighter mafia that the F-15 was too large to be a dedicated dogfighter and too expensive to procure in large numbers led to the Lightweight Fighter (LWF) program, which produced the USAF's General Dynamics F-16 Fighting Falcon and the middle-weight Navy McDonnell Douglas F/A-18 Hornet.
The single-seat F-15C and two-seat F-15D models entered production in 1978 and conducted their first flights in February and June of that year. These models were fitted with the Production Eagle Package (PEP 2000), which included additional internal fuel, provisions for exterior conformal fuel tanks, and an increased maximum takeoff weight. The increased takeoff weight allows internal fuel, a full weapons load, conformal fuel tanks, and three external fuel tanks to be carried. The APG-63 radar uses a programmable signal processor (PSP), enabling the radar to be reprogrammed for additional purposes such as the addition of new armaments and equipment. The PSP was the first of its kind in the world, and the upgraded APG-63 radar was the first radar to use it. Other improvements included strengthened landing gear, a new digital central computer, and an overload warning system, which allows the pilot to fly up to 9 g at all weights.
The F-15 Multistage Improvement Program (MSIP) was initiated in February 1983 with the first production MSIP F-15C produced in 1985. Improvements included an upgraded central computer; a Programmable Armament Control Set, allowing for advanced versions of the AIM-7, AIM-9, and AIM-120A missiles; and an expanded Tactical Electronic Warfare System that provides improvements to the ALR-56C radar warning receiver and ALQ-135 countermeasure set. The final 43 F-15Cs included the Hughes APG-70 radar developed for the F-15E; these are sometimes referred to as Enhanced Eagles. Earlier MSIP F-15Cs with the APG-63 were upgraded to the APG-63(V)1 to improve maintainability and to provide performance similar to the APG-70. Existing F-15s were retrofitted with these improvements.
In 1979, McDonnell Douglas and the F-15's radar manufacturer, Hughes, teamed up to privately develop a strike-fighter version of the F-15. This version competed in the Air Force's Dual-Role Fighter competition starting in 1982. The F-15E strike variant was selected for production over General Dynamics' competing F-16XL in 1984. Beginning in 1985, F-15C and D models were equipped with the improved P&W F100-PW-220 engine and digital engine controls, providing quicker throttle response, reduced wear, and lower fuel consumption. Starting in 1997, original F100-PW-100 engines were upgraded to a similar configuration under the designation F100-PW-220E.
Beginning in 2007, 179 USAF F-15Cs would be retrofitted with the AN/APG-63(V)3 Active Electronically Scanned Array radar. A significant number of F-15s are to be equipped with the Joint Helmet Mounted Cueing System. Lockheed Martin is working on an IRST system for the F-15C. A follow-on upgrade called the Eagle passive/active warning survivability system (EPAWSS) was planned, but remained unfunded. Boeing was selected in October 2015 to serve as prime contractor for the EPAWSS, with BAE Systems selected as a subcontractor. The EPAWSS is an all-digital system with advanced electronic countermeasures, radar warning, and increased chaff and flare capabilities in a smaller footprint than the 1980s-era Tactical Electronic Warfare System. More than 400 F-15Cs and F-15Es will have the system installed.
In September 2015, Boeing unveiled its 2040C Eagle upgrade, designed to keep the F-15 relevant through 2040. Seen as a necessity because of the low numbers of F-22s procured, the upgrade builds upon the company's F-15SE Silent Eagle concept with low-observable features. Most improvements focus on lethality including quad-pack munitions racks to double its missile load to 16, conformal fuel tanks for extended range, "Talon HATE" communications pod to communicate with fifth-generation fighters, the APG-63(v)3 AESA radar, a long-range infrared search and track sensor, and BAE Systems' EPAWSS systems.
The F-15 has an all-metal semi-monocoque fuselage with a large cantilever, shoulder-mounted wing. The wing planform of the F-15 suggests a modified cropped delta shape with a leading-edge sweepback angle of 45°. Ailerons and a simple high-lift flap are located on the trailing edge. No leading-edge maneuvering flaps are used; this complication was avoided by the combination of low wing loading and fixed leading-edge camber that varies with spanwise position along the wing. Airfoil thickness ratios vary from 6% at the root to 3% at the tip.
The empennage is metal and composite construction, with twin aluminium/composite material honeycomb structure vertical stabilizers with boron-composite skin, resulting in an exceptionally thin tailplane and rudders. Composite horizontal all-moving tails outboard of the vertical stabilizers move independently to provide roll control in some flight maneuvers. The F-15 has a spine-mounted air brake and retractable tricycle landing gear. It is powered by two Pratt & Whitney F100 axial compressor turbofan engines with afterburners, mounted side-by-side in the fuselage and fed by intake ramps. The cockpit is mounted high in the forward fuselage with a one-piece windscreen and large canopy for increased visibility and a 360° field of view for the pilot. The airframe began to incorporate advanced superplastically formed titanium components in the 1980s.
The F-15's maneuverability is derived from low wing loading (weight to wing area ratio) with a high thrust-to-weight ratio, enabling the aircraft to turn tightly without losing airspeed. The F-15 can climb to in around 60 seconds. At certain speeds, the dynamic thrust output of the dual engines is greater than the aircraft's combat weight and drag, so it has the ability to accelerate vertically. The weapons and flight-control systems are designed so that one person can safely and effectively perform air-to-air combat. The A and C models are single-seat variants; these were the main air-superiority versions produced. B and D models add a second seat behind the pilot for training. E models use the second seat for a weapon systems officer. Visibly, the F-15 has a unique feature vis-à-vis other modern fighter aircraft: it does not have the distinctive "turkey feather" aerodynamic exhaust petals covering its engine nozzles, because the petal design on the F-15 was problematic and could fall off in flight; they were therefore removed, resulting in a 3% increase in aerodynamic drag.
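The condition for vertical acceleration described above — thrust exceeding the sum of weight and drag — can be made concrete with a short sketch. The figures used here are rough, illustrative approximations (two afterburning engines at about 106 kN each and a 20,000 kg combat mass are assumptions for demonstration, not official specifications):

```python
# Illustrative sketch of the thrust-to-weight condition for vertical
# acceleration. All figures are assumed/approximate, not official data.
G = 9.80665  # standard gravity, m/s^2

def thrust_to_weight(thrust_n: float, mass_kg: float) -> float:
    """Ratio of total engine thrust to aircraft weight."""
    return thrust_n / (mass_kg * G)

def vertical_accel(thrust_n: float, mass_kg: float, drag_n: float) -> float:
    """Net acceleration (m/s^2) in a pure vertical climb; positive only
    when thrust exceeds weight plus drag."""
    return (thrust_n - mass_kg * G - drag_n) / mass_kg

thrust = 2 * 106_000.0   # two afterburning engines, ~106 kN each (assumed)
mass = 20_000.0          # combat mass in kg (assumed)

ratio = thrust_to_weight(thrust, mass)
accel = vertical_accel(thrust, mass, drag_n=5_000.0)  # drag figure assumed
print(round(ratio, 2), round(accel, 2))  # ratio > 1.0 -> can accelerate straight up
```

With these assumed numbers the thrust-to-weight ratio comes out just above 1, matching the article's point that the Eagle can accelerate while climbing vertically at certain speeds and weights.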
A multimission avionics system includes a head-up display (HUD), advanced radar, AN/ASN-109 inertial guidance system, flight instruments, ultra high frequency communications, and tactical air navigation system and instrument landing system receivers. It also has an internally mounted, tactical electronic warfare system, Identification friend or foe system, an electronic countermeasures suite, and a central digital computer.
The HUD projects all essential flight information gathered by the integrated avionics system. This display, visible in any light condition, provides the pilot information necessary to track and destroy an enemy aircraft without having to look down at cockpit instruments.
The F-15's versatile APG-63 and APG-70 pulse-Doppler radar systems can look up at high-flying targets and down at low-flying targets without being confused by ground clutter. These radars can detect and track aircraft and small high-speed targets at distances beyond visual range down to close range, and at altitudes down to treetop level. The APG-63 has a basic range of . The radar feeds target information into the central computer for effective weapons delivery. For close-in dogfights, the radar automatically acquires enemy aircraft, and this information is projected on the head-up display. The F-15's electronic warfare system provides both threat warning (radar warning receiver) and automatic countermeasures against selected threats.
A variety of air-to-air weaponry can be carried by the F-15. An automated weapon system enables the pilot to release weapons effectively and safely, using the head-up display and the avionics and weapons controls located on the engine throttles or control stick. When the pilot changes from one weapon system to another, visual guidance for the selected weapon automatically appears on the head-up display.
The Eagle can be armed with combinations of four different air-to-air weapons: AIM-7F/M Sparrow missiles or AIM-120 AMRAAM advanced medium-range air-to-air missiles on its lower fuselage corners, AIM-9L/M Sidewinder or AIM-120 AMRAAM missiles on two pylons under the wings, and an internal M61 Vulcan Gatling gun in the right wing root.
Low-drag conformal fuel tanks (CFTs) were developed for the F-15C and D models. They can be attached to the sides of the engine air intakes under each wing and are designed to the same load factors and airspeed limits as the basic aircraft. These tanks slightly degrade performance by increasing aerodynamic drag and cannot be jettisoned in-flight. However, they cause less drag than conventional external tanks. Each conformal tank can hold 750 U.S. gallons (2,840 l) of fuel. These CFTs increase range and reduce the need for in-flight refueling. All external stations for munitions remain available with the tanks in use. Moreover, Sparrow or AMRAAM missiles can be attached to the corners of the CFTs. The 57 FIS based at Keflavik NAS, Iceland, was the only C-model squadron to use CFTs on a regular basis due to its extended operations over the North Atlantic. With the closure of the 57 FIS, the F-15E is the only variant to carry them on a routine basis. CFTs have also been sold to Israel and Saudi Arabia.
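The stated conformal-tank capacity of 750 U.S. gallons corresponds to the parenthetical figure of roughly 2,840 litres. A quick check of the conversion, plus the total added by a pair of tanks (one per intake side, as described above):

```python
# U.S. gallon to litre conversion check for the CFT capacity figure.
US_GALLON_L = 3.785411784  # litres per U.S. gallon (exact by definition)

def gallons_to_litres(gal: float) -> float:
    return gal * US_GALLON_L

per_tank_l = gallons_to_litres(750)  # ~2,839 L, matching the ~2,840 l figure
pair_l = 2 * per_tank_l              # two CFTs fitted, one under each intake
print(round(per_tank_l), round(pair_l))
```

The small difference from the quoted 2,840 l is simply rounding in the article's figure.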
The McDonnell Douglas F-15E Strike Eagle is a two-seat, dual-role, totally integrated fighter for all-weather, air-to-air, and deep interdiction missions. The rear cockpit is upgraded to include four multipurpose cathode ray tube displays for aircraft systems and weapons management. The digital, triple-redundant Lear Siegler aircraft flight control system permits coupled automatic terrain following, enhanced by a ring-laser gyro inertial navigation system. For low-altitude, high-speed penetration and precision attack on tactical targets at night or in adverse weather, the F-15E carries a high-resolution APG-70 radar and LANTIRN pods to provide thermography. The newest F-15E version is the F-15 Advanced, which features fly-by-wire controls.
The APG-63(V)2 active electronically scanned array (AESA) radar has been retrofitted to 18 U.S. Air Force F-15C aircraft. This upgrade includes most of the new hardware from the APG-63(V)1, but adds an AESA to provide increased pilot situation awareness. The AESA radar has an exceptionally agile beam, providing nearly instantaneous track updates and enhanced multitarget tracking capability. The APG-63(V)2 is compatible with current F-15C weapon loads and enables pilots to take full advantage of AIM-120 AMRAAM capabilities, simultaneously guiding multiple missiles to several targets widely spaced in azimuth, elevation, or range. The further improved APG-63(V)3 AESA radar is expected to be fitted to 179 F-15C aircraft; the first upgraded aircraft was delivered in October 2010. The ZAP (Zone Acquisition Program) missile launch envelope has been integrated into the operational flight program system of all U.S. F-15 aircraft, providing dynamic launch zone and launch acceptability region information for missiles to the pilot by display cues in real-time.
The largest operator of the F-15 is the United States Air Force. The first Eagle, an F-15B, was delivered on 13 November 1974. In January 1976, the first Eagle destined for a combat squadron, the 555th TFS, was delivered. These initial aircraft carried the Hughes Aircraft (now Raytheon) APG-63 radar.
The first kill by an F-15 was scored by Israeli Air Force ace Moshe Melnik in 1979. During Israeli raids against Palestinian factions in Lebanon in 1979–1981, F-15As reportedly downed 13 Syrian MiG-21s and two Syrian MiG-25s. Israeli F-15As and Bs participated as escorts in Operation Opera, an air strike on an Iraqi nuclear reactor. In the 1982 Lebanon War, Israeli F-15s were credited with 41 Syrian aircraft destroyed (23 MiG-21s, 17 MiG-23s, and one Aérospatiale SA.342L Gazelle helicopter). During Operation Mole Cricket 19, Israeli F-15s and F-16s together shot down 82 Syrian fighter aircraft (MiG-21s, MiG-23s, and MiG-23Ms) with no losses.
Israel was the only operator to use and develop the air-to-ground abilities of the air-superiority F-15 variants, doing so because the fighter's range was well beyond that of other combat aircraft in the Israeli inventory in the 1980s. The first known use of F-15s for a strike mission was during Operation Wooden Leg on 1 October 1985, with six F-15Ds attacking PLO headquarters in Tunis with two GBU-15 guided bombs per aircraft and two F-15Cs restriking the ruins with six Mk-82 unguided bombs each. This was one of the few times air-superiority F-15s (A/B/C/D models) were used in tactical strike missions. Israeli air-superiority F-15 variants have since been extensively upgraded to carry a wider range of air-to-ground armaments, including JDAM GPS-guided bombs and Popeye missiles.
Royal Saudi Air Force F-15C pilots reportedly shot down two Iranian Air Force F-4E Phantom IIs in a skirmish on 5 June 1984.
The ASM-135 missile was designed to be a standoff antisatellite (ASAT) weapon, with the F-15 acting as a first stage. The Soviet Union could correlate a U.S. rocket launch with a spy satellite loss, but an F-15 carrying an ASAT would blend in among hundreds of F-15 flights. From January 1984 to September 1986, two F-15As were used as launch platforms for the ASAT missile. The F-15As were modified to carry one ASM-135 on the centerline station with extra equipment within a special centerline pylon. The launch aircraft executed a Mach 1.22, 3.8 g climb at 65° to release the ASAT missile at an altitude of . The flight computer was updated to control the zoom-climb and missile release.
The third test flight involved a retired P78-1 solar observatory satellite in a orbit, which was destroyed by kinetic energy. The pilot, USAF Major Wilbert D. "Doug" Pearson, became the only pilot to destroy a satellite. The ASAT program involved five test launches. The program was officially terminated in 1988.
The USAF began deploying F-15C, D, and E model aircraft to the Persian Gulf region in August 1990 for Operations Desert Shield and Desert Storm. During the Gulf War, the F-15 accounted for 36 of the 39 air-to-air victories by the U.S. Air Force against Iraqi forces. Iraq has confirmed the loss of 23 of its aircraft in air-to-air combat. The F-15C and D fighters were used in the air-superiority role, while F-15E Strike Eagles were used in air-to-ground attacks mainly at night, hunting modified Scud missile launchers and artillery sites using the LANTIRN system. According to the USAF, its F-15Cs had 34 confirmed kills of Iraqi aircraft during the 1991 Gulf War, most of them by missile fire: five Mikoyan MiG-29s, two MiG-25s, eight MiG-23s, two MiG-21s, two Sukhoi Su-25s, four Sukhoi Su-22s, one Sukhoi Su-7, six Dassault Mirage F1s, one Ilyushin Il-76 cargo aircraft, one Pilatus PC-9 trainer, and two Mil Mi-8 helicopters. Air superiority was achieved in the first three days of the conflict; many of the later kills were reportedly of Iraqi aircraft fleeing to Iran, rather than engaging American aircraft. A Strike Eagle achieved an aerial kill of an Iraqi Mi-8 helicopter with a laser-guided bomb. Two F-15Es were lost to ground fire, and another was damaged on the ground by a Scud strike on King Abdulaziz Air Base.
On 11 November 1990, a Royal Saudi Air Force (RSAF) pilot defected to Sudan with an F-15C fighter during Operation Desert Shield. Saudi Arabia paid US$40 million for the return of the aircraft three months later. RSAF F-15s shot down two Iraqi Mirage F1s during Operation Desert Storm. According to the Saudis, one F-15C was lost to a crash during the Gulf War in 1991. The Iraqi Air Force claims this fighter was part of a pair of F-15Cs that engaged two Iraqi MiG-25PDs and was hit by an R-40 missile before crashing.
USAF F-15s have since been deployed to support Operation Southern Watch, the patrolling of the no-fly zone in southern Iraq; Operation Provide Comfort in Turkey; NATO operations in Bosnia; and recent air expeditionary force deployments. In 1994, two U.S. Army Sikorsky UH-60 Black Hawks were mistakenly downed by USAF F-15Cs in northern Iraq in a friendly-fire incident. USAF F-15Cs shot down four Yugoslav MiG-29s using AIM-120 and AIM-7 radar-guided missiles during NATO's 1999 intervention in Kosovo, Operation Allied Force.
All F-15 aircraft were grounded by the USAF after a Missouri Air National Guard F-15C came apart in flight and crashed on 2 November 2007. The newer F-15E fleet was later cleared for continued operations. The US Air Force reported on 28 November 2007 that a critical location in the upper longerons on the F-15C model was suspected of causing the failure, allowing the fuselage forward of the air intakes, including the cockpit and radome, to separate from the airframe.
F-15A through D-model aircraft were grounded until the location received more detailed inspections and repairs as needed. The grounding of F-15s received media attention as it began to place strains on the nation's air-defense efforts. The grounding forced some states to rely on their neighboring states' fighters for air-defense protection, and Alaska to depend on Canadian Forces' fighter support.
On 8 January 2008, the USAF Air Combat Command (ACC) cleared a portion of its F-15A through D-model fleet for return to flying status. It also recommended a limited return to flight for units worldwide using the affected models. The accident review board report was released on 10 January 2008. The report stated that analysis of the F-15C wreckage determined that the longeron did not meet drawing specifications, which led to fatigue cracks and finally a catastrophic failure of the remaining support structures and breakup of the aircraft in flight. In a report released on 10 January 2008, nine other F-15s were identified to have similar problems in the longeron. As a result of these problems, General John D. W. Corley stated, "the long-term future of the F-15 is in question." On 15 February 2008, ACC cleared all its grounded F-15A/B/C/D fighters for flight pending inspections, engineering reviews, and any needed repairs. ACC also recommended release of other U.S. F-15A/B/C/D aircraft.
The F-15 has a combined air-to-air combat record of 104 kills to no losses. The F-15's air-superiority versions, the A/B/C/D models, have not suffered any losses to enemy action. Over half of all F-15 kills have been achieved by Israeli Air Force pilots.
On 16 September 2009, the last F-15A, an Oregon Air National Guard aircraft, was retired, marking the end of service for the F-15A and F-15B models in the United States.
With the retirement of the F-15A and B models, the F-15C and D models are supplemented in U.S. service by the newer F-22 Raptor. Regular Air Force F-15C and F-15D fighters are based overseas with the Pacific Air Forces at Kadena AB in Japan and with the U.S. Air Forces in Europe at RAF Lakenheath in the United Kingdom. Other regular Air Force F-15s are operated by ACC as adversary/aggressor platforms at Nellis AFB, Nevada, and by Air Force Materiel Command in test and evaluation roles at Edwards AFB, California, and Eglin AFB, Florida. All remaining combat-coded F-15Cs and F-15Ds are operated by the Air National Guard.
The USAF is upgrading 178 F-15C/Ds with the AN/APG-63(V)3 AESA radar, and equipping other F-15s with the Joint Helmet Mounted Cueing System as of 2006. In 2007, the Air Force planned to keep 178 F-15C/Ds along with 224 F-15Es in service beyond 2025.
As part of the Air Force's FY 2015 budget, the F-15C faced cuts or retirement in response to sequestration. Cuts are principally directed at platforms with single-mission capabilities. The retirement of some of the 250 F-15C fighters would save maintenance and upgrade costs, which could be redirected to speed procurement of the F-35 Lightning II. The air-to-air combat role would be taken up pre-eminently by the F-22 Raptor supported by the F-35. Even if this option is pursued, at least part of the F-15C fleet is likely to be preserved. The Air Force's FY 2015 budget proposal would reduce the F-15C fleet by 51 aircraft. Then in April 2017, Air Force officials announced plans to retire the F-15C/D in the mid-2020s and press more F-16s into roles occupied by the F-15. In December 2018, Bloomberg Government reported that the Pentagon (not the Air Force) would likely request US$1.2 billion in its 2020 budget request for 12 new-built F-15X fighters to replace older F-15Cs operated by Air National Guard units.
The F-15E will remain in service for years to come because of the model's primary air-to-ground role and the lower number of hours on the F-15E airframes.
During the Yemeni Civil War (2015–present), Houthis have used R-27T missiles modified to serve as surface-to-air missiles. A video released on 7 January 2018 shows a modified R-27T hitting a Saudi F-15 on a forward-looking infrared camera. Houthi sources claim to have downed the F-15, although this has been disputed: the missile apparently proximity-detonated, and the F-15 continued on its trajectory seemingly unaffected. Rebels later released footage showing an aircraft wreck, but serial numbers on the wreckage suggested the aircraft was a Panavia Tornado, also operated by Saudi forces. On 8 January, Saudi officials admitted the loss of an aircraft, attributing it to "technical reasons".
On 21 March 2018, Houthi rebels released a video in which they hit and possibly shot down a Saudi F-15 in Saada province. In the video, an R-27T air-to-air missile adapted for surface-to-air use was launched and appeared to hit a jet. As in the video of the previous similar hit recorded on 8 January, the target, while clearly hit, did not appear to be downed. Saudi forces confirmed the hit, while saying the jet safely landed at a Saudi base. Saudi official sources confirmed the incident, reporting that it happened at 3:48 pm local time after a surface-to-air defense missile was launched at the fighter jet from inside Saada airport.
Twelve prototypes were built and used for trials by the F-15 Joint Test Force at Edwards Air Force Base using McDonnell Douglas and United States Air Force personnel. Most prototypes were later used by NASA for trials and experiments.
A total of 175 F-15s have been lost to non-combat causes as of June 2016. However, the F-15 aircraft is very reliable with only 1 loss per 50,000 flight hours.
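The quoted figures — 175 non-combat losses at roughly one loss per 50,000 flight hours — imply a cumulative fleet flight time that can be checked with back-of-envelope arithmetic (this derived total is an inference from the article's two numbers, not a separately sourced figure):

```python
# Back-of-envelope check: implied cumulative F-15 fleet flight hours,
# derived from the article's loss count and loss rate (an inference only).
losses = 175              # non-combat losses as of June 2016
hours_per_loss = 50_000   # approximate flight hours per loss
implied_fleet_hours = losses * hours_per_loss
print(implied_fleet_hours)  # on the order of 8.75 million flight hours
```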
Although the F-15 continues to be a front-line fighter, a number of older USAF and IAF models have been retired, with several placed on outdoor display or in museums.
The F-15 was the subject of the IMAX movie "", about the RED FLAG exercises. Tom Clancy's nonfiction book "Fighter Wing" (1995) presents a detailed analysis of the Air Force's premier fighter aircraft, showcasing the F-15 Eagle and its capabilities.
The F-15 has also been a popular subject as a toy, and a fictional likeness of an aircraft similar to the F-15 has been used in cartoons, books, video games, animated television series, and animated films.
Grumman F-14 Tomcat
The Grumman F-14 Tomcat is an American supersonic, twin-engine, two-seat, twin-tail, variable-sweep wing fighter aircraft. It was the first such U.S. jet fighter with twin tails. The Tomcat was developed for the United States Navy's Naval Fighter Experimental (VFX) program after the collapse of the F-111B project. The F-14 was the first of the American Teen Series fighters, which were designed incorporating air combat experience against MiG fighters during the Vietnam War.
The F-14 first flew on 21 December 1970 and made its first deployment in 1974 with the U.S. Navy aboard , replacing the McDonnell Douglas F-4 Phantom II. The F-14 served as the U.S. Navy's primary maritime air superiority fighter, fleet defense interceptor, and tactical aerial reconnaissance platform into the 2000s. The Low Altitude Navigation and Targeting Infrared for Night (LANTIRN) pod system was added in the 1990s and the Tomcat began performing precision ground-attack missions.
In the 1980s, F-14s were used as land-based interceptors by the Islamic Republic of Iran Air Force during the Iran–Iraq War, where they saw combat against Iraqi warplanes. Iranian F-14s reportedly shot down at least 160 Iraqi aircraft during the war, while only 12 to 16 Tomcats were lost; at least half of these losses were due to accidents.
The Tomcat was retired by the U.S. Navy on 22 September 2006, having been supplanted by the Boeing F/A-18E/F Super Hornet. The F-14 remains in service with Iran's air force, having been exported to Iran in 1976. In November 2015, reports emerged of Iranian F-14s flying escort for Russian Tupolev Tu-95, Tu-160, and Tu-22M bombers on air strikes in Syria.
Beginning in the late 1950s, the U.S. Navy sought a long-range, high-endurance interceptor to defend its carrier battle groups against long-range anti-ship missiles launched from the jet bombers and submarines of the Soviet Union. The U.S. Navy needed a Fleet Air Defense (FAD) aircraft with a more powerful radar and longer range missiles than the F-4 Phantom II carried to intercept both enemy bombers and missiles. The Navy was directed to participate in the Tactical Fighter Experimental (TFX) program with the U.S. Air Force by Secretary of Defense Robert McNamara. McNamara wanted "joint" solutions to service aircraft needs to reduce development costs and had already directed the Air Force to buy the F-4 Phantom II, which was developed for the Navy and Marine Corps. The Navy strenuously opposed the TFX as it feared compromises necessary for the Air Force's need for a low-level attack aircraft would adversely impact the aircraft's performance as a fighter.
Weight and performance issues plagued the U.S. Navy F-111B variant for TFX and would not be resolved to the Navy's satisfaction. The F-111 manufacturer General Dynamics partnered with Grumman on the Navy F-111B. With the F-111B program in distress, Grumman began studying improvements and alternatives. In 1966, the Navy awarded Grumman a contract to begin studying advanced fighter designs. Grumman narrowed down these designs to its 303 design. Vice Admiral Thomas F. Connolly, Deputy Chief of Naval Operations for Air Warfare, flew the developmental F-111A variant on a flight and discovered that it had difficulty going supersonic and had poor carrier landing characteristics. He later testified before Congress about his concerns against the official U.S. Department of the Navy position and, in May 1968, Congress stopped funding for the F-111B, allowing the Navy to pursue an answer tailored to its requirements. The name "Tomcat" was partially chosen to pay tribute to Admiral Connolly, as the nickname "Tom's Cat" had already been widely used by the manufacturer, although the name also followed the Grumman tradition of naming its fighter aircraft after felines.
The F-111B had been designed for the long-range Fleet Air Defense (FAD) interceptor role, but not for new requirements for air combat based on the experience of American aircraft against agile MiG fighters over Vietnam. The Navy studied the need for VFAX, an additional fighter that was more agile than the F-4 Phantom for air-combat and ground-attack roles. Grumman continued work on its 303 design and offered it to the Navy in 1967, which led to fighter studies by the Navy. The company continued to refine the design into 1968.
In July 1968, the Naval Air Systems Command (NAVAIR) issued a request for proposals (RFP) for the Naval Fighter Experimental (VFX) program. VFX called for a tandem two-seat, twin-engined air-to-air fighter with a maximum speed of Mach 2.2. It would also have a built-in M61 Vulcan cannon and a secondary close air support role. The VFX's air-to-air missiles would be either six AIM-54 Phoenix or a combination of six AIM-7 Sparrow and four AIM-9 Sidewinder missiles. Bids were received from General Dynamics, Grumman, Ling-Temco-Vought, McDonnell Douglas and North American Rockwell; four bids incorporated variable-geometry wings.
McDonnell Douglas and Grumman were selected as finalists in December 1968. Grumman was selected for the contract award in January 1969. Grumman's design reused the TF30 engines from the F-111B, though the Navy planned on replacing them with the Pratt & Whitney F401-400 engines under development for the Navy, along with the related Pratt & Whitney F100 for the USAF. Though lighter than the F-111B, it was still the largest and heaviest U.S. fighter to fly from an aircraft carrier, a consequence of the requirement to carry the large AWG-9 radar and AIM-54 Phoenix missiles (from the F-111B) and an internal fuel load of .
Upon winning the contract for the F-14, Grumman greatly expanded its Calverton, Long Island, New York facility for evaluating the aircraft. Much of the testing, including the first of many compressor stalls and multiple ejections, took place over Long Island Sound. In order to save time and forestall interference from Secretary McNamara, the Navy skipped the prototype phase and jumped directly to full-scale development; the Air Force took a similar approach with its F-15. The F-14 first flew on 21 December 1970, just 22 months after Grumman was awarded the contract, and reached initial operational capability (IOC) in 1973. The United States Marine Corps was initially interested in the F-14 as an F-4 Phantom II replacement, going so far as to send officers to Fighter Squadron One Twenty-Four (VF-124) to train as instructors. The Marine Corps pulled out of any procurement when the development of the stores management system for ground-attack munitions was not pursued. An air-to-ground capability was not developed until the 1990s.
Firing trials involved launches against simulated targets of various types, from cruise missiles to high-flying bombers. AIM-54 Phoenix missile testing from the F-14 began in April 1972. The longest single Phoenix launch was successful against a target at a range of in April 1973. Another unusual test was made on 22 November 1973, when six missiles were fired within 38 seconds at Mach 0.78 and ; four scored direct hits.
With time, the early versions of all the missiles were replaced by more advanced versions, especially with the move to full solid-state electronics, which allowed better reliability, better ECCM, and more space for the rocket engine. The early arrangement of the AIM-54A Phoenix active-radar air-to-air missile, the AIM-7E-2 Sparrow semi-active radar homing air-to-air missile, and the AIM-9J Sidewinder heat-seeking air-to-air missile was thus superseded in the 1980s: the Phoenix by the B (1983) and C (1986) versions, the Sparrow by the F (1977), M (1982), and P (1987 or later) versions, and the Sidewinder by the L (1979) and M (1982) versions. Within these versions, there are several improved batches (for example, the Phoenix AIM-54C++).
The Tactical Airborne Reconnaissance Pod System (TARPS) was developed in the late 1970s for the F-14. Approximately 65 F-14As and all F-14Ds were modified to carry the pod. TARPS was primarily controlled by the Radar Intercept Officer (RIO) via an extra display for observing reconnaissance data. The "TARPS Digital (TARPS-DI)" was a 1996 upgrade featuring a digital camera. The digital camera was further updated beginning in 1998 with the "TARPS Completely Digital (TARPS-CD)" configuration that also provided real-time transmission of imagery.
Some F-14As received engine upgrades to the GE F110-400 in 1987. These upgraded Tomcats were redesignated F-14A+, a designation later changed to F-14B in 1991. The F-14D variant was developed at the same time; it included the GE F110-400 engines along with newer digital avionics systems, such as a glass cockpit and compatibility with the Link 16 secure datalink. The Digital Flight Control System (DFCS) notably improved the F-14's handling qualities at high angle of attack and in air combat maneuvering.
While the F-14 had been developed as a lightweight alternative to the F-111B, it was still the heaviest and most expensive fighter of its time. VFAX was revived in the 1970s as a lower-cost solution for replacing the Navy's and Marine Corps's fleets of F-4s and A-7s. VFAX was directed to review the fighters in the USAF Lightweight Fighter competition, which led to the development of the F/A-18 Hornet as a roughly midsize fighter and attack aircraft. In 1994, Congress rejected Grumman proposals to the Navy to upgrade the Tomcat beyond the D model (such as the Super Tomcat 21, the cheaper QuickStrike version, and the more advanced Attack Super Tomcat 21).
In the 1990s, with the pending retirement of the A-6 Intruder, the F-14 air-to-ground program was resurrected. Trials with live bombs had been carried out in the 1980s; the F-14 was cleared to use basic iron bombs in 1992. During Operation Desert Storm of the Gulf War, most air-to-ground missions were left to A-7, A-6 Intruder and F/A-18 Hornet squadrons, while the F-14s focused on air defense operations. Following Desert Storm, F-14As and F-14Bs underwent upgrades to avionics and cockpit displays to enable the use of precision munitions, enhance defensive systems, and apply structural improvements. The new avionics were comparable with the F-14D; these upgraded aircraft were designated F-14A (Upgrade) and F-14B (Upgrade) respectively.
By 1994, Grumman and the Navy were proposing ambitious plans for Tomcat upgrades to plug the gap between the retirement of the A-6 and the F/A-18E/F Super Hornet's entry into service. However, the upgrades would have taken too long to implement to meet the gap and were priced in the billions; the U.S. Congress considered this too expensive for an interim solution. A quick, inexpensive upgrade using the Low Altitude Navigation and Targeting Infrared for Night (LANTIRN) targeting pod was devised instead. The LANTIRN pod provided the F-14 with a forward-looking infrared (FLIR) camera for night operations and a laser target designator to direct laser-guided bombs (LGBs). Although LANTIRN is traditionally a two-pod system, comprising an AN/AAQ-13 navigation pod with terrain-following radar and a wide-angle FLIR and an AN/AAQ-14 targeting pod with a steerable FLIR and a laser target designator, the decision was made to use only the targeting pod. The Tomcat's LANTIRN pod was altered and improved over the baseline configuration, notably with a Global Positioning System / Inertial Navigation System (GPS-INS) capability to allow an F-14 to accurately locate itself. The pod was carried on the right wing glove pylon.
The LANTIRN pod did not require changes to the F-14's own system software, but the pod was designed to operate on a MIL-STD-1553B bus not present on the F-14A or B; consequently, Martin Marietta developed a special interface card for LANTIRN. The Radar Intercept Officer (RIO) received pod imagery on a 10-inch Programmable Tactical Information Display (PTID) or another multi-function display in the F-14's rear cockpit and guided LGBs using a new hand controller installed on the right side console. Initially, the hand controller replaced the RIO's TARPS control panel, meaning a Tomcat configured for LANTIRN could not carry TARPS and vice versa, but a workaround was later developed to allow a Tomcat to carry LANTIRN or TARPS as needed.
An upgraded LANTIRN named "LANTIRN 40K", for operations up to 40,000 ft, was introduced in 2001, followed by Tomcat Tactical Targeting (T3) and Fast Tactical Imagery (FTI), which provided precise target coordinate determination and the ability to transmit images in flight. Tomcats also gained the ability to carry the GBU-38 Joint Direct Attack Munition (JDAM) in 2003, giving them the option of a variety of LGBs and GPS-guided weapons. Some F-14Ds were upgraded in 2005 with a ROVER III Full Motion Video (FMV) downlink, a system that transmits real-time images from the aircraft's sensors to the laptop of a forward air controller (FAC) on the ground.
The F-14 Tomcat was designed as both an air superiority fighter and a long-range naval interceptor, enabling it both to escort attack aircraft when armed with Sparrow missiles and to fly the loitering fleet air defense interceptor role when armed with Phoenix missiles. The F-14 has a two-seat cockpit with a bubble canopy that affords all-around visibility, aiding aircrew in air-to-air combat. It features variable-geometry wings that swing automatically during flight: for high-speed intercept they sweep back, and for lower-speed flight they swing forward. It was designed to improve on the F-4 Phantom's air combat performance in most respects.
The F-14's fuselage and wings allow it to climb faster than the F-4, while the twin-tail arrangement offers better stability. The F-14 is equipped with an internal 20 mm M61 Vulcan Gatling cannon mounted on the left side (unlike the Phantom, which was not equipped with an internal gun in the US Navy), and can carry AIM-54 Phoenix, AIM-7 Sparrow, and AIM-9 Sidewinder anti-aircraft missiles. The twin engines are housed in widely spaced nacelles. The flat area of the fuselage between the nacelles is used to contain fuel and avionics systems, such as the wing-sweep mechanism and flight controls, as well as weaponry since the wings are not used for carrying ordnance. By itself, the fuselage provides approximately 40 to 60 percent of the F-14's aerodynamic lifting surface depending on the wing sweep position. The lifting body characteristics of the fuselage allowed one F-14 to safely land after suffering a mid-air collision that sheared off more than half of the plane's right wing in 1991.
The F-14's wing sweep can be varied between 20° and 68° in flight, and is normally controlled automatically by the Central Air Data Computer, which maintains wing sweep at the optimum lift-to-drag ratio as the Mach number varies; pilots can manually override the system if desired. When parked, the wings can be "overswept" to 75° to overlap the horizontal stabilizers and save deck space aboard carriers. In an emergency, the F-14 can land with the wings fully swept to 68°, although this presents a significant safety hazard due to the greatly increased stall speed; such an aircraft would typically be diverted from the aircraft carrier to a land base. The F-14 flew safely with an asymmetrical wing sweep during testing, and was deemed able to land aboard a carrier in an emergency if needed. The wing pivot points are spaced far apart, which has two benefits. First, weaponry can be fitted on pylons on the fixed wing glove, freeing the wings from swiveling pylons, a feature which had added significant drag on the F-111B. Second, since less of the total lifting area is variable, the center of lift moves less as the wings move, reducing trim drag at high speed. When the wing is swept back, its thickness-to-chord ratio decreases, which allows the aircraft to satisfy the Mach 2.4 top speed required by the U.S. Navy. The body of the aircraft contributes significantly to overall lift, so the Tomcat possesses a lower wing loading than its wing area alone would suggest; when carrying four Phoenix missiles or other heavy stores between the engines, this advantage is lost and maneuverability is reduced.
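The automatic scheduling described above can be pictured as a clamped interpolation from Mach number to sweep angle. The sketch below is illustrative only: the 20°–68° in-flight range and the 75° parked oversweep come from the text, while the Mach breakpoints and the linear shape are assumed placeholders, not the actual CADC schedule.

```python
# Illustrative wing-sweep schedule as a clamped linear interpolation.
# Only the 20-68 degree flight envelope and the 75 degree parked
# oversweep come from the source text; the Mach breakpoints (0.4 and
# 1.2) are made-up placeholders for the sake of the sketch.

def wing_sweep_deg(mach: float, parked: bool = False) -> float:
    if parked:
        return 75.0               # oversweep to overlap the stabilizers
    lo_mach, hi_mach = 0.4, 1.2   # assumed schedule breakpoints
    lo_sweep, hi_sweep = 20.0, 68.0
    if mach <= lo_mach:
        return lo_sweep           # wings fully forward at low speed
    if mach >= hi_mach:
        return hi_sweep           # wings fully aft for high-speed flight
    frac = (mach - lo_mach) / (hi_mach - lo_mach)
    return lo_sweep + frac * (hi_sweep - lo_sweep)
```

A real schedule would be a tuned lookup against flight-test data rather than a straight line, but the clamping at the envelope limits is the essential behavior.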
Ailerons are not fitted; roll control is provided by wing-mounted spoilers at low speed (which are disabled if the sweep angle exceeds 57°) and by differential operation of the all-moving tailerons at high speed. Full-span slats and flaps are used to increase lift for both landing and combat, with slats set at 17° for landing and 7° for combat, and flaps set at 35° for landing and 10° for combat. An air bag fills the space that the swept-back wing would occupy when the wing is in the forward position, and a flexible fairing on top of the wing smooths the shape transition between the fuselage and the upper wing area. The twin-tail layout helps in maneuvers at high angle of attack (AoA) while reducing the height of the aircraft to fit within the limited roof clearance of hangars aboard aircraft carriers.
Two triangular retractable surfaces, called glove vanes, were originally mounted in the forward part of the wing glove and could be extended automatically by the flight control system at high Mach numbers. They generated additional lift ahead of the aircraft's center of gravity, helping to compensate for Mach tuck at supersonic speeds. Deployed automatically above Mach 1.4, they allowed the F-14 to pull 7.5 g at Mach 2, and they could be extended manually with the wings swept fully aft. They were later disabled, however, owing to their additional weight and complexity. The air brakes consist of top and bottom extendable surfaces at the rearmost portion of the fuselage, between the engine nacelles. The bottom surface is split into left and right halves; the tailhook hangs between the two halves, an arrangement sometimes called the "castor tail".
The F-14 was initially equipped with two Pratt & Whitney TF30 (or JTF10A) augmented turbofan engines, each rated at 20,900 lb (93 kN) of thrust, which enabled the aircraft to attain a maximum speed of Mach 2.34. The F-14 normally flew at a lower cruising speed to reduce fuel consumption, which was important for lengthy patrol missions. The rectangular air inlets for the engines were equipped with movable ramps and bleed doors to meet the engines' differing airflow requirements from takeoff to maximum supersonic speed. De Laval nozzles were fitted to the engines' exhausts.
The performance of the TF30 engine drew criticism. John Lehman, Secretary of the Navy in the 1980s, told the U.S. Congress that the TF30/F-14 combination was "probably the worst engine/airframe mismatch we have had in years" and that the TF30 was "a terrible engine"; 28% of all F-14 accidents were attributed to the engine. A high frequency of turbine blade failures led to the reinforcement of the entire engine bay to limit damage from such failures. The engines also proved extremely prone to compressor stalls, which could easily result in loss of control, severe yaw oscillations, and an unrecoverable flat spin. At certain altitudes, exhaust produced by missile launches could cause an engine compressor stall; this led to the development of a bleed system that temporarily blocks the frontal intake ramp and reduces engine power during missile launch. With the TF30, the F-14's overall thrust-to-weight ratio at maximum takeoff weight is around 0.56, considerably less than the F-15A's ratio of 0.85; when fitted with the General Electric F110 engine, an improved thrust-to-weight ratio of 0.73 at maximum weight and 0.88 at normal takeoff weight was achieved. Despite the large difference in thrust, the F-14A and the F-14B and F-14D with the newer General Electric F110 engines were rated at the same top speed.
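The quoted thrust-to-weight figures can be sanity-checked from the engine ratings. The per-engine TF30 thrust comes from the text; the maximum takeoff weight used below (about 74,350 lb) is a commonly cited figure that the text does not state, so treat this as a sketch under that assumption.

```python
# Sanity-check of the quoted thrust-to-weight ratios for the F-14.
# The 20,900 lbf per-engine TF30 rating comes from the text; the
# 74,350 lb maximum takeoff weight is an assumed, commonly cited figure.

TF30_THRUST_LBF = 20_900      # per engine, with afterburner (from text)
MTOW_LB = 74_350              # assumed maximum takeoff weight

tw_tf30 = 2 * TF30_THRUST_LBF / MTOW_LB
print(f"TF30 thrust/weight at MTOW: {tw_tf30:.2f}")   # ~0.56, matching the text

# Working backwards from the quoted F110 ratio of 0.73 at maximum weight
# gives the per-engine thrust that ratio implies.
implied_f110_thrust = 0.73 * MTOW_LB / 2
print(f"Implied F110 thrust per engine: {implied_f110_thrust:,.0f} lbf")
```

The implied F110 figure of roughly 27,000 lbf per engine is consistent with the "significant increase in thrust" over the TF30 described later in the text.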
The wings have a two-spar structure with integral fuel tanks. Around 25% of the structure is made of titanium, including the wing box, wing pivots, and upper and lower wing skins; this is a light, rigid, and strong material. Electron beam welding was used in the construction of the titanium parts.
The landing gear is very robust, to withstand the catapult launches and arrested recoveries of carrier operations. It comprises a double nosewheel and widely spaced single main wheels. There are no hardpoints on the sweeping parts of the wings, so all armament is fitted on the belly between the air intake ramps and on pylons under the wing gloves. Internal fuel is carried in each wing, in a series of tanks aft of the cockpit, and in two feeder tanks. The F-14 can carry two external drop tanks under the engine intake ramps. There is also an air-to-air refueling probe, which folds into the starboard nose.
The cockpit has two seats, arranged in tandem, outfitted with Martin-Baker GRU-7A rocket-propelled ejection seats, rated from zero altitude and zero airspeed up to 450 knots. The canopy is spacious, and fitted with four mirrors to effectively provide all-round visibility. Only the pilot has flight controls; the flight instruments themselves are of a hybrid analog-digital nature. The cockpit also features a head-up display (HUD) to show primarily navigational information; several other avionics systems such as communications and direction-finders are integrated into the AWG-9 radar's display. A feature of the F-14 is its Central Air Data Computer (CADC), designed by Garrett AiResearch, that forms the onboard integrated flight control system. It uses a MOSFET-based Large-Scale Integration chipset.
The aircraft's large nose contains the two-person crew and several bulky avionics systems. The main element is the Hughes AN/AWG-9 X-band radar, whose wide planar-array antenna has integrated identification friend or foe (IFF) antennas. The AWG-9 has several search and tracking modes, such as track-while-scan (TWS), range-while-search (RWS), pulse-Doppler single-target track (PDSTT), and jam angle track (JAT); a maximum of 24 targets can be tracked simultaneously, and six can be engaged simultaneously in TWS mode. Cruise missiles are also possible targets for the AWG-9, which can lock onto and track small objects even at low altitude when in pulse-Doppler mode. For the F-14D, the AWG-9 was replaced by the upgraded APG-71 radar. The Joint Tactical Information Distribution System (JTIDS)/Link 16 for data communications was added later.
The F-14 also features electronic countermeasures (ECM) and radar warning receiver (RWR) systems, chaff/flare dispensers, a fighter-to-fighter data link, and a precise inertial navigation system. The early navigation system was inertial-based: point-of-origin coordinates were programmed into a navigation computer, and gyroscopes tracked the aircraft's every motion to calculate distance and direction from that starting point. The Global Positioning System was later integrated to provide more precise navigation and redundancy in case either system failed. The chaff/flare dispensers are located on the underside of the fuselage and on the tail. The RWR system consists of several antennas on the aircraft's fuselage, which can roughly calculate both the direction and distance of emitting enemy radars; it can also differentiate between search radar, tracking radar, and missile-homing radar.
The sensor suite originally featured the AN/ALR-23, an infrared search and track sensor using indium antimonide detectors, mounted under the nose; this was replaced by an optical system, Northrop's AAX-1, also designated TCS (TV Camera Set). The AAX-1 helps pilots visually identify and track aircraft, with the greatest range against large aircraft. The radar and the AAX-1 are linked, allowing one sensor to follow the direction of the other. A dual infrared/optical detection system was adopted on the later F-14D.
The F-14 was designed to combat highly maneuverable aircraft as well as the Soviet anti-ship cruise missile and bomber (Tupolev Tu-16, Tupolev Tu-22, Tupolev Tu-22M) threats. The Tomcat was to be a platform for the AIM-54 Phoenix, but unlike the canceled F-111B, it could also engage medium- and short-range threats with other weapons; the F-14 is an air superiority fighter, not just a long-range interceptor. Stores for combat missions are carried on several hardpoints under the fuselage and under the wing gloves. Commonly, this means a maximum of two to four Phoenixes or Sparrows on the belly stations, two Phoenixes or Sparrows on the wing hardpoints, and two Sidewinders on the wing hardpoints. The F-14 is also fitted with an internal 20 mm M61 Vulcan Gatling-type cannon.
Operationally, the capability to hold up to six Phoenix missiles was never used, although early testing was conducted; there was never a threat requirement to engage six hostile targets simultaneously and the load was too heavy to safely recover aboard an aircraft carrier in the event that the missiles were not fired. During the height of Cold War operations in the late 1970s and 1980s, the typical weapon loadout on carrier-deployed F-14s was usually two AIM-54 Phoenixes, augmented by two AIM-9 Sidewinders, three AIM-7 Sparrow IIIs, a full loadout of 20 mm ammunition and two drop tanks. The Phoenix missile was used twice in combat by the U.S. Navy, both over Iraq in 1999, but the missiles did not score any kills.
Iran made use of the Phoenix system, claiming dozens of kills with it during the 1980–1988 Iran–Iraq War. Due to the shortage of air-to-air missiles as a result of sanctions, Iran tried to use other missiles on the Tomcat. It attempted to integrate the Russian R-27R "Alamo" BVR missile, but was apparently unsuccessful. In 1985, Iran started Project Sky Hawk, attempting to adapt I-Hawk surface-to-air missiles, which Iran had in its inventory, for F-14s. The modified missiles were successfully tested in 1986 and one or two were used in combat, but the project was abandoned due to guidance problems.
The F-14 began replacing the F-4 Phantom II in U.S. Navy service in September 1974 with squadrons VF-1 "Wolfpack" and VF-2 "Bounty Hunters", which participated in the American withdrawal from Saigon. The F-14 had its first kills in U.S. Navy service on 19 August 1981 over the Gulf of Sidra, in what is known as the Gulf of Sidra incident, when two F-14s from VF-41 Black Aces were engaged by two Libyan Su-22 "Fitters". The F-14s evaded the short-range heat-seeking AA-2 "Atoll" missile and returned fire, downing both Libyan aircraft. U.S. Navy F-14s were again pitted against Libyan aircraft on 4 January 1989, when two F-14s from VF-32 shot down two Libyan MiG-23 "Floggers" over the Gulf of Sidra in a second Gulf of Sidra incident.
Its first sustained combat use was as a photo reconnaissance platform. The Tomcat was selected to inherit the reconnaissance mission upon the departure of the dedicated RA-5C Vigilantes and RF-8G Crusaders from the fleet. A large pod called the Tactical Airborne Reconnaissance Pod System (TARPS) was developed and fielded on the Tomcat in 1981. With the retirement of the last RF-8G Crusaders in 1982, TARPS F-14s became the U.S. Navy's primary tactical reconnaissance system. One of the two Tomcat squadrons per air wing was designated as the TARPS unit and received three TARPS-capable aircraft and training for four TARPS aircrews.
While the Tomcat was being used by Iran in its intended air superiority mission against Iraq in the early 1980s, the U.S. Navy found itself flying regular daily combat missions over Lebanon to photograph activity in the Bekaa Valley. At the time, the Tomcat had been thought too large and vulnerable to be used over land, but the need for imagery was so great that Tomcat aircrews developed high-speed, medium-altitude tactics to deal with the considerable AAA and SA-7 SAM threat in the Bekaa area. The first exposure of a Navy Tomcat to an SA-2 missile was over Somalia in April 1983, when a local battery was unaware of two Tomcats scheduled for a TARPS mission in a prelude to an upcoming international exercise in the vicinity of Berbera. An SA-2 was fired at the second Tomcat while it was flying a 10,000 ft mapping profile at maximum-conserve power setting; the aircrew spotted the missile launch and dove for the deck, evading it without damage. The unexpected demand for combat TARPS paved the way for high-altitude sensors such as the KA-93 Long Range Optics (LOROP) camera to be rapidly procured for the Tomcat, as well as an Expanded Chaff Adapter (ECA) incorporated in an AIM-54 Phoenix rail. Commercial "fuzz buster" type radar detectors were also procured and mounted in pairs in the forward cockpit as a stopgap means of detecting SAM radars such as the SA-6. The ultimate solution was an upgrade to the ALR-67 then being developed, but it would not be ready until the advent of the F-14A+ later in the 1980s.
The participation of the F-14 in the 1991 Operation Desert Storm consisted of combat air patrol (CAP) over the Red Sea and the Persian Gulf and overland missions of strike escort and reconnaissance. Until the waning days of Desert Storm, in-country air superiority was tasked to USAF F-15 Eagles, as the Air Tasking Orders (ATO) delegated primary overland CAP stations to the F-15. The governing Rules of Engagement (ROE) also dictated a strict identification friend or foe (IFF) requirement when employing beyond-visual-range weapons such as the AIM-7 Sparrow and particularly the AIM-54 Phoenix, which hampered the Tomcat from using its most powerful weapon. Furthermore, the powerful emissions from the AWG-9 radar are detectable at great range with a radar warning receiver, and Iraqi fighters routinely retreated as soon as the Tomcats "lit them up" with the AWG-9. The U.S. Navy suffered its only F-14 loss to enemy action on 21 January 1991, when BuNo 161430, an F-14A upgraded to an F-14A+, from VF-103 was shot down by an SA-2 surface-to-air missile on an escort mission near Al Asad airbase in Iraq. Both crew members survived ejection; the pilot was rescued by USAF special operations forces, while the RIO was captured by Iraqi troops and held as a POW until the end of the war. The F-14 also achieved its final kill in U.S. service, an Mi-8 "Hip" helicopter, with an AIM-9 Sidewinder.
In 1995, F-14s from VF-14 and VF-41 participated in Operation Deliberate Force, and in Operation Allied Force in 1999; in 1998, VF-32 and VF-213 participated in Operation Desert Fox. On 15 February 2001, the Joint Direct Attack Munition (JDAM) was added to the Tomcat's arsenal. On 7 October 2001, F-14s led some of the first strikes into Afghanistan, marking the start of Operation Enduring Freedom, and the first F-14 drop of a JDAM occurred on 11 March 2002. F-14s from VF-2, VF-31, VF-32, VF-154, and VF-213 also participated in Operation Iraqi Freedom. The F-14Ds of VF-2, VF-31, and VF-213 obtained JDAM capability in March 2003. On 10 December 2005, the F-14Ds of VF-31 and VF-213 were upgraded with a ROVER III downlink for transmitting images to a ground forward air controller (FAC). The Navy decided to retire the F-14, with the F/A-18E/F Super Hornet filling the fleet defense and strike roles formerly filled by the F-14.
The last American F-14 combat mission was completed on 8 February 2006, when a pair of Tomcats landed aboard USS "Theodore Roosevelt" after one dropped a bomb over Iraq. During their final deployment with "Theodore Roosevelt", VF-31 and VF-213 collectively completed 1,163 combat sorties totaling 6,876 flight hours and dropped ordnance during reconnaissance, surveillance, and close air support missions in support of Operation Iraqi Freedom. USS "Theodore Roosevelt" launched an F-14D of VF-31 for the last time on 28 July 2006, piloted by Lt. Blake Coleman with Lt. Cmdr. Dave Lauderbaugh as RIO. The last two F-14 squadrons, the VF-31 Tomcatters and the VF-213 Black Lions, conducted their last fly-in at Naval Air Station Oceana on 10 March 2006.
The official final flight retirement ceremony was on 22 September 2006 at Naval Air Station Oceana and was flown by Lt. Cmdr. Chris Richard and Lt. Mike Petronis as RIO in a backup F-14 after the primary aircraft experienced mechanical problems. The actual last flight of an F-14 in U.S. service took place 4 October 2006, when an F-14D of VF-31 was ferried from NAS Oceana to Republic Airport on Long Island, New York. The remaining intact F-14 aircraft in the U.S. were flown to and stored at the 309th Aerospace Maintenance and Regeneration Group "Boneyard", at Davis-Monthan Air Force Base, Arizona; in 2007 the U.S. Navy announced plans to shred the remaining F-14s to prevent any components from being acquired by Iran. In August 2009, the 309th AMARG stated that the last aircraft were taken to HVF West, Tucson, Arizona for shredding. At that time only 11 F-14s remained in desert storage.
The sole foreign customer for the Tomcat was the Imperial Iranian Air Force (IIAF), during the reign of the last Shah of Iran, Mohammad Reza Pahlavi. In the early 1970s, the IIAF was searching for an advanced fighter, specifically one capable of intercepting Soviet MiG-25 reconnaissance flights. After a visit by U.S. President Richard Nixon to Iran in 1972, during which Iran was offered the latest in American military technology, the IIAF narrowed its choice to either the F-14 Tomcat or the McDonnell Douglas F-15 Eagle. Grumman arranged a competitive demonstration of the Eagle against the Tomcat before the Shah, and in January 1974 Iran ordered 30 F-14s and 424 AIM-54 Phoenix missiles, initiating Project "Persian King", worth US$300 million. A few months later, this order was increased to a total of 80 Tomcats and 714 Phoenix missiles, as well as spare parts and replacement engines for 10 years, a complete armament package, and support infrastructure (including construction of Khatami Air Base near Isfahan).
The first F-14 arrived in January 1976, modified only by the removal of classified avionics components but fitted with the TF30-414 engines; 12 more were delivered the following year. Meanwhile, training of the first groups of Iranian crews by the U.S. Navy was underway in the United States; one of these crews conducted a successful Phoenix shoot-down of a high-flying target drone.
Following the overthrow of the Shah in 1979, the air force was renamed the Islamic Republic of Iran Air Force (IRIAF) and the post-revolution Interim Government of Iran canceled most Western arms orders. In 1980, an Iranian F-14 shot down an Iraqi Mil Mi-25 helicopter for its first air-to-air kill during the Iran–Iraq War (1980–1988). According to research by Tom Cooper, Iranian F-14s scored at least 50 air-to-air victories in the first six months of the war against Iraqi MiG-21s, MiG-23s, and some Su-20s/22s. During the same period, only one Iranian F-14 suffered damage after being hit by debris from a nearby MiG-21 that exploded.
Iranian Tomcats were originally used as an early-warning platform, assisting other, less sophisticated aircraft with targeting and defense. They were also crucial to the defense of areas deemed vital by the Iranian government, such as the oil terminals on Kharg Island and industrial infrastructure in the capital, Tehran. Many of these patrols were supported by Boeing 707-3J9C in-flight refueling tankers. As fighting escalated between 1982 and 1986, the F-14s gradually became more involved in the battle. They performed well, but their primary role was to intimidate the Iraqi Air Force while avoiding heavy engagement, to preserve the fleet's numbers; their presence was often enough to drive away opposing Iraqi fighters. The precision and effectiveness of the Tomcat's AWG-9 weapons system and AIM-54A Phoenix long-range air-to-air missiles enabled the F-14 to maintain air superiority. In December 1980, an Iraqi MiG-21bis accounted for the first and only confirmed kill of an F-14 by that type of aircraft. On 11 August 1984, a MiG-23ML shot down an F-14 using an R-60 missile, and in another engagement a MiG-23ML shot down a second F-14 on 17 January 1987.
By 1987, the Iraqis had suffered heavy losses and were forced to find a way to level the playing field. They obtained Mirage F1EQ-6 fighters from France in 1988, armed with Super 530D and Magic Mk.2 air-to-air missiles; the Mirage F1 fighters were eventually responsible for three confirmed F-14 kills. The IRIAF attempted to keep 60 F-14s operational throughout the war, but reports indicate this number had fallen to 30 by 1986, with only half fully mission-capable.
Based on research by Tom Cooper and Farzad Bishop, Iranian F-14s shot down at least 160 Iraqi aircraft during the Iran–Iraq War, including 58 MiG-23s, 33 Mirage F1s, 23 MiG-21s, 23 Su-20s/22s, nine MiG-25s, five Tu-22s, two MiG-27s, one Mil Mi-24, one Dassault Mirage 5, one B-6D, one Aérospatiale Super Frelon, and two unidentified aircraft. Despite the circumstances its crews faced during the war against Iraq, lacking support from AWACS, AEW aircraft, and ground control intercept (GCI), the F-14 proved successful in combat. It achieved this against an enemy that was constantly upgrading its capabilities and receiving support from three major countries: France, the US, and the USSR. Part of the success is attributed to the resilience of the Iranian economy and of IRIAF personnel.
While Iraq's army claimed to have shot down more than 70 F-14s, the Foreign Broadcast Information Service in Washington, DC estimated that Iran lost 12 to 16 during the war. Cooper writes that only three F-14s were shot down by Iraqi aircraft and four by Iranian surface-to-air missiles (SAMs); two Tomcats were lost in unknown circumstances, and seven crashed due to technical failure or accidents. By this accounting, the Iranian Air Force suffered nine confirmed wartime F-14 losses: one to engine stall, one in unknown circumstances, two to Iranian Hawk SAMs, two to MiG-23s, and three to Mirage F1EQs. There are also unconfirmed reports of the downing of 10 more Tomcats.
On 31 August 1986, an Iranian F-14A armed with at least one AIM-54A missile defected to Iraq. In addition, one or more of Iran's F-14A was delivered to the Soviet Union in exchange for technical assistance; at least one of its crew defected to the Soviet Union.
Iran had an estimated 44 F-14s in 2009 according to Combat Aircraft. Aviation Week estimated it had 19 operational F-14s in January 2013, and Flight Global estimated that 28 were in service in 2014.
Following the U.S. Navy's retirement of its Tomcats in 2006, Iran sought to purchase spare parts for its aircraft. In January 2007, the U.S. Department of Defense announced that sales of spare F-14 parts would be suspended over concerns that the parts would end up in Iran. In July 2007, the remaining American F-14s were shredded to ensure that no parts could be acquired. Despite these measures, Iran managed to significantly increase its stocks of spare parts, increasing the number of airworthy Tomcats; however, as it could not obtain spare parts for the aircraft's weapon systems, the number of combat-ready Tomcats remained low (seven in 2008). In 2010, Iran requested that the U.S. deliver the 80th F-14 that it had purchased in 1974 but never received due to the Islamic Revolution. In October 2010, an Iranian Air Force commander claimed that the country overhauls and optimizes various types of military aircraft, mentioning that the Air Force had installed Iranian-made radar systems on the F-14. In 2012, the Iranian Air Force's Mehrabad Overhaul Center delivered an F-14 with weapon systems upgraded with locally sourced components, designated F-14AM. Shortages of Phoenix missiles led to attempts to integrate the Russian R-27 semi-active radar-guided missile, but these proved unsuccessful. An alternative was the use of modified MIM-23 Hawk missiles to replace the Tomcat's Phoenixes and Sparrows, but as the Tomcat could carry only two Hawks, this project was also abandoned; instead the Fakour-90 missile, which packaged the guidance system of the Hawk into the airframe of the Phoenix, was launched. Pre-production Fakour-90s were delivered in 2017, and a production order for 100 missiles (now designated AIM-23B) was placed in 2018, intended to replace the F-14's AIM-7E Sparrow missiles.
On 26 January 2012, an Iranian F-14 crashed three minutes after takeoff. Both crew members were killed.
In November 2015, Iranian F-14s were reported flying escort for Russian Tu-95 bombers conducting air strikes in Syria against the Islamic State of Iraq and the Levant.
A total of 712 F-14s were built from 1969 to 1991. F-14 assembly and test flights were performed at Grumman's plant in Calverton on Long Island, New York. Grumman's facility at nearby Bethpage, New York, was directly involved in F-14 manufacturing and was home to its engineers. The airframes were partially assembled in Bethpage and then shipped to Calverton for final assembly. Various tests were also performed at the Bethpage plant. Over 160 of the U.S. aircraft were destroyed in accidents.
The F-14A was the initial two-seat, twin-engine, all-weather interceptor fighter variant for the U.S. Navy. It first flew on 21 December 1970. The first 12 F-14As were prototype versions (sometimes called YF-14As). Modifications late in its service life added precision strike munitions to its armament. The U.S. Navy received 478 F-14A aircraft and 79 were received by Iran. The final 102 F-14As were delivered with improved Pratt & Whitney TF30-P-414A engines. Additionally, an 80th F-14A was manufactured for Iran, but was delivered to the U.S. Navy.
The F-14 received the first of many major upgrades in March 1987 with the F-14A Plus (or F-14A+). The F-14A's TF30 engine was replaced with the improved GE F110-GE-400 engine. The F-14A+ also received the state-of-the-art ALR-67 Radar Homing and Warning (RHAW) system. Most of the avionics components, as well as the AWG-9 radar, were retained. The F-14A+ was later redesignated F-14B on 1 May 1991. A total of 38 new aircraft were manufactured and 48 F-14As were upgraded into B variants.
The TF30 had been plagued from the start by susceptibility to compressor stalls at high angles of attack, during rapid throttle transients, and at high altitude. The F110-400 engine provided a significant increase in afterburning thrust over the TF30 at sea level, with output rising further at Mach 0.9. The increased thrust gave the Tomcat a better than 1:1 thrust-to-weight ratio at low fuel quantities. The basic engine thrust without afterburner was powerful enough for carrier launches, further increasing safety. Another benefit was allowing the Tomcat to cruise comfortably at higher altitudes, which increased its range and survivability. The F-14B arrived in time to participate in Desert Storm.
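As a rough illustration of the better-than-1:1 thrust-to-weight claim, the sketch below plugs assumed round numbers into the ratio. All figures here are illustrative placeholders, not official performance data.

```python
# Illustrative thrust-to-weight sketch for an F110-powered F-14 at low fuel.
# The thrust and weight values below are assumptions for demonstration only.
AB_THRUST_PER_ENGINE_LBF = 23_400   # assumed sea-level afterburning thrust
EMPTY_WEIGHT_LB = 43_700            # assumed empty weight
CREW_AND_RESIDUAL_LB = 2_000        # assumed crew, fluids, reserve fuel

total_thrust = 2 * AB_THRUST_PER_ENGINE_LBF
weight = EMPTY_WEIGHT_LB + CREW_AND_RESIDUAL_LB
ratio = total_thrust / weight
print(f"T/W ≈ {ratio:.2f}")         # prints T/W ≈ 1.02
```

Under these assumed figures the ratio edges just above 1.0, which is what allows a lightly fueled aircraft to accelerate in a vertical climb.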
In the late 1990s, 67 F-14Bs were upgraded to extend airframe life and improve offensive and defensive avionics systems. The modified aircraft became known as "F-14B Upgrade".
The final variant of the F-14 was the F-14D Super Tomcat. The F-14D variant was first delivered in 1991. The original Pratt & Whitney TF30 engines were replaced with General Electric F110-GE-400 engines, similar to the F-14B. The F-14D also included newer digital avionics systems including a glass cockpit and replaced the AWG-9 with the newer AN/APG-71 radar. Other systems included the Airborne Self Protection Jammer (ASPJ), Joint Tactical Information Distribution System (JTIDS), SJU-17(V) Naval Aircrew Common Ejection Seats (NACES), and Infra-red search and track (IRST).
The GE F110-GE-400 engine provided increased thrust and additional endurance to extend range or to stay on station much longer. In the overland attack role this gave the F-14D 60 percent more striking range or one-third more time on station. The rate of climb was increased by 61 percent. The F110's increased thrust allowed almost all carrier launches to be made in military (dry) power. While this did result in fuel savings, the main reason not to use afterburner during carrier launches was that if an engine failed the F110's thrust in full afterburner would produce a yawing moment too abruptly for the pilot to correct. Thus the launch of an F-14D with afterburner was rare, while the F-14A required full afterburner unless very lightly loaded.
Although the F-14D was to be the definitive version of the Tomcat, not all fleet units received the D variant. In 1989, Secretary of Defense Dick Cheney refused to approve the purchase of any more F-14D aircraft at $50 million each and pushed for a $25 million modernization of the F-14 fleet instead. Congress decided not to shut production down and funded 55 aircraft as part of a compromise. A total of 37 new aircraft were completed, and 18 F-14A models were upgraded to D-models, designated F-14D(R) for "rebuild". An upgrade to the F-14D's computer software to allow AIM-120 AMRAAM missile capability was planned but was later terminated.
While upgrades had kept the F-14 competitive with modern fighter aircraft technology, Cheney called the F-14 1960s technology. Despite an appeal from the Secretary of the Navy for at least 132 F-14Ds and some aggressive proposals from Grumman for a replacement, Cheney planned to replace the F-14 with a fighter that was not manufactured by Grumman. Cheney called the F-14 a "jobs program", and when the F-14 was canceled, an estimated 80,000 jobs of Grumman employees, subcontractors, or support personnel were affected. Starting in 2005, some F-14Ds received the ROVER III upgrade.
The first "F-14B" was to be an improved version of the F-14A with more powerful "Advanced Technology Engine" F401 turbofans. The "F-14C" was a projected variant of this initial F-14B with advanced multi-mission avionics. Grumman also offered an interceptor version of the F-14B in response to the U.S. Air Force's Improved Manned Interceptor Program to replace the Convair F-106 Delta Dart as an Aerospace Defense Command interceptor in the 1970s. The F-14B program was terminated in April 1974.
Grumman proposed a few improved "Super Tomcat" versions. The first was the "Quickstrike", which was an F-14D with navigational and targeting pods, additional attach points for weapons, and added ground attack capabilities to its radar. The Quickstrike was to fill the role of the A-6 Intruder after it was retired. This was not considered enough of an improvement by Congress, so the company shifted to the "Super Tomcat 21" proposed design. The Super Tomcat 21 was a proposed lower cost alternative to the Navy Advanced Tactical Fighter (NATF). The Grumman design would have the same shape and body as the Tomcat, and an upgraded AN/APG-71 radar. New GE F110-129 engines were to provide a supercruise speed of Mach 1.3 and featured thrust vectoring nozzles. The version would have increased fuel capacity and modified control surfaces for improved takeoffs and lower landing approach speed. The "Attack Super Tomcat 21" version was the last Super Tomcat proposed design. It added even more fuel capacity, more improvements to control surfaces, and possibly an active electronically scanned array (AESA) radar from the canceled A-12 attack aircraft.
The last "Tomcat" variant was the "ASF-14" (Advanced Strike Fighter-14), Grumman's replacement for the NATF concept. By all accounts, it would not be even remotely related to the previous Tomcats save in appearance, incorporating the new technology and design know-how from the Advanced Tactical Fighter (ATF) and Advanced Tactical Aircraft (ATA) programs. The ASF-14 would have been a new-build aircraft; however, its projected capabilities were not that much better than that of the (A)ST-21 variants. In the end, the Attack Super Tomcat was considered to be too costly. The Navy decided to pursue the cheaper F/A-18E/F Super Hornet to fill the fighter-attack role.
Notable F-14s preserved at museums and military installations include:
The Tomcat logo design came about when Grumman's Director of Presentation Services, Dick Milligan, and one of his artists, Grumman employee Jim Rodriguez, were asked for a logo by Grumman's Director of Business Development and former Blue Angels No. 5 pilot, Norm Gandia. Per Rodriguez, "He asked me to draw a lifelike Tomcat wearing boxing gloves and trunks sporting a six-shooter on his left side; where the guns are located on the F-14, along with two tails." The cat was drawn from photographs of a tabby sourced for the purpose and named "Tom". The logo has gone through many variations, including one for the then–Imperial Iranian Air Force F-14, called "Ali-cat". The accompanying slogan "Anytime Baby!" was developed by Norm Gandia as a challenge to the U.S. Air Force's McDonnell Douglas F-15 Eagle.
The Grumman F-14 Tomcat was central to the 1986 film "Top Gun". The aviation-themed film was such a success in creating interest in naval aviation that the US Navy, which assisted with the film, set up recruitment desks outside some theaters. Producers paid the US Navy $886,000 as reimbursement for flight time of aircraft in the film with an F-14 billed at $7,600 per flight hour.
Two F-14As of VF-84 from the USS "Nimitz" appeared in the 1980 film "The Final Countdown", with four from the squadron in the 1996 release "Executive Decision", the Jolly Rogers' final film appearance before being disestablished. The military legal drama TV series "JAG" (1995–2005) featured lead character Harmon Rabb, a Tomcat pilot-turned-lawyer.
Multiple F-14s appear in the 2008 documentary "Speed and Angels", which features the story of two young Navy recruits working to achieve their dream of becoming F-14 fighter pilots.
Lockheed F-117 Nighthawk
The Lockheed F-117 Nighthawk is an American single-seat, twin-engine stealth attack aircraft that was developed by Lockheed's secretive Skunk Works division and operated by the United States Air Force (USAF). The F-117 was based on the "Have Blue" technology demonstrator.
The Nighthawk was the first operational aircraft to be designed around stealth technology. Its maiden flight took place in 1981 at Groom Lake, Nevada, and the aircraft achieved initial operating capability status in 1983. The Nighthawk was shrouded in secrecy until it was revealed to the public in 1988. Of the 64 F-117s built, 59 were production versions, with the other five being prototypes.
The F-117 was widely publicized for its role in the Persian Gulf War of 1991. Although it was commonly referred to as the "Stealth Fighter", it was strictly a ground-attack aircraft. F-117s took part in the conflict in Yugoslavia, where one was shot down by a surface-to-air missile (SAM) in 1999; it was the only Nighthawk to be lost in combat. The U.S. Air Force retired the F-117 in April 2008, primarily due to the fielding of the F-22 Raptor. Despite the type's retirement, a portion of the fleet has been kept in airworthy condition, and Nighthawks have been observed flying in 2020.
In 1964, Pyotr Ufimtsev, a Soviet mathematician, published a seminal paper titled "Method of Edge Waves in the Physical Theory of Diffraction" in the journal of the Moscow Institute for Radio Engineering, in which he showed that the strength of the radar return from an object is related to its edge configuration, not its size. Ufimtsev was extending theoretical work published by the German physicist Arnold Sommerfeld. Ufimtsev demonstrated that he could calculate the radar cross-section across a wing's surface and along its edge. The obvious and logical conclusion was that even a large aircraft could reduce its radar signature by exploiting this principle. However, the resulting design would make the aircraft aerodynamically unstable, and the state of computer technology in the early 1960s could not provide the kinds of flight computers which would later allow aircraft such as the F-117 and B-2 Spirit to stay airborne. By the 1970s, when Lockheed analyst Denys Overholser found Ufimtsev's paper, computers and software had advanced significantly, and the stage was set for the development of a stealth airplane.
The F-117 was born after combat experience in the Vietnam War when increasingly sophisticated Soviet surface-to-air missiles (SAMs) downed heavy bombers. It was a black project, an ultra-secret program for much of its life: very few people in the Pentagon knew the program even existed, until the F-117s were revealed to the public in 1988. The project began in 1975 with a model called the "Hopeless Diamond" (a wordplay on the Hope Diamond because of its appearance). The following year, the Defense Advanced Research Projects Agency (DARPA) issued Lockheed Skunk Works a contract to build and test two stealth strike fighters, under the code name "Have Blue". These subscale aircraft incorporated jet engines of the Northrop T-38A, fly-by-wire systems of the F-16, landing gear of the A-10, and environmental systems of the C-130. By bringing together existing technology and components, Lockheed built two demonstrators under budget, at $35 million for both aircraft, and in record time.
The maiden flight of the demonstrators occurred on 1 December 1977. Although both aircraft were lost during the demonstration program, test data proved positive. The success of "Have Blue" led the government to increase funding for stealth technology. Much of that increase was allocated towards the production of an operational stealth aircraft, the Lockheed F-117A, under the program code name "Senior Trend".
The decision to produce the F-117A was made on 1 November 1978, and a contract was awarded to Lockheed Advanced Development Projects, popularly known as the Skunk Works, in Burbank, California. The program was led by Ben Rich, with Alan Brown as manager of the project. Rich called on Bill Schroeder, a Lockheed mathematician, and Denys Overholser, a computer scientist, to exploit Ufimtsev's work. The three designed a computer program called "Echo", which made it possible to design an airplane with flat panels, called facets, which were arranged so as to scatter over 99% of a radar's signal energy "painting" the aircraft.
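The flat-panel idea behind a program like Echo can be illustrated with a few lines of vector math. The sketch below is a toy demonstration of specular reflection off a single facet, not a reconstruction of the Echo software; the tilt angle is an arbitrary example.

```python
import numpy as np

def reflect(incident, normal):
    """Specular reflection of a ray direction off a flat facet."""
    n = normal / np.linalg.norm(normal)
    d = incident / np.linalg.norm(incident)
    return d - 2 * np.dot(d, n) * n

# A radar at +x illuminates the facet head-on; the facet is tilted 30 degrees.
theta = np.radians(30)
facet_normal = np.array([np.cos(theta), np.sin(theta), 0.0])
incoming = np.array([-1.0, 0.0, 0.0])   # ray travelling toward the facet

bounce = reflect(incoming, facet_normal)
# bounce ≈ [0.5, 0.866, 0]: the energy leaves at twice the tilt angle (60°),
# well away from the radar's line of sight, so little returns to the receiver.
```

Because each flat panel has a single, easily computed bounce direction, arranging the panels so none of those directions points back at a likely radar is a tractable calculation even on 1970s hardware.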
The first YF-117A, serial number "79-10780", made its maiden flight from Groom Lake ("Area 51"), Nevada, on 18 June 1981, only 31 months after the full-scale development decision. The first production F-117A was delivered in 1982, and operational capability was achieved in October 1983. The 4450th Tactical Group, stationed at Nellis AFB, Nevada, was tasked with the operational development of the early F-117, and between 1981 (prior to the arrival of the first models) and 1989 it used LTV A-7 Corsair IIs for training, to bring all pilots to a common flight training baseline, and later as chase planes for F-117A tests.
The F-117 was secret for much of the 1980s. Many news articles discussed what they called an "F-19" stealth fighter, and the Testor Corporation produced a very inaccurate scale model. When an F-117 crashed in Sequoia National Forest in July 1986, killing the pilot and starting a fire, the Air Force established restricted airspace. Armed guards prohibited entry, including firefighters, and a helicopter gunship circled the site. All F-117 debris was replaced with remains of an F-101A Voodoo crash stored at Area 51. When another fatal crash occurred inside Nellis in October 1987, the military again provided little information to the press.
The Air Force denied the existence of the aircraft until 10 November 1988, when Assistant Secretary of Defense J. Daniel Howard displayed a grainy photograph at a Pentagon press conference, disproving the many inaccurate rumors about the shape of the "F-19". After the announcement pilots could fly the F-117 during daytime and no longer needed to be associated with the A-7, flying the T-38 supersonic trainer for travel and training instead. In April 1990, two F-117 aircraft were flown into Nellis Air Force Base, Nevada, arriving during daylight and publicly displayed to a crowd of tens of thousands.
Five Full Scale Development (FSD) aircraft were built, designated "YF-117A". The last of 59 production F-117s were delivered on 3 July 1990.
As the Air Force has stated, "Streamlined management by Aeronautical Systems Center, Wright-Patterson AFB, Ohio, combined breakthrough stealth technology with concurrent development and production to rapidly field the aircraft... The F-117A program demonstrates that a stealth aircraft can be designed for reliability and maintainability."
The operational aircraft was officially designated "F-117A". Most modern U.S. military aircraft use post-1962 designations in which the designation "F" is usually an air-to-air fighter, "B" is usually a bomber, "A" is usually a ground-attack aircraft, etc. (Examples include the F-15, the B-2, and the A-6.) The F-117 is primarily an attack aircraft, so its "F" designation is inconsistent with the DoD system. This is an inconsistency that has been repeatedly employed by the U.S. Air Force with several of its attack aircraft since the late 1950s, including the Republic F-105 Thunderchief and General Dynamics F-111 Aardvark. A televised documentary quoted project manager Alan Brown as saying that Robert J. Dixon, a four-star Air Force general who was the head of Tactical Air Command felt that the top-notch USAF fighter pilots required to fly the new aircraft were more easily attracted to an aircraft with an "F" designation for fighter, as opposed to a bomber ("B") or attack ("A") designation.
The designation "F-117" seems to indicate that it was given an official designation prior to the 1962 U.S. Tri-Service Aircraft Designation System and could be considered numerically to be a part of the earlier "Century series" of fighters. The assumption prior to the revealing of the aircraft to the public was that it would likely receive the F-19 designation as that number had not been used. However, there were no other aircraft to receive a "100" series number following the F-111. Soviet fighters obtained by the U.S. via various means under the Constant Peg program were given F-series numbers for their evaluation by U.S. pilots, and with the advent of the Teen Series fighters, most often Century Series designations.
As with other exotic military aircraft types flying in the southern Nevada area, such as captured fighters, an arbitrary radio call of "117" was assigned. This same radio call had been used by the enigmatic 4477th Test and Evaluation Squadron, also known as the "Red Hats" or "Red Eagles", that often had flown expatriated MiG jet fighters in the area, but there was no relationship to the call and the formal F-19 designation then being considered by the Air Force. Apparently, use of the "117" radio call became commonplace and when Lockheed released its first flight manual (i.e., the Air Force "dash one" manual for the aircraft), F-117A was the designation printed on the cover.
When the Air Force first approached Lockheed with the stealth concept, Skunk Works Director Kelly Johnson proposed a rounded design. He believed smoothly blended shapes offered the best combination of speed and stealth. However, his assistant, Ben Rich, showed that faceted-angle surfaces would provide significant reduction in radar signature, and the necessary aerodynamic control could be provided with computer units. A May 1975 Skunk Works report, "Progress Report No. 2, High Stealth Conceptual Studies", showed the rounded concept that was rejected in favor of the flat-sided approach.
The resulting unusual design surprised and puzzled experienced pilots. A Royal Air Force (RAF) pilot who flew it as an exchange officer stated that when he first saw a photograph of the still-secret F-117, he "promptly giggled and thought to [himself] 'this clearly can't fly'". Early stealth aircraft were designed with a focus on minimal radar cross-section (RCS) rather than aerodynamic performance. Highly stealthy aircraft like the F-117 Nighthawk are aerodynamically unstable about all three principal axes and require constant corrections from a fly-by-wire (FBW) flight control system to maintain controlled flight. The F-117 is shaped to deflect radar signals and is approximately the size of an F-15 Eagle.
The single-seat Nighthawk is powered by two non-afterburning General Electric F404 turbofan engines. It is air refuelable and features a V-tail. The cockpit was quite spacious, with ergonomic displays and controls, but the field of view was somewhat obstructed, with a large blind spot to the rear.
It has quadruple-redundant fly-by-wire flight controls. To lower development costs, the avionics, fly-by-wire systems, and other parts were derived from the General Dynamics F-16 Fighting Falcon, McDonnell Douglas F/A-18 Hornet and McDonnell Douglas F-15E Strike Eagle. The parts were originally described as spares on budgets for these aircraft, to keep the F-117 project secret.
The aircraft is equipped with sophisticated navigation and attack systems integrated into a digital avionics suite. It navigates primarily by GPS and high-accuracy inertial navigation. Missions are coordinated by an automated planning system that can automatically perform all aspects of an attack mission, including weapons release. Targets are acquired by a thermal imaging infrared system, slaved to a laser rangefinder/laser designator that finds the range and designates targets for laser-guided bombs. The F-117A carries its ordnance in a split internal bay. Typical weapons are a pair of GBU-10, GBU-12, or GBU-27 laser-guided bombs, two BLU-109 penetration bombs, or two Joint Direct Attack Munitions (JDAMs), a GPS/INS-guided stand-off bomb.
The F-117 has a very small radar cross-section. Among the penalties for stealth are lower engine thrust due to losses in the inlet and outlet, a very low wing aspect ratio, and a high sweep angle (50°) needed to deflect incoming radar waves to the sides. With these design considerations and no afterburner, the F-117 is limited to subsonic speeds.
The F-117A carries no radar, which lowers emissions and cross-section, and whether it carries any radar detection equipment is classified. Its faceted shape (made from 2-dimensional flat surfaces) resulted from the limitations of the 1970s-era computer technology used to calculate its radar cross-section. Later supercomputers made it possible for subsequent aircraft like the B-2 bomber to use curved surfaces while maintaining stealth, through the use of far more computational resources to perform the additional calculations.
An exhaust plume contributes a significant infrared signature. The F-117 reduces IR signature with a non-circular tail pipe (a slit shape) to minimize the exhaust cross-section and maximize the mixing of hot exhaust with cool ambient air. The F-117 lacks afterburners, because the hot exhaust would increase the infrared signature, and breaking the sound barrier would produce an obvious sonic boom, as well as surface heating of the aircraft skin which also increases the infrared footprint. As a result, its performance in air combat maneuvering required in a dogfight would never match that of a dedicated fighter aircraft. This was unimportant in the case of this aircraft since it was designed to be a bomber.
Passive (multistatic) radar, bistatic radar and especially multistatic radar systems detect some stealth aircraft better than conventional monostatic radars, since first-generation stealth technology (such as the F-117) reflects energy away from the transmitter's line of sight, effectively increasing the radar cross section (RCS) in other directions, which the passive radars monitor.
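A hedged geometric sketch of why a separated receiver helps (the tilt angle here is an arbitrary illustrative choice, not a property of any real aircraft):

```python
import numpy as np

# Illustrative geometry only: first-generation stealth shaping relies on the
# specular bounce missing a monostatic radar, but the energy still goes
# somewhere, so a receiver sited elsewhere can catch it.
def reflect(d, n):
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

tilt = np.radians(25)                              # assumed facet tilt
normal = np.array([np.cos(tilt), np.sin(tilt), 0.0])
outgoing = reflect(np.array([-1.0, 0.0, 0.0]), normal)

# Alignment of the bounce with the return path to the transmitter at +x
# (1.0 would mean a perfect retro-reflection back into the radar):
monostatic_alignment = np.dot(outgoing, np.array([1.0, 0.0, 0.0]))
# A receiver sited along the outgoing direction is perfectly aligned instead:
bistatic_alignment = np.dot(outgoing, outgoing / np.linalg.norm(outgoing))
# The bounce leaves at twice the tilt (50° off boresight), so the co-located
# receiver sees a reduced return while the offset receiver sees the full one.
```

This is why deflecting energy "away" only lowers the RCS seen from the transmitter's direction; from other bearings the effective cross-section can actually grow.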
During the program's early years, from 1984 to mid-1992, the F-117A fleet was based at Tonopah Test Range Airport, Nevada, where it served under the 4450th Tactical Group. Because the F-117 was classified during this time, the unit was officially located at Nellis Air Force Base, Nevada, and equipped with A-7 Corsair II aircraft. All military personnel were permanently assigned to Nellis AFB, and most personnel and their families lived in Las Vegas. This required commercial air and trucking to transport personnel between Las Vegas and Tonopah each week. The 4450th was absorbed by the 37th Tactical Fighter Wing in 1989. In 1992, the entire fleet was transferred to Holloman Air Force Base, New Mexico, under the command of the 49th Fighter Wing. This move also eliminated the Key Air and American Trans Air contract flights to Tonopah, which flew 22,000 passenger trips on 300 flights from Nellis to Tonopah per month.
The F-117 reached initial operating capability status in 1983. The Nighthawk's pilots called themselves "Bandits". Each of the 558 Air Force pilots who have flown the F-117 has a Bandit number, such as "Bandit 52", that indicates the sequential order of their first flight in the F-117. Pilots told friends and families that they flew the Northrop F-5 in aggressor squadrons against Tactical Air Command.
The F-117 has been used several times in war. Its first mission was during the United States invasion of Panama in 1989. During that invasion two F-117A Nighthawks dropped two bombs on Rio Hato airfield.
During the Gulf War in 1991, the F-117 flew approximately 1,300 sorties and scored direct hits on 1,600 high-value targets in Iraq over 6,905 flight hours. Leaflet drops on Iraqi forces displayed the F-117 destroying ground targets and warned "Escape now and save yourselves". Initial claims of its effectiveness were later found to be overstated. Only 229 Coalition tactical aircraft could drop and designate laser-guided bombs, of which the 36 F-117s represented 15.7%, and only the USAF had the I-2000 bombs intended for hardened targets, so the F-117 represented 32% of all coalition aircraft that could deliver such bombs. Initial reports of F-117s hitting 80% of their targets were later scaled back to "41–60%". On the first night, they failed to hit 40% of their assigned air-defense targets, including the Air Defense Operations Center in Baghdad, and 8 such targets remained functional out of 10 that could be assessed. In their Desert Storm white paper, the USAF stated that "the F-117 was the only airplane that the planners dared risk over downtown Baghdad" and that this area was particularly well defended. In fact, most of the air defenses were on the outskirts of the city, and many other aircraft hit targets in the downtown area with minimal casualties when, like the F-117, they attacked at night. This meant they avoided the optically aimed anti-aircraft cannon and infrared SAMs which were the biggest threat to Coalition aircraft.
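The percentages above can be cross-checked with simple arithmetic; note that the I-2000-capable fleet size computed below is inferred from the stated shares, not a figure given in the source.

```python
# Sanity-check of the Desert Storm laser-guided-bomb figures quoted above.
lgb_capable = 229          # Coalition aircraft able to drop/designate LGBs
f117s = 36

lgb_share = f117s / lgb_capable
assert round(lgb_share * 100, 1) == 15.7   # matches the cited 15.7%

# Back out how many aircraft could deliver I-2000 hardened-target bombs
# if the F-117s made up 32% of that subset (inferred, not sourced):
i2000_capable = f117s / 0.32               # ≈ 112.5 aircraft
```

The two percentages are consistent with each other: a 36-aircraft F-117 fleet is 15.7% of 229 LGB-capable jets, and would be roughly a third of a ~113-strong I-2000-capable subset.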
The aircraft was operated in secret from Tonopah for almost a decade, but after the Gulf War the aircraft moved to Holloman in 1992; even so, its integration with the USAF's non-stealth "iron jets" occurred slowly. Because of ongoing secrecy, as one senior F-117A pilot later put it, others continued to see the aircraft as "none of their business, a stand-alone system". The F-117A and the men and women of the 49th Fighter Wing were deployed to Southwest Asia on multiple occasions. On their first deployment, with the aid of aerial refueling, pilots flew non-stop from Holloman to Kuwait, a flight of approximately 18.5 hours, a record for single-seat fighters that stands today.
One F-117 (AF ser. no. 82-0806) was lost to enemy action. It was downed during a mission against the Army of Yugoslavia on 27 March 1999, during Operation Allied Force. The aircraft was acquired by a fire-control radar at a distance of 13 km and an altitude of 8 km, and missiles were then launched by a Yugoslav version of the Soviet Isayev S-125 "Neva" (NATO reporting name SA-3 "Goa") anti-aircraft missile system. The launcher was run by the 3rd Battalion of the 250th Air Defence Missile Brigade under the command of Colonel Zoltán Dani.
After the explosion, the aircraft became uncontrollable, forcing the pilot to eject. The pilot was recovered six hours later by a United States Air Force Pararescue team. The stealth technology from the downed F-117 may have been acquired by Russia and China, but the United States did not destroy the wreckage because its technology was two decades old.
Some American sources state that a second F-117A was damaged during the same campaign, allegedly on 30 April 1999; the aircraft returned to base, but it supposedly never flew again.
Use of the aircraft as part of Operation Allied Force continued, and the type was later used in Operation Enduring Freedom in 2001 and Operation Iraqi Freedom in 2003; throughout its career it was operated solely by the U.S. Air Force.
The loss in Serbia caused the USAF to create a subsection of their existing weapons school to improve tactics. More training was done with other units, and the F-117A began to participate in Red Flag exercises. Though advanced for its time, the F-117's stealthy faceted airframe required a large amount of maintenance and was eventually superseded by streamlined shapes produced with computer-aided design. Other weapon systems began to take on the F-117's roles, such as the F-22 Raptor gaining the ability to drop guided bombs. By 2005, the aircraft was used only for certain missions, such as if a pilot needed to verify that the correct target had been hit, or when minimal collateral damage was vital.
The Air Force had once planned to retire the F-117 in 2011, but Program Budget Decision 720 (PBD 720), dated 28 December 2005, proposed retiring it by October 2008 to free up an estimated $1.07 billion to buy more F-22s. PBD 720 called for 10 F-117s to be retired in FY2007 and the remaining 42 in FY2008, stating that other Air Force planes and missiles could stealthily deliver precision ordnance, including the B-2 Spirit, F-22 and JASSM. The planned introduction of the multirole F-35 Lightning II also contributed to the retirement decision.
In late 2006, the Air Force closed the F-117 formal training unit (FTU), and announced the retirement of the F-117. The first six aircraft to be retired took their last flight on 12 March 2007 after a ceremony at Holloman AFB to commemorate the aircraft's career. Brigadier General David L. Goldfein, commander of the 49th Fighter Wing, said at the ceremony, "With the launch of these great aircraft today, the circle comes to a close—their service to our nation's defense fulfilled, their mission accomplished and a job well done. We send them today to their final resting place—a home they are intimately familiar with—their first, and only, home outside of Holloman."
Unlike most other Air Force aircraft that are retired to Davis-Monthan AFB for scrapping, or dispersal to museums, most of the F-117s were placed in "Type 1000" storage in their original hangars at the Tonopah Test Range Airport. At Tonopah, their wings were removed and the aircraft are stored in their original climate-controlled hangars. The decommissioning occurred in eight phases, with the operational aircraft retired to Tonopah in seven waves from 13 March 2007 until the last wave's arrival on 22 April 2008. Four aircraft were kept flying beyond April by the 410th Flight Test Squadron at Palmdale for flight test. By August, two were remaining. The last F-117 (AF Serial No. 86-0831) left Palmdale to fly to Tonopah on 11 August 2008. With the last aircraft retired, the 410th was inactivated in a ceremony on 1 August 2008.
Five aircraft were placed in museums, including the first four YF-117As and some remains of the F-117 shot down over Serbia. Through 2009, one F-117 had been scrapped; AF Serial No. 79-0784 was scrapped at the Palmdale test facility on 26 April 2008. It was the last F-117 at Palmdale and was scrapped to test an effective method for destroying F-117 airframes.
Congress had ordered that all F-117s mothballed from 30 September 2006 onwards were to be maintained "in a condition that would allow recall of that aircraft to future service" as part of the 2007 National Defense Authorization Act. By April 2016, lawmakers appeared ready to "remove the requirement that certain F-117 aircraft be maintained in a condition that would allow recall of those aircraft to future service", which would move them from storage to the aerospace maintenance and regeneration yard in Arizona to be scavenged for hard-to-find parts, or completely disassembled. On 11 September 2017, it was reported that in accordance with the National Defense Authorization Act for Fiscal Year 2017, signed into law on 23 December 2016, "the Air Force will remove four F-117s every year to fully divest them—a process known as demilitarizing aircraft".
Although officially retired, the F-117 fleet remains intact and photos show the aircraft carefully mothballed. Some of the aircraft are flown periodically, and have been spotted flying as recently as May 2020. In March 2019, it was reported that four F-117s had been secretly deployed to the Middle East in 2016 and that one had to make an emergency landing at Ali Al Salem (AAS), Kuwait sometime late that year.
In February 2019, an F-117 was observed flying through the R-2508 Special Use Airspace Complex in the vicinity of Edwards Air Force Base, escorted by two F-16 Fighting Falcons that may have been providing top cover. Closer photographs revealed that the tail code had been scrubbed in an attempt to remove the paint; the partially intact code identified it as a former aircraft of the 49th Operations Group. An F-117 was also photographed in 2019 carrying unit markings previously unassociated with the aircraft – a band on the tail bearing the name "Dark Knights" – suggesting either an official or unofficial squadron is maintaining the Nighthawks. In July 2019, one Nighthawk was spotted flying above Death Valley in a hybrid aggressor scheme, trailing behind a KC-135R Stratotanker. In March 2020, a spectator recorded an F-117 flying through a canyon sometimes called the Star Wars Canyon. On 20 May 2020, two more F-117s were sighted in a common aerial refueling area of Southern California trailing an NKC-135R Stratotanker from Edwards AFB, California; the Nighthawks followed the tanker out over the ocean.
The United States Navy tested the F-117 in 1984 but determined it was not suitable for carrier use. In the early 1990s, Lockheed proposed an upgraded, carrier-capable variant of the F-117 dubbed the "Seahawk" to the Navy as an alternative to the canceled A/F-X program. The unsolicited proposal was received poorly by the Department of Defense, which had little interest in the single mission capabilities of such an aircraft, particularly as it would take money away from the Joint Advanced Strike Technology program, which evolved into the Joint Strike Fighter. The new aircraft would have differed from the land-based F-117 in several ways, including the addition "of elevators, a bubble canopy, a less sharply swept wing and reconfigured tail". The "N" variant would also be re-engined to use General Electric F414 turbofans instead of the older General Electric F404s. The aircraft would be optionally fitted with hardpoints, allowing for an additional of payload, and a new ground-attack radar with air-to-air capability. In that role the F-117N could carry AIM-120 AMRAAM air-to-air missiles.
After being rebuffed by the Navy, Lockheed submitted an updated proposal that added afterburning capability and placed greater emphasis on the F-117N as a multi-mission aircraft rather than a pure attack aircraft. To boost interest, Lockheed also proposed an "F-117B" land-based variant that shared most of the F-117N's capabilities; this variant was proposed to the USAF and the RAF. Two RAF pilots had formally evaluated the aircraft in 1986 as a reward for British help with the American bombing of Libya that year, RAF exchange officers began flying the F-117 in 1987, and the British declined an offer during the Reagan administration to purchase the aircraft. This renewed F-117N proposal was also known as the "A/F-117X". Neither the F-117N nor the F-117B was ordered.
United States Air Force
The aircraft's official name is "Night Hawk", but the alternative form "Nighthawk" is frequently used.
As it prioritized stealth over aerodynamics, it earned the nickname "Wobblin' Goblin" due to its alleged instability at low speeds. However, F-117 pilots have stated the nickname is undeserved. "Wobblin' (or Wobbly) Goblin" is likely a holdover from the early "Have Blue" / "Senior Trend" (FSD) days of the project when instability was a problem. In the USAF, "Goblin" (without wobbly) persists as a nickname because of the aircraft's appearance. During Operation Desert Storm, Saudis dubbed the aircraft "Shaba", which is Arabic for "Ghost". Some pilots also called the airplane the "Stinkbug".
The Omaha Nighthawks professional American football team used the F-117 Nighthawk as its logo. The experimental Remora F-117X was featured in the 1996 film "Executive Decision".
Hudson's Bay Company
The Hudson's Bay Company (HBC; "CBH") is a Canadian retail business group. A fur trading business for much of its existence, HBC now owns and operates retail stores in Canada and the United States. The company sold most of its European operations by August 2019 and its remaining stores, in the Netherlands, were closed by the end of 2019. HBC owns the Saks Fifth Avenue and Saks Fifth Avenue Off 5th stores in the United States; most other American operations were sold by mid-2019 and the last remaining stores (Lord & Taylor chain) were sold prior to the end of 2019.
The company's namesake business division is Hudson's Bay, commonly referred to as The Bay ( in French).
After incorporation by English royal charter in 1670, the company functioned as the "de facto" government in parts of North America for nearly 200 years, until the HBC sold the land it owned (the entire Hudson Bay drainage basin, known as Rupert's Land) to Canada in 1869 as part of the Deed of Surrender, authorized by the Rupert's Land Act 1868. At its peak, the company controlled the fur trade throughout much of English- and later British-controlled North America. By the mid-19th century, the company had evolved into a mercantile business selling a wide variety of products, from furs to fine homeware, in a small number of sales shops (as opposed to trading posts) across Canada. These shops were the first step towards the department stores the company owns today.
In 2008, HBC was acquired by NRDC Equity Partners, which also owned the upmarket American department store Lord & Taylor. From 2008 to 2012, the HBC was run through a holding company of NRDC, Hudson's Bay Trading Company, which was dissolved in early 2012. HBC's head office is currently located in Brampton, Ontario. Until March 2020 the company was listed on the Toronto Stock Exchange under the symbol "HBC.TO".
In the 17th century the French had a de facto monopoly on the Canadian fur trade with their colony of New France. Two French traders, Pierre-Esprit Radisson and Médard des Groseilliers (Médard de Chouart, Sieur des Groseilliers), Radisson's brother-in-law, learned from the Cree that the best fur country lay north and west of Lake Superior, and that there was a "frozen sea" still further north. Assuming this was Hudson Bay, they sought French backing for a plan to set up a trading post on the Bay, to reduce the cost of moving furs overland. According to Peter C. Newman, "concerned that exploration of the Hudson Bay route might shift the focus of the fur trade away from the St. Lawrence River, the French governor", Marquis d'Argenson (in office 1658–61), "refused to grant the coureurs de bois permission to scout the distant territory". Despite this refusal, in 1659 Radisson and Groseilliers set out for the upper Great Lakes basin. A year later they returned with premium furs, evidence of the potential of the Hudson Bay region. Subsequently, they were arrested for trading without a licence and fined, and their furs were confiscated by the government.
Determined to establish trade in the Hudson Bay, Radisson and Groseilliers approached a group of English colonial businessmen in Boston, Massachusetts to help finance their explorations. The Bostonians agreed on the plan's merits but their speculative voyage in 1663 failed when their ship ran into pack ice in Hudson Strait. Boston-based English commissioner Colonel George Cartwright learned of the expedition and brought the two to England to raise financing. Radisson and Groseilliers arrived in London in 1665 at the height of the Great Plague. Eventually, the two met and gained the sponsorship of Prince Rupert. Prince Rupert also introduced the two to his cousin, King Charles II. In 1668 the English expedition acquired two ships, the "Nonsuch" and the "Eaglet", to explore possible trade into Hudson Bay. Groseilliers sailed on the "Nonsuch", commanded by Captain Zachariah Gillam, while the "Eaglet" was commanded by Captain William Stannard and accompanied by Radisson. On 5 June 1668, both ships left port at Deptford, England, but the "Eaglet" was forced to turn back off the coast of Ireland.
The "Nonsuch" continued to James Bay, the southern portion of Hudson Bay, where its explorers founded, in 1668, the first fort on Hudson Bay, Charles Fort at the mouth of the Rupert River. (It was later known as Rupert House, and developed as the community of present-day Waskaganish, Quebec.) Both the fort and the river were named after the sponsor of the expedition, Prince Rupert of the Rhine, one of the major investors and soon to be the new company's first governor. After a successful trading expedition over the winter of 1668–69, "Nonsuch" returned to England on 9 October 1669 with the first cargo of fur resulting from trade in Hudson Bay. The bulk of the fur – worth £1,233 – was sold to Thomas Glover, one of London's most prominent furriers. This and subsequent purchases by Glover made it clear the fur trade in Hudson Bay was viable.
The Governor and Company of Adventurers of England Trading into Hudson's Bay was incorporated on 2 May 1670, with a royal charter from King Charles II. The charter granted the company a monopoly over the region drained by all rivers and streams flowing into Hudson Bay in northern Canada. The area was named "Rupert's Land" after Prince Rupert, the first governor of the company appointed by the King. This drainage basin of Hudson Bay constitutes , comprising over one-third of the area of modern-day Canada and stretches into the present-day north-central United States. The specific boundaries were unknown at the time. Rupert's Land would eventually become Canada's largest land "purchase" in the 19th century.
The HBC established six posts between 1668 and 1717. Rupert House (1668, southeast), Moose Factory (1673, south) and Fort Albany, Ontario (1679, west) were erected on James Bay; three other posts were established on the western shore of Hudson Bay proper: York Factory (1684), Fort Severn (1689) and Fort Churchill (1717). Inland posts were not built until 1774. After 1774, York Factory became the main post because of its convenient access to the vast interior waterway systems of the Saskatchewan and Red rivers. Called "factories" (because the factor, i.e. a person acting as a mercantile agent, did business from there), these posts operated in the manner of the Dutch fur trading operations in New Netherland. By adopting the Standard of Trade in the 18th century, the HBC ensured consistent pricing throughout Rupert's Land. A means of exchange arose based on the "Made Beaver" (MB), a prime pelt, worn for a year and ready for processing: "the prices of all trade goods were set in values of Made Beaver (MB) with other animal pelts, such as squirrel, otter and moose quoted in their MB (made beaver) equivalents. For example, two otter pelts might equal 1 MB".
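The Made Beaver standard described above amounts to a simple unit-of-account conversion, which can be sketched as follows. The otter rate (two otter pelts to one MB) comes from the text; the squirrel and moose rates are hypothetical placeholders, not historical values.

```python
# Illustrative sketch of the Made Beaver (MB) unit of account.
# Only the otter rate is sourced; the others are hypothetical.
MB_PER_PELT = {
    "beaver": 1.0,    # one prime beaver pelt defines 1 MB
    "otter": 0.5,     # "two otter pelts might equal 1 MB"
    "squirrel": 0.1,  # hypothetical placeholder
    "moose": 2.0,     # hypothetical placeholder
}

def pelts_to_mb(pelts):
    """Total value in Made Beaver for a mapping of pelt type -> count."""
    return sum(MB_PER_PELT[kind] * count for kind, count in pelts.items())

# Four otter pelts and three beaver pelts: 4*0.5 + 3*1.0 = 5.0 MB
print(pelts_to_mb({"otter": 4, "beaver": 3}))  # 5.0
```

A trader could then price goods in MB once and quote every pelt type against that single standard, which is exactly the consistency the Standard of Trade was meant to enforce across posts.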
During the fall and winter, First Nations men and European trappers accomplished the vast majority of the animal trapping and pelt preparation. They travelled by canoe and on foot to the forts to sell their pelts. In exchange they typically received popular trade goods such as knives, kettles, beads, needles, and the Hudson's Bay point blanket. The arrival of the First Nations trappers was one of the high points of the year, met with pomp and circumstance. Its highlight was a formal, almost ritualized "Trading Ceremony" between the Chief Trader and the captain of the aboriginal contingent, who traded on their behalf. During the initial years of the fur trade, prices for items varied from post to post.
The early coastal factory model of the English contrasted with the system of the French. They established an extensive system of inland posts at native villages, and sent traders to live among the tribes of the region, learning their languages and often forming alliances through marriages with indigenous women. In March 1686, the French sent a raiding party under the Chevalier des Troyes more than to capture the HBC posts along James Bay. The French appointed Pierre Le Moyne d'Iberville, who had shown great heroism during the raids, as commander of the company's captured posts. In 1687 an English attempt to resettle Fort Albany failed due to strategic deceptions by d'Iberville. After 1688 England and France were officially at war, and the conflict played out in North America as well. D'Iberville raided Fort Severn in 1690 but did not attempt to raid the well-defended local headquarters at York Factory. In 1693 the HBC recovered Fort Albany; d'Iberville captured York Factory in 1694, but the company recovered it the next year.
In 1697, d'Iberville again commanded a French naval raid on York Factory. On the way to the fort, he defeated three ships of the Royal Navy in the Battle of Hudson's Bay (5 September 1697), the largest naval battle in the history of the North American Arctic. D'Iberville's depleted French force captured York Factory by laying siege to the fort and pretending to be a much larger army. The French retained all of the outposts except Fort Albany until 1713. (A small French and Indian force attacked Fort Albany again in 1709 during Queen Anne's War but was unsuccessful.) The economic consequences of the French possession of these posts were significant for the company; the HBC did not pay any dividends for more than 20 years. See Anglo-French conflicts on Hudson Bay.
The Nine Years' War ended in 1697, and the War of the Spanish Succession ended in 1713 with the signing of the Treaty of Utrecht, under which France made substantial concessions. Among its many provisions, the treaty required France to relinquish to Great Britain all claims to Hudson Bay, which again became a British possession. (The Kingdom of Great Britain had been established following the union of Scotland and England in 1707.)
After the treaty, the HBC built Prince of Wales Fort, a stone star fort at the mouth of the nearby Churchill River. In 1782, during the American Revolutionary War, a French squadron under Jean-François de Galaup, comte de Lapérouse captured and demolished York Factory and Prince of Wales Fort in support of the American rebels.
In its trade with native peoples, Hudson's Bay Company exchanged wool blankets, called Hudson's Bay point blankets, for the beaver pelts trapped by aboriginal hunters. By 1700, point blankets accounted for more than 60 per cent of the trade. The number of indigo stripes (a.k.a. points) woven into the blankets identified its finished size. A long-held misconception is that the number of stripes was related to its value in beaver pelts.
A parallel may be drawn between the HBC's control over Rupert's Land with the trade monopoly and government functions enjoyed by the Honourable East India Company over India during roughly the same period. The HBC invested £10,000 in the East India Company in 1732, which it viewed as a major competitor.
Hudson's Bay Company's first inland trading post was established by Samuel Hearne in 1774 with Cumberland House, Saskatchewan.
Conversely, a number of inland HBC "houses" pre-date the construction of Cumberland House in 1774. Henley House, established in 1743 inland from Hudson Bay at the confluence of the Albany and Kabinakagami Rivers, depended on Fort Albany for lines of communication and was not "finished" until 1768. Next, the inland houses at Split Lake and Nelson House were established between 1740 and 1760; these depended on York Factory and Churchill River, respectively. Although not inland, Richmond Fort was established in 1749 on an island within Hudson Bay. It was titled a "New Discovery" in 1749 and by 1750 was titled Richmond Gulf; the name was changed to Richmond Fort (abbreviated RF) from 1756 to 1759, and it served mainly as a storage location for trade goods and provisions. Additional inland posts were Capusco River and Chickney Creek, both circa 1750. Likewise, Brunswick, Gloucester, Hudson, Rupert, and Wapiscogami Houses were established in the 1770s. These post-date Cumberland House, yet speak to the HBC's expanding inland incursion in the last quarter of the 18th century. Minor posts of this period include the Mesackamy/Mesagami Lake, Sturgeon Lake, and Beaver Lake posts.
In 1779, other traders founded the North West Company (NWC) in Montreal as a seasonal partnership to provide more capital and to continue competing with the HBC. It became operative for the outfit of 1780 and was the first joint-stock company in Canada and possibly North America. The agreement lasted one year. A second agreement established in 1780 had a three-year term. The company became a permanent entity in 1783. By 1784, the NWC had begun to make serious inroads into the HBC's profits.
The North West Company was the HBC's main rival in the fur trade. The competition led to the small Pemmican War in 1816, and the Battle of Seven Oaks on 19 June 1816 was the climax of the long dispute. In 1821, the North West Company of Montreal and the Hudson's Bay Company were forcibly merged by intervention of the British government to put an end to the often-violent competition. The combined 175 posts, 68 of them the HBC's, were reduced to 52, since many were redundant as a result of the rivalry and inherently unprofitable. Their combined territory was extended by a licence to the North-Western Territory, which reached to the Arctic Ocean in the north and, with the creation of the Columbia Department in the Pacific Northwest, to the Pacific Ocean in the west. The NWC's regional headquarters at Fort George (Fort Astoria) was relocated to Fort Vancouver on the north bank of the Columbia River; it became the HBC base of operations on the Pacific Slope.
Before the merger, the employees of the HBC, unlike those of the North West Company, did not participate in its profits. After the merger, with all operations under the management of Sir George Simpson (1826–60), the company had a corps of commissioned officers: 25 chief factors and 28 chief traders, who shared in the company's profits during the monopoly years. Its trade covered , and it had 1,500 contract employees.
The company also operated a store in what were then known as the Sandwich Islands (Hawai'i), engaging in merchant shipping to the islands between 1828 and 1859.
The career progression for officers, together referred to as the Commissioned Gentlemen, was to enter the company as a fur trader. Typically, they were men who had the capital to invest in starting up their trading. They sought to be promoted to the rank of Chief Trader. A Chief Trader would be in charge of an individual post and was entitled to one share of the company's profits. Chief Factors sat in council with the Governors and were the heads of districts. They were entitled to two shares of the company's profits or losses. The average income of a Chief Trader was £360 and that of a Chief Factor was £720.
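The share arithmetic above can be checked with a small sketch. The officer counts (25 chief factors, 28 chief traders) and average incomes (£360 and £720) come from the text; the total profit pool is an inference from those figures, not a sourced value.

```python
# Sketch of the commissioned officers' profit-sharing under Simpson:
# a chief trader held one share, a chief factor two.
CHIEF_FACTORS, CHIEF_TRADERS = 25, 28
TOTAL_SHARES = CHIEF_FACTORS * 2 + CHIEF_TRADERS * 1  # 78 shares

def payout(profit_pool, shares_held):
    """An officer's cut of the profit pool for a given share count."""
    return profit_pool * shares_held / TOTAL_SHARES

# With average incomes of £360 (1 share) and £720 (2 shares), the
# implied officers' pool is 360 * 78 = £28,080 (an inference only).
pool = 360 * TOTAL_SHARES
print(payout(pool, 1))  # 360.0 -- a chief trader's average income
print(payout(pool, 2))  # 720.0 -- a chief factor's average income
```

The 2:1 ratio between the quoted average incomes is consistent with factors holding exactly twice the shares of traders.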
Although the HBC maintained a monopoly on the fur trade during the early to mid-19th century, there was competition from James Sinclair and Andrew McDermot (Dermott), independent traders in the Red River Colony. They shipped furs by the Red River Trails to Norman Kittson a buyer in the United States. In addition, Americans controlled the Maritime fur trade on the Northwest Coast until the 1830s.
Throughout the 1820s and 1830s, the HBC controlled nearly all trading operations in the Pacific Northwest, based at the company headquarters at Fort Vancouver on the Columbia River. Although claims to the region were by agreement in abeyance, commercial operating rights were nominally shared by the United States and Britain through the Anglo-American Convention of 1818, but company policy, enforced via Chief Factor John McLoughlin of the company's Columbia District, was to discourage U.S. settlement of the territory. The company's effective monopoly on trade virtually forbade any settlement in the region. It established Fort Boise in 1834 (in present-day southwestern Idaho) to compete with the American Fort Hall, to the east. In 1837, it purchased Fort Hall, also along the route of the Oregon Trail. The outpost director displayed the abandoned wagons of discouraged settlers to those seeking to move west along the trail.
The company's stranglehold on the region was broken by the first successful large wagon train to reach Oregon in 1843, led by Marcus Whitman. In the years that followed, thousands of emigrants poured into the Willamette Valley of Oregon. In 1846, the United States acquired full authority south of the 49th parallel; the most settled areas of the Oregon Country were south of the Columbia River in what is now Oregon. McLoughlin, who had once turned away would-be settlers as company director, then welcomed them from his general store at Oregon City. He was later proclaimed the "Father of Oregon". The company retains no presence today in the portion of the Pacific Northwest governed by the United States.
During the 1820s and 1830s, HBC trappers were deeply involved in the early exploration and development of Northern California. Company trapping brigades were sent south from Fort Vancouver, along what became known as the Siskiyou Trail, into Northern California as far south as the San Francisco Bay Area, where the company operated a trading post at Yerba Buena (San Francisco). These trapping brigades in Northern California faced serious risks, and were often the first to explore relatively uncharted territory. Brigade leaders included Peter Skene Ogden and Samuel Black.
Between 1820 and 1870, the HBC issued its own paper money. The notes, denominated in pounds sterling, were printed in London and issued at the York Factory, Fort Garry and the Red River Colony. For forty or so years beginning in 1870, the company employed paddle wheel steamships on the rivers of the prairies.
The Guillaume Sayer Trial in 1849 contributed to the end of the HBC monopoly. Sayer, a Métis trapper and trader, was accused of illegal trading in furs. The Court of Assiniboia brought Sayer to trial, before a jury of HBC officials and supporters. During the trial, a crowd of armed Métis men led by Louis Riel, Sr. gathered outside the courtroom. Although Sayer was found guilty of illegal trade, having evaded the HBC monopoly, Judge Adam Thom did not levy a fine or punishment. Some accounts attributed that to the intimidating armed crowd gathered outside the courthouse. With the cry, "Le commerce est libre! Le commerce est libre!" ("Trade is free! Trade is free!"), the Métis loosened the HBC's previous control of the courts, which had enforced their monopoly on the settlers of Red River.
Another factor was the findings of the Palliser Expedition of 1857 to 1860, led by Captain John Palliser. He surveyed the area of the prairies and wilderness from Lake Superior to the southern passes of the Rocky Mountains. Although he recommended against settlement of the region, the report sparked a debate. It ended the myth publicized by Hudson's Bay Company: that the Canadian West was unfit for agricultural settlement.
In 1863, the International Financial Society bought a controlling interest in the HBC, signalling a shift in the company's outlook: most of the new shareholders were less interested in the fur trade than in real estate speculation and economic development in the West. The Society floated £2 million in public shares, leveraging as collateral the non-ceded land held ostensibly by the Hudson's Bay Company. These funds gave the Society the means both to weather the financial collapse of 1866, which destroyed many competitors, and to invest in railways in North America.
In 1869, after rejecting the American government offer of , the company approved the return of Rupert's Land to Britain. The government gave it to Canada and loaned the new country the £300,000 required to compensate HBC for its losses. HBC also received one-twentieth of the fertile areas to be opened for settlement and retained title to the lands on which it had built trading establishments. The deal, known as The Deed of Surrender, came into force the following year. The resulting territory, now known as the Northwest Territories, was brought under Canadian jurisdiction under the terms of the Rupert's Land Act 1868, enacted by the Parliament of the United Kingdom. The Deed enabled the admission of the fifth province, Manitoba, to the Confederation on 15 July 1870, the same day that the deed itself came into force.
During the 19th century the Hudson's Bay Company went through great changes in response to such factors as growth of population and new settlements in part of its territory, and ongoing pressure from Britain. It seemed unlikely that it would continue to control the future of the West.
The iconic department store today evolved from trading posts at the start of the 19th century, when they began to see demand for general merchandise grow rapidly. HBC soon expanded into the interior and set up posts along river settlements that later developed into the modern cities of Winnipeg, Calgary and Edmonton. In 1857, the first sales shop was established in Fort Langley. This was followed by other sales shops in Fort Victoria (1859), Winnipeg (1881), Calgary (1884), Vancouver (1887), Vernon (1887), Edmonton (1890), Yorkton (1898), and Nelson (1902). The first of the grand "original six" department stores was built in Calgary in 1913. The other department stores that followed were in Edmonton, Vancouver, Victoria, Saskatoon, and Winnipeg.
The First World War interrupted a major remodelling and restoration of retail trade shops planned in 1912. Following the war, the company revitalized its fur-trade and real-estate activities, and diversified its operations by venturing into the oil business.
Today, the department store business is the only remaining part of the company's operations, in the form of department stores under the Hudson's Bay brand.
The company co-founded Hudson's Bay Oil and Gas Company (HBOG) in 1926 with Marland Oil Company (which merged with Conoco in 1929). HBOG expanded during the 1940s and 1950s, and in 1960 began shipping Canadian crude through a new link to the Glacier pipeline and on to the refinery in Billings, Montana. The company became the sixth-largest Canadian oil producer in 1967. In 1973, HBOG acquired a 35 per cent stake in Siebens Oil and Gas, and, in 1979, it divested that interest. In 1980, it bought a controlling interest in Roxy Petroleum. In the 1980s, sales and oil prices slipped, while debt from acquisitions piled up which led to Hudson's Bay Company selling its 52.9 per cent stake in HBOG to Dome Petroleum in 1981.
During his 1927 Arctic trip with A. Y. Jackson, insulin co-discoverer Frederick Banting realized that crew or passengers on board the HBC paddle wheeler SS "Distributor" were responsible for spreading the influenza virus down the Slave River and Mackenzie River, a virus that had over the summer and autumn spread territory-wide, devastating the aboriginal population of the north. Returning from the trip, Banting gave an interview in Montreal to a "Toronto Star" reporter on the understanding that his statements on the HBC would remain off the record. The conversation was nonetheless published in the "Toronto Star" and rapidly reached a wide audience across Europe and Australia. Banting was angry at the leak, having promised the Department of the Interior not to make any statements to the press without first clearing them.
The article noted that Banting had given the journalist C. R. Greenaway repeated instances of how the fox fur trade always favoured the company: "For over $100,000 of fox skins, he estimated that the Eskimos had not received $5,000 worth of goods." He linked this treatment to poor health, consistent with reports made in previous years by RCMP officers, suggesting that the result was a diet of "flour, biscuits, tea and tobacco", with the skins that once were used for clothing traded merely for "cheap whiteman's goods".
The HBC fur trade commissioner called Banting's remarks "false and slanderous", and a month later, the governor and general manager met Banting at the King Edward Hotel to demand a retraction. Banting stated that the reporter had betrayed his confidence, but did not retract his statement and reaffirmed that HBC was responsible for the death of indigenous residents by supplying the wrong kind of food and introducing diseases into the Arctic. As A. Y. Jackson notes in his memoir, since neither the governor nor the general manager had been to the Arctic, the meeting ended with them asking Banting's advice on what HBC ought to do: "He gave them some good advice and later he received a card at Christmas with the Governor's best wishes."
Banting maintained this position in his report to the Department of the Interior. He noted that "infant mortality was high because of the undernourishment of the mother before birth"; that "white man's food leads to decay of native teeth"; that "tuberculosis has commenced. Saw several cases at Godhavn, Etah, Port Burwell, Arctic Bay"; that "an epidemic resembling influenza killed a considerable proportion of population at Port Burwell"; and that "the gravest danger faces the Eskimo in his transfer from a race-long hunter to a dependent trapper. White flour, sea-biscuits, tea and tobacco do not provide sufficient fuel to warm and nourish him". Furthermore, he discouraged the establishment of an Arctic hospital: the "proposed hospital at Pangnirtung would be a waste of money, as it could be reached by only a few natives". Banting's report contrasted starkly with the bland descriptions provided by the ship's physician, F. H. Stringer.
In 1960, the company acquired Morgan's allowing it to expand into Montreal, Toronto, Hamilton, and Ottawa. In 1965, HBC rebranded its department stores as The Bay. The Morgan's logo was changed to match the new visual identity. By 1972 the last of the former Morgan's stores had been rebranded to Bay stores.
In 1970, on the company's 300th anniversary, as a result of punishing new British tax laws, the company relocated to Canada and was rechartered as a Canadian business corporation under Canadian law; head office functions were transferred from London to Winnipeg. By 1974, as the company expanded into eastern Canada, head office functions were moved to Toronto.
In 1972, the company acquired the four-store Shop-Rite chain of catalogue stores. The chain was quickly expanded to 65 stores in Ontario, but closed in 1982 due to declining sales. In these stores, little merchandise was displayed; customers made their selections from catalogues, and staff would retrieve the merchandise from storerooms. The HBC also acquired Freimans department stores in Ottawa and converted them to The Bay.
In 1978, the Zellers discount store chain made a bid to acquire the HBC, but the HBC turned the tables and acquired Zellers. Also in 1978, the Simpsons department stores were acquired by Hudson's Bay Company; the related chain Simpsons-Sears was not acquired and instead became Sears Canada in 1978. The Simpsons banner disappeared in 1991, when the last Simpsons store was converted to the Bay.
In 1979, Canadian billionaire Kenneth Thomson won control of the company in a battle with George Weston Limited, and acquired a 75 per cent stake for $400 million. Thomson sold the company's oil and gas business, financial services, distillery, and other interests for approximately $550 million, transforming the company into a leaner, more focused operation. In 1997, the Thomson family sold the last of its remaining shares.
Hudson's Bay Company reversed a formidable debt problem in 1987, by shedding non-strategic assets such as its wholesale division and getting completely out of the oil and gas business. HBC also sold its Canadian fur-auction business to Hudson's Bay Fur Sales Canada (now North American Fur Auctions). The Northern Stores Division was sold that same year to a group of investors and employees, which adopted The North West Company name three years later.
The HBC acquired Towers Department Stores in 1990, combining them with the Zellers chain, and Woodward's stores in 1993, converting them into Bay or Zellers stores. Kmart Canada was acquired in 1998 and merged with Zellers.
In 1991, the Bay agreed to stop retailing fur in response to complaints from people opposed to killing animals for this purpose. In 1997, the Bay reopened its fur salons to meet the demand of consumers.
In December 2003, Maple Leaf Heritage Investments, a Nova Scotia-based company created to acquire shares of Hudson's Bay Company, announced that it was considering making an offer to acquire all or some of the common shares of Hudson's Bay Company. Maple Leaf Heritage Investments is a subsidiary of B-Bay Inc. Its CEO and chairman is American businesswoman Anita Zucker, widow of Jerry Zucker. Zucker had previously been the head of the Polymer Group, which acquired another Canadian institution, Dominion Textile.
On 26 January 2006, the HBC's board unanimously agreed to a bid of /share from Jerry Zucker, whose original bid was /share, ending a prolonged fight between the HBC and Zucker. The South Carolina billionaire financier was a longtime HBC minority shareholder. In a 9 March 2006 press release, the HBC announced that Zucker would replace Yves Fortier as governor and George Heller as CEO, becoming the first US citizen to lead the company. After Jerry Zucker's death, the board named his widow, Anita Zucker, as HBC Governor and HBC Deputy-Governor Rob Johnston as CEO.
On 16 July 2008, the company was sold to NRDC Equity Partners, a private equity firm based in Purchase, New York, which already owned Lord & Taylor, the oldest luxury department store chain in the United States. The Canadian and U.S. holdings were transferred to NRDC Equity Partners' holding company, Hudson's Bay Trading Company, as of late 2008.
In June 2019, a consortium including chairman Richard Baker, Rhône Group, WeWork, Hanover Investments (Luxembourg) and Abrams Capital Management announced that it wanted to take the company private. The group then owned just over 50 per cent of HBC shares. In mid-August, the consortium said that it owned 57 per cent of the HBC shares. By 19 August 2019, however, Canadian investment firm Catalyst Capital Group Inc. said it had acquired enough shares to block the plan. A U.S. company, Land & Buildings Investment Management, the owner of over 6 per cent of the shares, had also criticized the Baker plan.
In September 2011, the HBC began downsizing the Zellers chain with the announcement that it would sell the majority of the leases for its locations to the U.S.-based retailer Target Corporation and close all of their remaining locations by early 2013. Target used the acquisition of this real estate as a means to enable its entry in the Canadian market. HBC used the proceeds to allow it to pay down debt and to invest in growing its Hudson's Bay and Lord & Taylor banners. In January 2013, it was confirmed that only three of the remaining Zellers locations would remain open.
By September 2019, the Toronto and Ottawa Zellers locations were still operating but the company announced that they would both be closed in January 2020.
On 24 January 2012, the "Financial Post" reported that Richard Baker (owner of NRDC and governor of Hudson's Bay Company) had dissolved Hudson's Bay Trading Company and that the HBC would now also operate the Lord & Taylor chain. At the time, the company was run by president Bonnie Brooks. Baker remained governor and CEO of the business, and Donald Watros stayed on as chief operating officer.
In 2018, HBC sold the building that housed its flagship Lord & Taylor store on Fifth Avenue in Manhattan to WeWork Property Advisors after pressure from Land & Buildings Investment Management. The deal also included the use of floors of certain HBC-owned department stores in New York, Toronto, Vancouver and Germany as WeWork's shared office workspaces.
In August 2019, HBC announced that it would sell their Lord & Taylor business to Le Tote Inc., which was to pay in cash when the deal closes (probably before year end 2019) and an additional two years later. HBC was to get a 25 per cent equity stake in Le Tote. The buyer would retain the stores' inventory, with an estimated value of . The deal, expected to close before year end, required HBC to pay the stores' rent for at least three years, leading one news report to describe it as "Not a clean exit". The liability to HBC for the rents was estimated at cash per year.
In October 2012, the HBC announced a $1.6 billion initial public offering (IPO); Baker planned to use the IPO to allow Canadian ownership to return to the company, and to help pay off debts with other partners. Additionally, the company also announced that it would re-brand The Bay department store chain as "Hudson's Bay".
From 2004 to 2008, the HBC owned and operated a small chain of off-price stores called Designer Depot. Similar to the Winners and HomeSense retail format, Designer Depot did not meet sales expectations, and its nine stores were sold. Another HBC chain, Fields, was sold to a private firm in 2012. Established in 1950, Fields was acquired by Zellers in 1976.
When Zellers was acquired by HBC in 1978, Fields became part of the HBC portfolio. Zellers was still owned by HBC but had been reduced to a chain of two liquidation stores following the sale of its lease portfolio to Target Canada in 2011. The Target Canada chain folded in 2015; the leases were subsequently returned to landlords or re-sold to other retailers.
In early 2019, HBC announced that it would close all 37 of the Home Outfitters stores by year end.
The new Hudson's Bay brand was launched in March 2013, incorporating a new logo with an updated rendition of the classic Hudson's Bay Company coat of arms, designed to be modern and better reflect the company's heritage. Following the IPO, HBC had also introduced a new corporate logo of its own (reviving a wordmark from the original HBC flag), but the new logo was not intended to be a consumer-facing brand.
On 29 July 2013, Hudson's Bay Company announced that it would buy Saks, Inc., operator of the U.S. Saks Fifth Avenue brand, for US$2.9 billion, or $16 per share. The merger was completed on 3 November 2013. The company also stated that as a result of the purchase, Canadian consumers would see Saks stores arriving in their country soon. After the purchase was finalized, HBC had a net loss of $124.2 million in the third quarter of 2013 due to the cost of the purchase and promotions.
In late February 2019, HBC announced that it would close 20 of the 133 Saks stores and that all of the remaining locations would be "subject to review".
In January 2016, HBC announced it would expand deeper into digital space with the acquisition of an online flash sales site, the Gilt Groupe, for US$250 million. In June 2018, HBC announced it would sell Gilt Groupe to online fashion store Rue La La for an undisclosed sum.
In early 2017, the Hudson's Bay Company made an overture to Macy's for a potential takeover of the struggling department store. Later, HBC also considered a purchase of the struggling Neiman Marcus Group Inc. It did not proceed with either deal.
As of November 2017, the company also had retail operations in Europe, including 20 Hudson's Bay stores in the Netherlands and five Saks Off Fifth stores in Germany, as well as the 135 stores of the Galeria Kaufhof department store chain in Germany. HBC had announced its expansion into the Netherlands in May 2016 with the takeover of 20 former Vroom & Dreesmann (V&D) sites by 2017. V&D, a historic Dutch department store chain, had gone bankrupt and shut down in early 2016.
HBC had acquired the German department store chain Galeria Kaufhof and its Belgian subsidiary from Metro Group in September 2015 for .
On 1 November 2017, HBC received an unsolicited offer from Austrian firm Signa Holding for Kaufhof and other real estate. An unnamed source told CNBC that the value of the offer was approximately 3 billion euros. This information on the offer was also reiterated in a press release by activist shareholder Land & Buildings Investment Management, which urged HBC to accept the offer; the company replied that the offer was incomplete and did not provide indication of financing for the deal. In late 2018, Galeria Kaufhof and Karstadt merged as part of a spin-off.
HBC announced its intent to sell the last 49.99 percent of Galeria Kaufhof shares it held to Austrian firm Signa Holding in June 2019. The sale of the real estate in Germany had gained US$1.5 billion (€1 billion) for HBC. At that time, HBC still had a retail operation in the Netherlands, using the Vroom & Dreesmann locations it had purchased in 2017. On 31 August 2019, the company announced that all 15 of those stores would close by year end, the final chapter of HBC's "ill-fated European venture", according to Bloomberg News.
On 1 April 2018, HBC disclosed that more than five million credit and debit cards used for in-store purchases had been recently breached by hackers. The compromised credit card transactions took place at Saks Fifth Avenue, Saks Off 5th, and Lord & Taylor stores. The hack had been discovered by Gemini Advisory, which called the breach "amongst the biggest and most damaging to ever hit retail companies".
A July 2019 hack of Capital One, which provides HBC Mastercards, did not impact the HBC credit cards or card applications, according to HBC.
By early September 2019, it was clear that HBC was downsizing its operations, with the planned sale of Lord & Taylor the most recent step. A feature article by Bloomberg News mentioned that CEO Helena Foulkes, recruited in 2018, "had helped to turn around Hudson’s Bay". She was closing stores and selling assets "to put the company on more solid financial footing" and could then "focus on the two remaining 'crown jewels' in her portfolio: Saks Fifth Avenue and the Bay". On the other hand, Bloomberg suggested that millennial shoppers prefer to make purchases online, or direct from various brands' own stores, and that HBC "has yet to offer something they can't find somewhere else and risks drifting into irrelevance".
In February 2020, shareholders of the company voted in favour of a plan to become a private company at a special meeting of shareholders. Under the plan of arrangement, the company will be owned by a group of continuing shareholders led by HBC Governor and Executive Chairman Richard Baker. Effective March 3, 2020, the company was delisted from the Toronto Stock Exchange, with Richard A. Baker replacing Foulkes as CEO.
On May 15, 2020, the Hudson's Bay Company announced that it would shut its 168,000 square foot store in downtown Edmonton in late 2020. The store was to reopen temporarily on May 19, along with the company's other Alberta stores, but would gradually close. Employees would be permitted to transfer to the five remaining Edmonton-area stores.
The HBC has diversified into joint ventures and other lines of business. The HBC has credit card, mortgage, and personal insurance branches; these products and services are joint partnerships with other corporations. The HBC also has HBC Rewards corporate partners such as Imperial Oil/Esso, M&M Meat Shops, Chapters/Indigo Books, Kelsey's/Montana's Restaurants, Thrifty Car Rental, and Cineplex Entertainment Theatres. HBC Rewards points can be redeemed in house or converted into corporate partners' gift cards and certificates. Points can also be converted to Air Miles.
The HBC is involved in community and charity activities. The HBC Rewards Community Program raises funds for community causes. The HBC Foundation is a charity agency involved in social issues and service. The HBC used to sponsor the annual HBC Run for Canada, a series of public-participation runs and walks held across the country on Canada Day to raise funds for Canadian athletes. The company discontinued this event in 2009.
The HBC was the official outfitter of clothing for members of the Canadian Olympic team in 1936, 1960, 1964, 1968, 2006, 2008, 2010, 2012, 2014 and 2016. The sponsorship has been renewed through 2020. Since the late 2000s, HBC has used its status as the official Canadian Olympics team outfitter to gain global exposure, as part of a turnaround plan that included shedding under-performing brands and luring new high-end brands.
On 2 March 2005, the company was announced as the new clothing outfitter for the Canadian Olympic team, in a $100 million deal, providing apparel for the 2006, 2008, 2010, and 2012 Games, having outbid the existing Canadian Olympic wear-supplier, Roots Canada, which had supplied Canada's Olympic teams from 1998 to 2004. The Canadian Olympic collection is sold through Hudson's Bay (and Zellers until 2013 when the Zellers leases were sold to Target Canada).
HBC's 2006 Winter Olympics and 2008 Summer Olympics uniforms and toques received a mixed reception for their multicoloured stripes (green, red, yellow, blue), which seemed to be not-so-subtle advertising for HBC rather than representing the Canadian Olympic team's traditional colours of red and white (with black as a secondary), in contrast to Roots' well-received 1998 collection with its trendy red letter jackets and Poor Boy caps. HBC was also criticized for producing 80 per cent to 90 per cent of its Olympic clothes in China, whereas Roots had ensured that the Olympic clothes were made in Canada using Canadian material.
HBC's apparel for the 2010 Winter Olympics held in Vancouver proved to be extremely successful, in part because Canada was the host country and their athletes had a record medal haul. The "Red Mittens" (red-and-white mittens featuring a large maple leaf) that were sold for , with one-third of the proceeds going to the Canadian Olympic Committee, proved very popular, as were the "Canada" hoodies.
The HBC's 2010 Winter Olympics apparel was also controversial due to a knitted, machine-made sweater that looked like a Cowichan sweater. After a meeting between HBC representatives and Cowichan Tribes, a compromise was made between the parties; knitters would have an opportunity to sell their sweaters at the downtown Vancouver HBC store, alongside the HBC imitations.
Lord Sebastian Coe, chairman of the 2012 London Olympic Games Organizing Committee, who attended the Vancouver Olympics, noted that the Canadians were passionate in embracing the Games with their "Canada" hoodies and their red mittens (of which 2.6 million pairs sold that year). HBC has continued to produce these red mittens for subsequent Olympic Games.
The legacy of the HBC has been maintained in part by the detailed record-keeping and archiving of material by the company. Before 1974, the records of the HBC were kept in the London office headquarters. The HBC opened an archives department to researchers in 1931. In 1974, Hudson's Bay Company Archives (HBCA) were transferred from London and placed on deposit with the Manitoba archives in Winnipeg. The company granted public access to the collection the following year.
On 27 January 1994, the company's archives were formally donated to the Archives of Manitoba.
At the time of the donation, the appraised value of the records was nearly $60 million. A foundation, Hudson's Bay Company History Foundation funded through the tax savings resulting from the donation, was established to support the operations of the HBC Archive as a division of the Archives of Manitoba, along with other activities and programs. More than of filed documents and hundreds of microfilm reels are now stored in a special climate-controlled vault in the Manitoba Archives Building.
In 2007, Hudson's Bay Company Archives became part of the United Nations "Memory of the World Programme" project, under UNESCO. The records covered the HBC history from the founding of the company in 1670. The records contained business transactions, medical records, personal journals of officials, inventories, company reports, etc.
, the members of the board of directors of Hudson's Bay Company are:
In the 18th and 19th centuries, Hudson's Bay Company operated with a very rigid hierarchy when it came to its employees. This hierarchy essentially broke down into two levels: the officers and the servants. The officers comprised the factors, masters and chief traders, clerks and surgeons; the servants were the tradesmen, boatmen, and labourers. The officers essentially ran the fur trading posts. Their many duties included supervising the workers in their trade posts, valuing the furs, and keeping trade and post records. In 1821, when Hudson's Bay Company and the North West Company merged, the hierarchy became even stricter and the lines between officers and servants became virtually impossible to cross.

Officers in charge of individual trading posts had much responsibility because they were directly in charge of enforcing the policies made by the governor and committee (the board) of the company. One of these policies was the price of particular furs and trade goods. These prices were called the Official and Comparative Standards. Made-Beaver, the quality measurement of the pelt, was the means of exchange used by Hudson's Bay Company to define the Official and Comparative Standards. Because the governor was stationed in London, England, the company needed reliable officers managing the trade posts halfway around the world. Because the fur trade was a very dynamic market, HBC needed some flexibility when dealing with prices and traders. Price fluctuation was deferred to the officers in charge of the trade posts, and the head office recorded any difference between the company's standard and that set by the individual officers. Overplus, or any excess revenue gained by officers, was strictly documented to ensure that it was not being pocketed and taken from the company.
This strict yet flexible hierarchy exemplifies how Hudson's Bay Company was able to be so successful while still having its central management and trade posts located so far apart.
Chronological list of Governors of the Hudson's Bay Company:
Under the charter establishing Hudson's Bay Company, the company was required to give two elk skins and two black beaver pelts to the English king, then Charles II, or his heirs, whenever the monarch visited Rupert's Land. The exact text from the 1670 Charter reads:
The ceremony was first conducted with the Prince of Wales (the future Edward VIII) in 1927, then with King George VI in 1939, and last with his daughter, Queen Elizabeth II in 1959 and 1970. On the last such visit, the pelts were given in the form of two live beavers, which the Queen donated to the Winnipeg Zoo in Assiniboine Park. However, when the company permanently moved its headquarters to Canada, the Charter was amended to remove the rent obligation. Each of the four "rent ceremonies" took place in or around Winnipeg.
The HBC is the only European trading company to have survived and outlived all its rivals.
Hoplite
Hoplites were citizen-soldiers of Ancient Greek city-states who were primarily armed with spears and shields. Hoplite soldiers utilized the phalanx formation to be effective in war with fewer soldiers. The formation discouraged the soldiers from acting alone, for this would compromise the formation and minimize its strengths. The hoplites were primarily free citizens – propertied farmers and artisans – who were able to afford the bronze armour suit and weapons (estimated at a third to a half of a city-state's able-bodied adult male population). Most hoplites were not professional soldiers and often lacked sufficient military training. Some states maintained a small elite professional unit, known as the "epilektoi" ("chosen"), since they were picked from the regular citizen infantry. These existed at times in Athens, Argos, Thebes, and Syracuse, among others. Hoplite soldiers made up the bulk of ancient Greek armies.
In the 8th or 7th century BC, Greek armies adopted the phalanx formation. The formation proved successful in defeating the Persians when employed by the Athenians at the Battle of Marathon in 490 BC during the First Greco-Persian War. The Persian archers and light troops who fought in the Battle of Marathon failed because their bows were too weak for their arrows to penetrate the wall of Greek shields that comprised the phalanx formation. The phalanx was also employed by the Greeks at the Battle of Thermopylae in 480 BC and at the Battle of Plataea in 479 BC during the Second Greco-Persian War.
The word "hoplite" (Greek: "hoplitēs"; pl. "hoplitai") derives from "hoplon" (plural "hopla"). The term "hoplon" was originally believed to refer to the hoplite's shield, but research has found that "aspis" is instead the term for the large round shield. In the modern Hellenic Army, the word "hoplite" is used to refer to an infantryman.
The fragmented political structure of Ancient Greece, with many competing city-states, increased the frequency of conflict, but at the same time limited the scale of warfare. Limited manpower did not allow most Greek city-states to form large armies which could operate for long periods because they were generally not formed from professional soldiers. Most soldiers had careers as farmers or workers and returned to these professions after the campaign. All hoplites were expected to take part in any military campaign when called for duty by leaders of the state. The Lacedaemonian citizens of Sparta were renowned for their lifelong combat training and almost mythical military prowess, while their greatest adversaries, the Athenians, were exempted from service only after the age of 60. This inevitably reduced the potential duration of campaigns and often resulted in the campaign season being restricted to one summer.
Armies generally marched directly to their destination, and in some cases the battlefield was agreed to by the contestants in advance. Battles were fought on level ground, and hoplites preferred to fight with high terrain on both sides of the phalanx so the formation could not be flanked. An example of this was the Battle of Thermopylae, where the Spartans specifically chose a narrow coastal pass to make their stand against the massive Persian army. The vastly outnumbered Greeks held off the Persians for seven days.
When battles occurred, they were usually set piece and intended to be decisive. The battlefield would be flat and open to facilitate phalanx warfare. These battles were usually short and required a high degree of discipline. At least in the early classical period, when cavalry was present, its role was restricted to protection of the flanks of the phalanx, pursuit of a defeated enemy, and covering a retreat if required. Light infantry and missile troops took part in the battles but their role was less important. Before the opposing phalanxes engaged, the light troops would skirmish with the enemy's light forces, and then protect the flanks and rear of the phalanx.
The military structure created by the Spartans was a rectangular phalanx formation. The formation was organized from eight to ten rows deep and could cover a front of a quarter of a mile or more if sufficient hoplites were available. The two lines would close to a short distance to allow effective use of their spears, while the psiloi threw stones and javelins from behind their lines. The shields would clash and the first lines (protostates) would stab at their opponents, at the same time trying to keep in position. The ranks behind them would support them with their own spears and the mass of their shields gently pushing them, not to force them into the enemy formation but to keep them steady and in place. The soldiers in the back provided motivation to the ranks in the front, since most hoplites were close community members. At certain points, a command would be given to the phalanx or a part thereof to collectively take a certain number of steps forward (ranging from half to multiple steps). This was the famed "othismos".
At this point, the phalanx would put its collective weight to push back the enemy line and thus create fear and panic among its ranks. There could be multiple such instances of attempts to push, but it seems from the accounts of the ancients that these were perfectly orchestrated and attempted organized "en masse". Once one of the lines broke, the troops would generally flee from the field, sometimes chased by psiloi, peltasts, or light cavalry.
If a hoplite escaped, he would sometimes be forced to drop his cumbersome aspis, thereby disgracing himself to his friends and family (becoming a "ripsaspis", one who threw his shield). To lessen the number of casualties inflicted by the enemy during battles, soldiers were positioned to stand shoulder to shoulder with their shields. The most prominent citizens and generals led from the front. Thus, the war could be decided by a single battle. Victory was sealed by ransoming the fallen back to the defeated, a practice called the "Custom of the Greeks".
Individual hoplites carried their shields on their left arm, protecting themselves and the soldier to the left. This meant that the men at the extreme right of the phalanx were only half-protected. In battle, opposing phalanxes would exploit this weakness by attempting to overlap the enemy's right flank. It also meant that, in battle, a phalanx would tend to drift to the right (as hoplites sought to remain behind the shield of their neighbour). The most experienced hoplites were often placed on the right side of the phalanx, to counteract these problems. According to Plutarch's "Sayings of Spartans", "a man carried a shield for the sake of the whole line".
The phalanx is an example of a military formation in which single combat and other individualistic forms of battle were suppressed for the good of the whole. In earlier Homeric, dark age combat, the words and deeds of supremely powerful heroes turned the tide of battle. Instead of having individual heroes, hoplite warfare relied heavily on the community and unity of soldiers. With friends and family pushing on either side and enemies forming a solid wall of shields in front, the hoplite had little opportunity for feats of technique and weapon skill, but great need for commitment and mental toughness. By forming a human wall to provide a powerful defensive armour, the hoplites became much more effective while suffering fewer casualties. The hoplites had much discipline and were taught to be loyal and trustworthy. They had to trust their neighbours for mutual protection, so a phalanx was only as strong as its weakest elements. Its effectiveness depended on how well the hoplites could maintain this formation in combat, and how well they could stand their ground, especially when engaged against another phalanx. The more disciplined and courageous the army, the more likely it was to win. Often engagements between various city-states of Greece would be resolved by one side fleeing after their phalanx had broken formation.
Each hoplite provided his own equipment. Thus, only those who could afford such weaponry fought as hoplites. As with the Roman Republican army it was the middle classes who formed the bulk of the infantry. Equipment was not standardized, although there were doubtless trends in general designs over time, and between city-states. Hoplites had customized armour, the shield was decorated with family or clan emblems, although in later years these were replaced by symbols or monograms of the city states. The equipment might be passed down in families, as it was expensive to manufacture.
The hoplite army consisted of heavy infantrymen. Their armour, also called the panoply, was sometimes made of full bronze for those who could afford it, weighing nearly . More commonly, armour was made of layers of linen fabric glued together, called the linothorax; this was the most popular type of armour worn by the hoplites, since it was cost-effective, tougher and cheaper to produce than bronze, and provided decent protection. The linen was thick. The average farmer-peasant hoplite, who could not afford such equipment, typically wore no armour, carrying only a shield, a spear, and perhaps a helmet plus a secondary weapon. The richer upper-class hoplites typically had a bronze cuirass of either the bell or muscled variety, a bronze helmet with cheekplates, as well as greaves and other armour. The design of helmets varied through time. The Corinthian helmet was at first standardized and was a successful design. Later variants included the Chalcidian helmet, a lightened version of the Corinthian helmet, and the simple Pilos helmet worn by the later hoplites. The Thracian helmet had a large visor to further increase protection. The helmet was often decorated with one or more horsehair crests, and/or bronze animal horns and ears, and helmets were often painted as well.
By contrast with hoplites, other contemporary infantry (e.g., Persian) tended to wear relatively light armour, wicker shields, and were armed with shorter spears, javelins, and bows. The most famous are the Peltasts, light-armed troops who wore no armour and were armed with a light shield, javelins and a short sword. The Athenian general Iphicrates developed a new type of armour and arms for his mercenary army, which included light linen armour, smaller shields and longer spears, whilst arming his Peltasts with larger shields, helmets and a longer spear, thus enabling them to defend themselves more easily against hoplites. With this new type of army he defeated a Spartan army in 392 BC. The arms and armour described above were most common for hoplites.
Hoplites carried a large concave shield called an "aspis" (often referred to as a "hoplon"), measuring 80–100 centimetres (31–39 in) in diameter and weighing 6.5–8 kilograms (14–18 lb). This large shield was made possible partly by its shape, which allowed it to be supported on the shoulder. The shield was assembled in three layers: the centre layer was made of thick wood, the outside layer facing the enemy was made of bronze, and leather comprised the inside of the shield. The revolutionary part of the shield was the grip. Known as an Argive grip, it placed the handle at the edge of the shield, and was supported by a leather fastening (for the forearm) at the centre. These two points of contact eliminated the possibility of the shield swaying to the side after being struck, and as a result soldiers rarely lost their shields. This allowed the hoplite soldier more mobility with the shield, as well as the ability to capitalize on its offensive capabilities and better support the phalanx. The large shields, designed for pushing ahead, were the most essential equipment for the hoplites.
The main offensive weapon used was a spear, long and in diameter, called a "doru", or "dory". It was held with the right hand, with the left hand holding the hoplite's shield. Soldiers usually held their spears in an underhand position when approaching, but once they came into close contact with their opponents, they were held in an overhand position ready to strike. The spearhead was usually a curved leaf shape, while the rear of the spear had a spike called a "sauroter" ("lizard-killer") which was used to stand the spear in the ground (hence the name). It was also used as a secondary weapon if the main shaft snapped, or for the rear ranks to finish off fallen opponents as the phalanx advanced over them. In addition to serving as a secondary weapon, the sauroter balanced the spear, though not for throwing purposes. It is a matter of contention among historians whether the hoplite used the spear overarm or underarm. Held underarm, the thrusts would have been less powerful but under more control, and vice versa. It seems likely that both motions were used, depending on the situation. If attack was called for, an overarm motion was more likely to break through an opponent's defence; the upward thrust is more easily deflected by armour due to its lesser leverage. When defending, an underarm carry absorbed more shock and could be 'couched' under the shoulder for maximum stability. An overarm motion would allow more effective combination of the "aspis" and "doru" if the shield wall had broken down, while the underarm motion would be more effective when the shield had to be interlocked with those of one's neighbours in the battle-line. Hoplites in the rows behind the lead would almost certainly have made overarm thrusts. The rear ranks held their spears underarm, and raised their shields upwards at increasing angles. This was an effective defence against missiles, deflecting their force.
Hoplites also carried a sword, mostly a short sword called a "xiphos", but later also longer and heavier types. The short sword was a secondary weapon, used if or when their spears were broken or lost, or if the phalanx broke rank. The blade of the xiphos was typically short; those used by the Spartans were often only 30–45 centimetres long. This very short xiphos would be very advantageous in the press that occurred when two lines of hoplites met, capable of being thrust through gaps in the shieldwall into an enemy's unprotected groin or throat, while there was no room to swing a longer sword. Such a small weapon would be particularly useful after many hoplites had started to abandon body armour during the Peloponnesian War. Hoplites could alternatively carry the "kopis", a heavy knife with a forward-curving blade.
Dark Age warfare transitioned into hoplite warfare in the 8th century BC. Historians and researchers have long debated the reason for and speed of the transition. Three popular theories exist:
Developed by Anthony Snodgrass, the Gradualist Theory states that the hoplite style of battle developed in a series of steps as a result of innovations in armour and weaponry. Chronologically dating the archaeological findings of hoplite armour and using them to approximate the development of the phalanx formation, Snodgrass claims that the transition took approximately 100 years to complete, from 750 to 650 BC. The progression took time because, as the phalanx matured, it required denser formations that compelled the elite warriors to recruit ordinary Greek citizens. The large amounts of hoplite armour that then needed to be distributed among these citizens only increased the time required for the phalanx to be implemented. Snodgrass believes that only once the armour was in place did the phalanx formation become popular.
The Rapid Adaptation model was developed by historians Paul Cartledge and Victor Hanson. They believe that the phalanx was created by individual military forces but was so effective that others had to adapt their way of war immediately to combat the formation. Rapid Adaptationists propose that the double-grip hoplon shield required for the phalanx formation was so constricting in mobility that once it was introduced, the free-flowing warfare of the Dark Age was inadequate against the hoplites, only escalating the speed of the transition. The phalanx formation and hoplite armour quickly became widely used throughout Ancient Greece. Cartledge and Hanson estimate the transition took place from 725 to 675 BC.
Developed by Hans van Wees, the Extended Gradualist theory posits the most lengthy of the three popular transition timelines. Van Wees draws on iconography found on pots of the Dark Age, arguing that the foundation of the phalanx formation was born during this time. Specifically, he uses the example of the Chigi Vase to point out that hoplite soldiers carried normal spears as well as javelins on their backs, whereas mature hoplites did not carry long-range weapons such as javelins. The Chigi Vase is important for our knowledge of the hoplite soldier because it is one of the few, if not the only, representations of the hoplite formation, known as the phalanx, in Greek art. This led Van Wees to believe that there was a transitional period from the long-range warfare of the Dark Age to the close combat of hoplite warfare. Further evidence of a transitional period lies in the text of the Spartan poet Tyrtaios, who wrote, "…will they draw back for the pounding [of the missiles, no,] despite the battery of great hurl-stones, the helmets shall abide the rattle [of war unbowed]". At no point in his other texts does Tyrtaios discuss missiles or rocks, making another case for a transitional period in which hoplite warriors had some ranged capabilities. Extended Gradualists argue that hoplite warriors did not fight in a true phalanx until the 5th century BC, making estimates of the length of the transition reach as long as 300 years, from 750 to 450 BC.
The exact time when hoplite warfare was developed is uncertain, the prevalent theory being that it was established sometime during the 8th or 7th century BC, when the "heroic age was abandoned and a far more disciplined system introduced" and the Argive shield became popular. Peter Krentz argues that "the ideology of hoplitic warfare as a ritualized contest developed not in the 7th century [BC], but only after 480, when non-hoplite arms began to be excluded from the phalanx". Anagnostis Agelarakis, based on recent archaeo-anthropological discoveries of the earliest monumental polyandrion (communal burial of male warriors) at Paros Island in Greece, points to a date in the last quarter of the 8th century BC for a hoplitic phalangeal military organization.
The rise and fall of hoplite warfare was tied to the rise and fall of the city-state. As discussed above, hoplites were a solution to the armed clashes between independent city-states. As Greek civilization found itself confronted by the world at large, particularly the Persians, the emphasis in warfare shifted. Confronted by huge numbers of enemy troops, individual city-states could not realistically fight alone. During the Greco-Persian Wars (499–448 BC), alliances between groups of cities (whose composition varied over time) fought against the Persians. This drastically altered the scale of warfare and the numbers of troops involved. The hoplite phalanx proved itself far superior to the Persian infantry in conflicts such as the battles of Marathon, Thermopylae, and Plataea.
During this period, Athens and Sparta rose to a position of political eminence in Greece, and their rivalry in the aftermath of the Persian wars brought Greece into renewed internal conflict. The Peloponnesian War was fought on a scale unlike any conflict before it. Fought between leagues of cities dominated by Athens and Sparta respectively, the pooled manpower and financial resources allowed a diversification of warfare. Hoplite warfare was in decline: there were three major battles in the Peloponnesian War, and none proved decisive. Instead there was increased reliance on navies, skirmishers, mercenaries, city walls, siege engines, and non-set-piece tactics. These reforms made wars of attrition possible and greatly increased the number of casualties. In the Persian war, hoplites faced large numbers of skirmishers and missile-armed troops, and such troops (e.g., peltasts) became much more commonly used by the Greeks during the Peloponnesian War. As a result, hoplites began wearing less armour, carrying shorter swords, and in general adapting for greater mobility. This led to the development of the ekdromos light hoplite.
Many famous personalities, philosophers, artists, and poets fought as hoplites.
According to Nefiodkin, fighting against Greek heavy infantry during the Greco-Persian Wars inspired the Persians to introduce scythed chariots.
Sparta is one of the most famous city-states, along with Athens, and held a unique position in ancient Greece. Contrary to other city-states, the free citizens of Sparta served as hoplites their entire lives, training and exercising in peacetime, which gave Sparta a professional standing army. The army was often small, numbering around 6,000 at its peak and no more than 1,000 soldiers at its lowest point, and was divided into six "mora", or battalions; it was feared for its discipline and ferocity. Military service was the primary duty of Spartan men, and Spartan society was organized around its army.
Military service for hoplites lasted until the age of 40, and sometimes until 60 years of age, depending on a man's physical ability to perform on the battlefield.
Later in the hoplite era, more sophisticated tactics were developed, in particular by the Theban general Epaminondas. These tactics inspired the future king Philip II of Macedon, who was at the time a hostage in Thebes, and led to the development of a new type of infantry, the Macedonian phalanx. After the Macedonian conquests of the 4th century BC, the hoplite was slowly abandoned in favour of the phalangite, armed in the Macedonian fashion, in the armies of the southern Greek states. Although clearly a development of the hoplite, the Macedonian phalanx was tactically more versatile, especially in the combined arms tactics favoured by the Macedonians. These forces defeated the last major hoplite army at the Battle of Chaeronea (338 BC), after which Athens and its allies joined the Macedonian empire.
While Alexander's army mainly fielded "Pezhetairoi" (= Foot Companions) as his main force, his army also included some classic hoplites, either provided by the League of Corinth or from hired mercenaries. Beside these units, the Macedonians also used the so-called "Hypaspists", an elite force of units possibly originally fighting as hoplites and used to guard the exposed right wing of Alexander's phalanx.
Hoplite-style warfare influenced several other nations in the Mediterranean. It was the dominant fighting style on the Italian Peninsula until the early 3rd century BC, employed by both the Etruscans and the early Roman army. The Romans later changed their fighting style to the more flexible maniple organization, which was more versatile on rough terrain like that of Samnium. Roman equipment also changed: soldiers were re-equipped with longer oval shields ("scutum"), swords and heavy javelins ("pilum"). In the end only the "triarii" kept a long spear ("hasta") as their main weapon, and they would still fight in a traditional phalanx formation. Though the Italian tribes, namely the "socii" fighting with the Romans, later adopted the new Roman fighting style, some continued to fight as hoplites. Locally levied troops and mercenaries serving under Pyrrhus of Epirus or Hannibal (namely Etruscans) were equipped and fought as hoplites.
Early in its history, Ancient Carthage also equipped its troops as Greek hoplites, in units such as the Sacred Band of Carthage. Many Greek hoplite mercenaries fought in foreign armies, such as those of Carthage and the Achaemenid Empire, where it is believed by some that they inspired the formation of the Cardaces. Some hoplites served under the Illyrian king Bardylis in the 4th century BC. The Illyrians were known to import many weapons and tactics from the Greeks.
The Diadochi imported the Greek phalanx to their kingdoms. Though they mostly fielded Greek citizens or mercenaries, they also armed and drilled local natives as hoplites or, rather, in the Macedonian phalanx, like the "Machimoi" of the Ptolemaic army.
The Greek armies of the Hellenistic period mostly fielded troops in the fashion of the Macedonian phalanx, though many armies of mainland Greece retained hoplite warfare. Besides classical hoplites, Hellenistic nations began to field two new types of hoplites, the "Thureophoroi" and the "Thorakitai". They developed when the Greeks adopted the Celtic "thureos" shield, of an oval shape similar to the shields of the Romans, but flatter. The Thureophoroi were armed with a long thrusting spear, a short sword and, if needed, javelins. The Thorakitai were similar to the Thureophoroi but more heavily armoured, as their name implies, usually wearing a mail shirt. These troops were used as a link between the light infantry and the phalanx, a form of medium infantry to bridge the gaps.
History of Spain
The history of Spain dates back to antiquity. In 1516, Habsburg Spain unified a number of disparate predecessor kingdoms; its modern form of a constitutional monarchy was introduced in 1813, and the current democratic constitution dates to 1978.
After the completion of the Reconquista, the Crown of Castile began to explore across the Atlantic Ocean in 1492, expanding into the New World and marking the beginning of the Golden Age under the Spanish Empire. The kingdoms of Spain were brought under Habsburg rule in 1516, which united the Crown of Castile, the Crown of Aragon and smaller kingdoms under the same ruler. Until the 1650s, Habsburg Spain was among the most powerful states in the world.
During this period, Spain was involved in all major European wars, including the Italian Wars, the Eighty Years' War, the Thirty Years' War, and the Franco-Spanish War. In the later 17th century, however, Spanish power began to decline, and after the death of the last Habsburg ruler, the War of the Spanish Succession ended with the relegation of Spain, now under Bourbon rule, to the status of a second-rate power with a reduced influence in European affairs. The so-called Bourbon Reforms attempted the renewal of state institutions, with some success, but as the century ended, instability set in with the French Revolution and the Peninsular War, so that Spain never regained its former strength.
Spain after 1814 was destabilised as different political parties representing "liberal", "reactionary", and "moderate" groups throughout the remainder of the century fought for and won short-lived control without any being sufficiently strong to bring about lasting stability. The former Spanish Empire overseas quickly disintegrated with the Latin American wars of independence. Only Cuba and the Philippines and some small islands were left; they revolted and the United States acquired ownership (or control, in the case of Cuba) after the Spanish–American War of 1898.
A tenuous balance between liberal and conservative forces was struck in the establishment of a constitutional monarchy during the Restoration period but brought no lasting solution, and ultimately the last governments of the monarchy changed into a dictatorial rule. Opposing the trend toward authoritarianism of regime changes during the interwar period in Europe, a democratic republic was proclaimed in Spain in 1931. However, six years later the country descended into the Spanish Civil War between the Republican and the Nationalist factions.
The rebel victory in the conflict installed a dictatorship led by Francisco Franco that lasted until 1975. The first post-war decade was particularly violent, autocratic, and repressive in political, cultural, social, and economic terms. The country experienced rapid economic growth in the 1960s and early 1970s.
Only with the death of Franco in 1975 did Spain return to the monarchy, this time headed by Juan Carlos I, and to democracy. With a new constitution approved in 1978, Spain entered the European Economic Community in 1986 (transformed into the European Union with the Maastricht Treaty of 1992), and the Eurozone in 1999. The financial crisis of 2007–08 ended a decade of economic boom; Spain entered a recession and debt crisis and remains plagued by very high unemployment and a weak economy.
The Iberian Peninsula was first inhabited by anatomically modern humans about 32,000 years BP.
The earliest record of hominids living in Western Europe has been found in the Spanish cave of Atapuerca; a flint tool found there dates from 1.4 million years ago, and early human fossils date to roughly 1.2 million years ago. Modern humans in the form of Cro-Magnons began arriving in the Iberian Peninsula from north of the Pyrenees some 35,000 years ago. The most conspicuous sign of prehistoric human settlements are the famous paintings in the northern Spanish cave of Altamira, which were done c. 15,000 BC and are regarded as paramount instances of cave art.
Furthermore, archeological evidence in places like Los Millares and El Argar, both in the province of Almería, and La Almoloya near Murcia suggests developed cultures existed in the eastern part of the Iberian Peninsula during the late Neolithic and the Bronze Age.
Around 2500 BC, the nomadic shepherds known as the Yamna or Pit Grave culture conquered the peninsula using new technologies and horses, largely replacing the local male lineages according to DNA studies.
Spanish prehistory extends to the pre-Roman Iron Age cultures that controlled most of Iberia: those of the Iberians, Celtiberians, Tartessians, Lusitanians, and Vascones and trading settlements of Phoenicians, Carthaginians, and Greeks on the Mediterranean coast.
Before the Roman conquest the major cultures along the Mediterranean coast were the Iberians, the Celts in the interior and north-west, the Lusitanians in the west, and the Tartessians in the southwest. The seafaring Phoenicians, Carthaginians, and Greeks successively established trading settlements along the eastern and southern coast. The first Greek colonies, such as Emporion (modern Ampurias), were founded along the northeast coast in the 6th century BC, leaving the south coast to the Phoenicians.
The Greeks are responsible for the name "Iberia", apparently after the river Iber (Ebro). In the 6th century BC, the Carthaginians arrived in Iberia, struggling first with the Greeks, and shortly after, with the newly arriving Romans for control of the Western Mediterranean. Their most important colony was Carthago Nova (Latin name of modern-day Cartagena).
The peoples whom the Romans met at the time of their invasion in what is now known as Spain were the Iberians, inhabiting an area stretching from the northeast part of the Iberian Peninsula through the southeast. The Celts mostly inhabited the inner and north-west part of the peninsula. In the inner part of the peninsula, where both groups were in contact, a mixed culture arose, the Celtiberians. The Celtiberian Wars were fought between the advancing legions of the Roman Republic and the Celtiberian tribes of Hispania Citerior from 181 to 133 BC. The Roman conquest of the peninsula was completed in 19 BC.
"Hispania" was the name used for the Iberian Peninsula under Roman rule from the 2nd century BC. The populations of the peninsula were gradually culturally Romanized, and local leaders were admitted into the Roman aristocratic class.
The Romans improved existing cities, such as Tarragona ("Tarraco"), and established others like Zaragoza ("Caesaraugusta"), Mérida ("Augusta Emerita"), Valencia ("Valentia"), León ("Legio Septima"), Badajoz ("Pax Augusta"), and Palencia. The peninsula's economy expanded under Roman tutelage. Hispania supplied Rome with food, olive oil, wine and metal. The emperors Trajan, Hadrian, and Theodosius I, the philosopher Seneca, and the poets Martial, Quintilian, and Lucan were born in Hispania. Hispanic bishops held the Council of Elvira around 306.
After the fall of the Western Roman Empire in the 5th century, parts of Hispania came under the control of the Germanic tribes of Vandals, Suebi, and Visigoths.
The collapse of the Western Roman Empire did not lead to the same wholesale destruction of Western classical society as happened in areas like Roman Britain, Gaul and Germania Inferior during the Early Middle Ages, although the institutions and infrastructure did decline. Spain's present languages, its religion, and the basis of its laws originate from this period. The centuries of uninterrupted Roman rule and settlement left a deep and enduring imprint upon the culture of Spain.
The first Germanic tribes to invade Hispania arrived in the 5th century, as the Roman Empire decayed. The Visigoths, Suebi, Vandals and Alans arrived in Spain by crossing the Pyrenees mountain range, leading to the establishment of the Suebi Kingdom in Gallaecia, in the northwest, the Vandal Kingdom of Vandalusia (Andalusia), and the Visigothic Kingdom in Toledo. The Romanized Visigoths entered Hispania in 415. After the conversion of their monarchy to Roman Catholicism and after conquering the disordered Suebic territories in the northwest and Byzantine territories in the southeast, the Visigothic Kingdom eventually encompassed a great part of the Iberian Peninsula.
As the Roman Empire declined, Germanic tribes invaded the former empire. Some were "foederati", tribes enlisted to serve in Roman armies, and given land within the empire as payment, while others, such as the Vandals, took advantage of the empire's weakening defenses to seek plunder within its borders. Those tribes that survived took over existing Roman institutions, and created successor-kingdoms to the Romans in various parts of Europe. Iberia was taken over by the Visigoths after 410.
At the same time, there was a process of "Romanization" of the Germanic and Hunnic tribes settled on both sides of the "limes" (the fortified frontier of the Empire along the Rhine and Danube rivers). The Visigoths, for example, were converted to Arian Christianity around 360, even before they were pushed into imperial territory by the expansion of the Huns.
In the winter of 406, taking advantage of the frozen Rhine, the (Germanic) Vandals and Sueves and the (Sarmatian) Alans, fleeing the advancing Huns, invaded the empire in force. Three years later they crossed the Pyrenees into Iberia and divided its western parts, roughly corresponding to modern Portugal and western Spain as far as Madrid, between them.
The Visigoths, having sacked Rome two years earlier, arrived in the region in 412, founding the Visigothic kingdom of Toulouse (in the south of modern France) and gradually expanded their influence into the Iberian peninsula at the expense of the Vandals and Alans, who moved on into North Africa without leaving much permanent mark on Hispanic culture. The Visigothic Kingdom shifted its capital to Toledo and reached a high point during the reign of Leovigild.
The Visigothic Kingdom conquered all of Hispania and ruled it until the early 8th century, when the peninsula fell to the Muslim conquests. The Muslim state in Iberia came to be known as Al-Andalus. After a period of Muslim dominance, the medieval history of Spain is dominated by the long Christian "Reconquista" or "reconquest" of the Iberian Peninsula from Muslim rule. The Reconquista gathered momentum during the 12th century, leading to the establishment of the Christian kingdoms of Portugal, Aragon, Castile and Navarre and by 1250, had reduced Muslim control to the Emirate of Granada in the south-east of the peninsula. Muslim rule in Granada survived until 1492, when it fell to the Catholic Monarchs.
Importantly, Spain never saw a decline in interest in classical culture to the degree observable in Britain, Gaul, Lombardy and Germany. The Visigoths, having assimilated Roman culture during their tenure as "foederati", tended to maintain more of the old Roman institutions, and they had a unique respect for legal codes that resulted in continuous frameworks and historical records for most of the period between 415, when Visigothic rule in Spain began, and 711, when it is traditionally said to end. However, the cultural strides made by the Franks and other Germanic tribes were not matched in the peninsula during the Visigothic dominion, nor in the lesser kingdoms that emerged after the Muslim conquest.
The proximity of the Visigothic kingdoms to the Mediterranean and the continuity of western Mediterranean trade, though in reduced quantity, supported Visigothic culture. Arian Visigothic nobility kept apart from the local Catholic population. The Visigothic ruling class looked to Constantinople for style and technology while the rivals of Visigothic power and culture were the Catholic bishops – and a brief incursion of Byzantine power in Córdoba.
Spanish Catholic religion also coalesced during this time. The period of rule by the Visigothic Kingdom saw the spread of Arianism briefly in Spain. The Councils of Toledo debated creed and liturgy in orthodox Catholicism, and the Council of Lerida in 546 constrained the clergy and extended the power of law over them under the blessings of Rome. In 587, the Visigothic king at Toledo, Reccared, converted to Catholicism and launched a movement in Spain to unify the various religious doctrines that existed in the land. This put an end to dissension on the question of Arianism. (For additional information about this period, see the History of Roman Catholicism in Spain.)
The Visigoths inherited from Late Antiquity a sort of feudal system in Spain, based in the south on the Roman villa system and in the north drawing on their vassals to supply troops in exchange for protection. The bulk of the Visigothic army was composed of slaves, raised from the countryside. The loose council of nobles that advised Spain's Visigothic kings and legitimized their rule was responsible for raising the army, and only upon its consent was the king able to summon soldiers.
The impact of Visigothic rule was not widely felt on society at large, and certainly not compared to the vast bureaucracy of the Roman Empire; they tended to rule as barbarians of a mild sort, uninterested in the events of the nation and economy, working for personal benefit, and little literature remains to us from the period. They did not, until the period of Muslim rule, merge with the Spanish population, preferring to remain separate, and indeed the Visigothic language left only the faintest mark on the modern languages of Iberia.
The most visible effect was the depopulation of the cities as inhabitants moved to the countryside. Even while the country enjoyed a degree of prosperity compared to the famines of France and Germany in this period, the Visigoths felt little reason to contribute to the welfare, permanency, and infrastructure of their people and state. This contributed to their downfall, as they could not count on the loyalty of their subjects when the Moors arrived in the 8th century.
The Arab Islamic conquest dominated most of North Africa by 710 AD. In 711 an Islamic Berber conquering party, led by Tariq ibn Ziyad, was sent to Iberia to intervene in a civil war in the Visigothic Kingdom. Tariq's army contained about 7,000 Berber horsemen, and Musa bin Nusayr is said to have sent an additional 5,000 reinforcements after the conquest. Crossing the Strait of Gibraltar, they won a decisive victory in the summer of 711 when the Visigothic King Roderic was defeated and killed on July 19 at the Battle of Guadalete. Tariq's commander, Musa, quickly crossed with Arab reinforcements, and by 718 the Muslims were in control of nearly the whole Iberian Peninsula. The advance into Western Europe was only stopped in what is now north-central France by the West Germanic Franks under Charles Martel at the Battle of Tours in 732.
A decisive victory for the Christians took place at Covadonga, in the north of the Iberian Peninsula, in the summer of 722. In a minor battle known as the Battle of Covadonga, a Muslim force sent to put down the Christian rebels in the northern mountains was defeated by Pelagius of Asturias, who established the monarchy of the Christian Kingdom of Asturias. In 739, a rebellion in Galicia, assisted by the Asturians, drove out Muslim forces and it joined the Asturian kingdom. The Kingdom of Asturias became the main base for Christian resistance to Islamic rule in the Iberian Peninsula for several centuries.
Caliph Al-Walid I had paid great attention to the expansion of an organized military, building the strongest navy of the Umayyad Caliphate era (the second major Arab dynasty after Mohammad and the first Arab dynasty of Al-Andalus). It was this strength that supported the ultimate expansion into Spain. Caliph Al-Walid I's reign is considered the apex of Islamic power, though Islamic power in Spain specifically climaxed in the 10th century under Abd-ar-Rahman III. The rulers of Al-Andalus were granted the rank of Emir by the Umayyad Caliph Al-Walid I in Damascus. When the Abbasids overthrew the Umayyad Caliphate, Abd al-Rahman I managed to escape to al-Andalus and, once there, declared it independent. It is not clear whether Abd al-Rahman considered himself a rival caliph, perpetuating the Umayyad Caliphate, or merely an independent Emir. The state he founded is known as the Emirate of Cordoba. Al-Andalus was rife with internal conflict between the Islamic Umayyad rulers and people and the Christian Visigoth-Roman leaders and people.
The Vikings invaded Galicia in 844, but were heavily defeated by Ramiro I at A Coruña. Many of the Vikings' casualties were caused by the Galicians' ballistas – powerful torsion-powered projectile weapons that looked rather like giant crossbows. Seventy Viking ships were captured and burned. The Vikings returned to Galicia in 859, during the reign of Ordoño I. Ordoño was at the time engaged against his constant enemies the Moors, but a count of the province, Don Pedro, attacked the Vikings and defeated them, destroying 38 of their ships.
In the 10th century Abd-ar-Rahman III declared the Caliphate of Córdoba, effectively breaking all ties with the Egyptian and Syrian caliphs. The Caliphate was mostly concerned with maintaining its power base in North Africa, but these possessions eventually dwindled to the Ceuta province. The first navy of the Emir of Córdoba was built after the humiliating Viking ascent of the Guadalquivir in 844 when they sacked Seville.
In 942, pagan Magyars raided northern Spain. Meanwhile, a slow but steady migration of Christian subjects to the northern kingdoms in Christian Hispania was slowly increasing the latter's power. Even so, Al-Andalus remained vastly superior to all the northern kingdoms combined in population, economy and military might; and internal conflict between the Christian kingdoms contributed to keep them relatively harmless.
Al-Andalus coincided with "La Convivencia", an era of relative religious tolerance, and with the Golden age of Jewish culture in the Iberian Peninsula. (See: Emir Abd-ar-Rahman III 912; the Granada massacre 1066).
Muslim interest in the peninsula returned in force in the late 10th century, when Al-Mansur (also known as "Almanzor") sacked Barcelona in 985. Under his son, other Christian cities were subjected to numerous raids. After his son's death, the caliphate plunged into civil war and splintered into the so-called "Taifa kingdoms". The Taifa kings competed against each other not only in war but also in the protection of the arts, and culture enjoyed a brief upswing.
The Almohads, who had taken control of the Almoravids' Maghribi and al-Andalus territories by 1147, surpassed the Almoravids in fundamentalist Islamic outlook, and they treated the non-believer "dhimmis" harshly. Faced with the choice of death, conversion, or emigration, many Jews and Christians left.
By the mid-13th century, the Emirate of Granada was the only independent Muslim realm in Spain, which survived until 1492 by becoming a vassal state to Castile, to which it paid tribute.
The Kings of Aragón ruled territories that consisted of not only the present administrative region of Aragon but also Catalonia, and later the Balearic Islands, Valencia, Sicily, Naples and Sardinia (see Crown of Aragon). Considered by most to have been the first mercenary company in Western Europe, the Catalan Company proceeded to occupy the Frankish Duchy of Athens, which they placed under the protection of a prince of the House of Aragon and ruled until 1379.
Medieval Spain was the scene of almost constant warfare between Muslims and Christians.
The Taifa kingdoms lost ground to the Christian realms in the north. After the loss of Toledo in 1085, the Muslim rulers reluctantly invited the Almoravids, who invaded Al-Andalus from North Africa and established an empire. In the 12th century the Almoravid empire broke up again, only to be taken over by the invading Almohads, who were defeated by an alliance of the Christian kingdoms at the decisive Battle of Las Navas de Tolosa in 1212. By 1250, nearly all of Iberia was back under Christian rule, with the exception of the Muslim kingdom of Granada.
In the 13th century, many languages were spoken in the Christian kingdoms of Iberia. These were the Latin-based Romance languages of Castilian, Aragonese, Catalan, Galician, Aranese, Asturian, Leonese, and Portuguese, and the ancient language isolate of Basque. Throughout the century, Castilian (what is also known today as Spanish) gained a growing prominence in the Kingdom of Castile as the language of culture and communication, at the expense of Leonese and of other close dialects.
One example of this is the oldest preserved Castilian epic poem, "Cantar de Mio Cid", written about the military leader "El Cid". In the last years of the reign of Ferdinand III of Castile, Castilian began to be used for certain types of documents, and it was during the reign of Alfonso X that it became the official language. Henceforth all public documents were written in Castilian; likewise all translations were made into Castilian instead of Latin.
At the same time, Catalan and Galician became the standard languages in their respective territories, developing important literary traditions and being the normal languages in which public and private documents were issued: Galician from the 13th to the 16th century in Galicia and nearby regions of Asturias and Leon, and Catalan from the 12th to the 18th century in Catalonia, the Balearic Islands and Valencia, where it was known as Valencian. Castilian Spanish later displaced both languages in their official status, a situation that lasted until the 20th century.
In the 13th century many universities were founded in León and in Castile. Some, such as the Leonese Salamanca and the Castilian Palencia, were among the earliest universities in Europe.
In 1492, under the Catholic Monarchs, the first edition of the "Grammar of the Castilian Language" by Antonio de Nebrija was published.
In the 15th century, the most important among all of the separate Christian kingdoms that made up the old Hispania were the Kingdom of Castile (occupying northern and central portions of the Iberian Peninsula), the Kingdom of Aragon (occupying northeastern portions of the peninsula), and the Kingdom of Portugal occupying the far western Iberian Peninsula. The rulers of the kingdoms of Castile and Aragon were allied with dynastic families in Portugal, France, and other neighboring kingdoms.
The death of King Henry IV of Castile in 1474 set off a struggle for power called the War of the Castilian Succession (1475–1479). Contenders for the throne of Castile were Henry's one-time heir Joanna la Beltraneja, supported by Portugal and France, and Henry's half-sister Queen Isabella I of Castile, supported by the Kingdom of Aragon and by the Castilian nobility.
Isabella retained the throne and ruled jointly with her husband, King Ferdinand II. Isabella and Ferdinand had married in 1469 in Valladolid. Their marriage united both crowns and set the stage for the creation of the Kingdom of Spain, at the dawn of the modern era. That union, however, was a union in title only, as each region retained its own political and judicial structure. Pursuant to an agreement signed by Isabella and Ferdinand on January 15, 1474, Isabella held more authority over the newly unified Spain than her husband, although their rule was shared. Together, Isabella of Castile and Ferdinand of Aragon were known as the "Catholic Monarchs" (), a title bestowed on them by Pope Alexander VI.
The monarchs oversaw the final stages of the Reconquista of Iberian territory from the Moors with the conquest of Granada, conquered the Canary Islands, and expelled the Jews from Spain under the Alhambra Decree.
Although until the 13th century religious minorities (Jews and Muslims) had enjoyed considerable tolerance in Castile and Aragon – the only Christian kingdoms where Jews were not restricted from any professional occupation – the situation of the Jews collapsed over the 14th century, reaching a climax in 1391 with large scale massacres in every major city except Ávila.
Over the next century, half of the estimated 80,000 Spanish Jews converted to Christianity (becoming "conversos"). The final step was taken by the Catholic Monarchs, who, in 1492, ordered the remaining Jews to convert or face expulsion from Spain. The number of Jews actually expelled, traditionally estimated at 120,000 people, is now believed by some sources to have been about 40,000. More expulsions followed as Spanish monarchs extended the decrees to their territories on the Italian peninsula, including Sicily (1493), Naples (1542), and Milan (1597).
Over the following decades, Muslims faced the same fate; and about 60 years after the Jews, they were also compelled to convert ("Moriscos") or be expelled. In the early 17th century, the converts were also expelled. Jews and Muslims were not the only people to be persecuted during this time period. All Roma (Gitano, Gypsy) males between the ages of 18 and 26 were forced to serve in galleys – which was equivalent to a death sentence – but the majority managed to hide and avoid arrest.
Isabella ensured long-term political stability in Spain by arranging strategic marriages for each of her five children. Her firstborn, a daughter named Isabella, married Afonso of Portugal, forging important ties between the two neighboring countries in the hope of securing a future alliance, but Isabella died without leaving a surviving heir. Juana, Isabella's second daughter, married into the Habsburg dynasty when she wed Philip the Fair, the son of Maximilian I of Austria, likely heir to the crown of the Holy Roman Empire.
This ensured an alliance with the Habsburgs and the Holy Roman Empire, a powerful, far-reaching territory that assured Spain's future political security. Isabella's only son, Juan, married Margaret of Austria, further strengthening ties with the Habsburg dynasty. Isabella's fourth child, Maria, married Manuel I of Portugal, strengthening the link forged by her older sister's marriage. Her fifth child, Catherine, married King Henry VIII of England and was mother to Queen Mary I of England.
The Castilian conquest of the Canary Islands, inhabited by the Guanche people, took place between 1402 (with the conquest of Lanzarote) and 1496 (with the conquest of Tenerife). Two periods can be distinguished in this process: the noble conquest, carried out by the nobility in exchange for a pact of vassalage, and the royal conquest, carried out directly by the Crown during the reign of the Catholic Monarchs. By 1520, European military technology, the devastating epidemics such as bubonic plague and pneumonia brought by the Castilians, and the enslavement and deportation of natives had led to the extinction of the Guanches.
Isabella and Ferdinand authorized the 1492 expedition of Christopher Columbus, who became the first known European to reach the New World since Leif Ericson. This and subsequent expeditions led to an influx of wealth into Spain, supplementing income from within Castile for the state that would prove to be a dominant power of Europe for the next two centuries.
Spain established colonies in North Africa that ranged from the Atlantic Moroccan coast to Tripoli in Libya. Melilla was occupied in 1497, Oran in 1509, Larache in 1610, and Ceuta was annexed from the Portuguese in 1668. Today, both Ceuta and Melilla still remain under Spanish control, together with smaller islets known as the "presidios menores" (Peñón de Vélez de la Gomera, las Islas de Alhucemas, las Islas de Chafarinas).
The Spanish Empire was the first global empire. It was also one of the largest empires in world history. In the 16th century, Spain and Portugal were in the vanguard of European global exploration and colonial expansion, and the two kingdoms of the Iberian Peninsula competed with each other in conquest and in opening trade routes across the oceans. Spanish imperial conquest and colonization began with the Castilian conquest of the Canary Islands, begun in 1402 and completed in 1496.
In the 15th and 16th centuries, trade flourished across the Atlantic between Spain and the Americas and across the Pacific between East Asia and Mexico via the Philippines. Spanish Conquistadors, operating privately, deposed the Aztec, Inca and Maya governments with extensive help from local factions and took control of vast stretches of land.
This New World empire was at first a disappointment, as the natives had little to trade. Diseases such as smallpox and measles that arrived with the colonizers devastated the native populations, especially in the densely populated regions of the Aztec, Maya and Inca civilizations, and this reduced the economic potential of conquered areas. Estimates of the pre-Columbian population of the Americas vary but possibly stood at 100 million—one fifth of humanity in 1492. Between 1500 and 1600 the population of the Americas was halved. In Mexico alone, it has been estimated that the pre-conquest population of around 25 million people was reduced within 80 years to about 1.3 million.
In the 1520s, large-scale extraction of silver from the rich deposits of Mexico's Guanajuato began to be greatly augmented by the silver mines in Mexico's Zacatecas and Bolivia's Potosí from 1546. These silver shipments re-oriented the Spanish economy, leading to the importation of luxuries and grain. The resource-rich colonies of Spain thus caused large cash inflows for the country. They also became indispensable in financing the military capability of Habsburg Spain in its long series of European and North African wars, though, with the exception of a few years in the 17th century, taxes in Castile were the most important source of revenue.
Spain enjoyed a cultural golden age in the 16th and 17th centuries. For a time, the Spanish Empire dominated the oceans with its experienced navy and ruled the European battlefield with its fearsome and well trained infantry, the famous "tercios".
The financial burden within the peninsula was on the backs of the peasant class while the nobility enjoyed an increasingly lavish lifestyle. From the incorporation of the Portuguese Empire in 1580 (lost in 1640) until the loss of its American colonies in the 19th century, Spain maintained one of the largest empires in the world even though it suffered military and economic misfortunes from the 1640s.
Religion played a very strong role in the spread of the Spanish empire. The thought that Spain could bring Christianity to the New World and protect Catholicism in Europe certainly played a strong role in the expansion of Spain's empire.
Spain's world empire reached its greatest territorial extent in the late 18th century, but it was under the Habsburg dynasty in the 16th and 17th centuries that it reached the peak of its power and began its decline. The Iberian Union with Portugal meant that the monarch of Castile was also the monarch of Portugal, but the two countries were ruled as separate entities both on the peninsula and in Spanish America and Brazil. In 1640, the House of Braganza revolted against Spanish rule and reasserted Portugal's independence.
When Spain's first Habsburg ruler Charles I became king of Spain in 1516, Spain became central to the dynastic struggles of Europe. After he became king of Spain, Charles also became Charles V, Holy Roman Emperor, and because of his widely scattered domains he was not often in Spain.
In 1556 Charles abdicated from his positions, giving his Spanish empire to his only surviving son, Philip II of Spain, and the Holy Roman Empire to his brother, Ferdinand. Philip treated Castile as the foundation of his empire, but the population of Castile (about a third of France's) was never large enough to provide the soldiers needed to support the Empire. His marriage to Mary Tudor allied England with Spain.
In the 1560s, plans to consolidate control of the Netherlands led to unrest, which gradually brought about Calvinist leadership of the revolt and the Eighty Years' War. The Dutch armies waged a war of maneuver and siege, successfully avoiding set-piece battles. This conflict consumed much Spanish expenditure during the later 16th century. Other extremely expensive failures included an attempt to invade Protestant England in 1588 that produced the worst military disaster in Spanish history, when the Spanish Armada—costing 10 million ducats—was scattered by a freak storm. Over 8,000 English sailors died from diseases such as dysentery and typhus while the Spanish Armada was at sea.
Economic and administrative problems multiplied in Castile, and the weakness of the native economy became evident in the following century. Rising inflation, financially draining wars in Europe, the ongoing aftermath of the expulsion of the Jews and Moors from Spain, and Spain's growing dependency on silver imports combined to cause several bankruptcies and economic crisis in the country, especially in heavily burdened Castile. The great plague of 1596–1602 killed 600,000 to 700,000 people, or about 10% of the population. Altogether more than 1,250,000 deaths resulted from the extreme incidence of plague in 17th-century Spain. Economically, the plague destroyed the labor force and dealt a psychological blow to an already troubled Spain.
The Spanish Golden Age (in Spanish, "Siglo de Oro") was a period of flourishing arts and letters in the Spanish Empire (now Spain and the Spanish-speaking countries of Latin America), coinciding with the political decline and fall of the Habsburgs (Philip III, Philip IV and Charles II). Arts during the Golden Age flourished despite the decline of the empire in the 17th century. The last great writer of the age, Sor Juana Inés de la Cruz, died in New Spain in 1695.
The Habsburgs, both in Spain and Austria, were great patrons of art in their countries. "El Escorial", the great royal monastery built by King Philip II, invited the attention of some of Europe's greatest architects and painters. Diego Velázquez, regarded as one of the most influential painters of European history and a greatly respected artist in his own time, cultivated a relationship with King Philip IV and his chief minister, the Count-Duke of Olivares, leaving us several portraits that demonstrate his style and skill. El Greco, a respected Greek artist from the period, settled in Spain, and infused Spanish art with the styles of the Italian renaissance and helped create a uniquely Spanish style of painting.
Some of Spain's greatest music is regarded as having been written in the period. Such composers as Tomás Luis de Victoria, Luis de Milán and Alonso Lobo helped to shape Renaissance music and the styles of counterpoint and polychoral music, and their influence lasted far into the Baroque period.
Spanish literature blossomed as well, most famously demonstrated in the work of Miguel de Cervantes, the author of "Don Quixote de la Mancha". Spain's most prolific playwright, Lope de Vega, wrote possibly as many as one thousand plays over his lifetime, over four hundred of which survive to the present day.
Spain's severe financial difficulties began in the middle 16th century, and would continue for the remainder of Habsburg rule. Despite the successes of Spanish armies, at home the period was marked by monetary inflation, mercantilism, and a variety of government monopolies and interventions. Spanish kings were forced to declare sovereign defaults nine times between 1557 and 1666.
Philip II died in 1598, and was succeeded by his son Philip III. In his reign (1598–1621) a ten-year truce with the Dutch was overshadowed in 1618 by Spain's involvement in the European-wide Thirty Years' War. Government policy was dominated by favorites, but it was also the period in which the geniuses of Cervantes and El Greco flourished. Philip III was succeeded in 1621 by his son Philip IV of Spain (reigned 1621–65). Much of the policy was conducted by the Count-Duke of Olivares, the inept prime minister from 1621 to 1643. He over-exerted Spain in foreign affairs and unsuccessfully attempted domestic reform. His policy of committing Spain to recapture Holland led to a renewal of the Eighty Years' War while Spain was also embroiled in the Thirty Years' War (1618–1648). His attempts to centralise power and increase wartime taxation led to revolts in Catalonia and in Portugal, which brought about his downfall.
During the Thirty Years' War, in which various Protestant forces battled Imperial armies, France provided subsidies to Habsburg enemies, especially Sweden. After Sweden suffered defeat, France's First Minister, Cardinal Richelieu, declared war on Spain in 1635. The open war started with a promising victory for the French at Les Avins in 1635. The following year Spanish forces based in the Southern Netherlands hit back with devastating lightning campaigns in northern France that left French forces reeling and the economy of the region in tatters. After 1636, however, Olivares, fearful of provoking another disastrous bankruptcy, stopped the advance. In 1640, both Portugal and Catalonia rebelled. Portugal was lost to the crown for good; in northern Italy and most of Catalonia, French forces were expelled and Catalonia's independence was suppressed. In 1643, the French defeated one of Spain's best armies at Rocroi, in northern France.
The Spanish "Golden Age" politically ends no later than 1659, with the Treaty of the Pyrenees, ratified between France and Habsburg Spain.
During the long regency for Charles II, the last of the Spanish Habsburgs, favouritism milked Spain's treasury, and Spain's government operated principally as a dispenser of patronage. Plague, famine, floods, drought, and renewed war with France wasted the country. The Peace of the Pyrenees (1659) had ended fifty years of warfare with France, whose king, Louis XIV, found the temptation to exploit a weakened Spain too great. Louis instigated the War of Devolution (1667–68) to acquire the Spanish Netherlands.
By the 17th century, the Catholic Church and Spain had developed a close bond, reflected in the fact that Spain was virtually free of Protestantism during the 16th century. In 1620, there were 100,000 Spaniards in the clergy; by 1660 the number had grown to about 200,000, and the Church owned 20% of all the land in Spain. The Spanish bureaucracy in this period was highly centralized, and totally reliant on the king for its efficient functioning. Under Charles II, the councils became the sinecures of wealthy aristocrats despite various attempts at reform. Political commentators in Spain, known as arbitristas, proposed a number of measures to reverse the decline of the Spanish economy, with limited success. In rural areas of Spain, heavy taxation of peasants reduced agricultural output as peasants in the countryside migrated to the cities. The influx of silver from the Americas has been cited as the cause of inflation, although only one fifth of the precious metal, i.e. the "quinto real" (royal fifth), actually went to Spain. A prominent internal factor was the Spanish economy's dependence on the export of luxurious Merino wool, which had its markets in northern Europe reduced by war and growing competition from cheaper textiles.
The once proud Spanish army was falling far behind its foes. It did badly at Bergen op Zoom in 1622, and finance was not to blame. The Dutch won very easily at Hertogenbosch and Wesel in 1629. In 1632 the Dutch captured the strategic fortress town of Maastricht, repulsing three relief armies and dooming the Spanish to defeat.
While Spain built a rich American Empire that exported a silver treasure fleet every year, it was unable to focus its financial, military, and diplomatic power on building up its Spanish base. The Crown's dedication to destroying Protestantism through almost constant warfare created a cultural ethos among Spanish leaders that undermined the opportunity for economic modernization or industrialization. When Philip II died in 1598, his treasury spent most of its income on funding the huge deficit, which continued to grow. In peninsular Spain, the productive forces were undermined by steady inflation, heavy taxation, emigration of ambitious youth to the colonies, and by depopulation. Industry went into reverse – Seville in 1621 operated 400 looms, where it had 16,000 a century before. Religiosity led by saints and mystics, missionaries and crusaders, theologians and friars dominated Spanish culture, with the psychology of a reward in the next world, as Palmer and Colton argue.
Elliott cites the achievements of Castile in many areas, especially high culture.
The Habsburg dynasty became extinct in Spain with Charles II's death in 1700, and the War of the Spanish Succession ensued, in which the other European powers tried to assume control of the Spanish monarchy. King Louis XIV of France eventually lost the War of the Spanish Succession to the victors: Britain, the Dutch Republic and Austria. They allowed the crown of Spain to pass to the Bourbon dynasty, provided that Spain and France would never be merged.
Charles II, having no direct heir, was succeeded by his great-nephew Philippe d'Anjou, a French prince, in 1700. Concern among other European powers that Spain and France united under a single Bourbon monarch would upset the balance of power led to the War of the Spanish Succession between 1701 and 1714. It pitted powerful France and fairly strong Spain against the Grand Alliance of England, Portugal, Savoy, the Netherlands and Austria.
After many battles, especially in Spain, the Treaty of Utrecht recognised Philip, Duke of Anjou, Louis XIV's grandson, as King of Spain (as Philip V), thus confirming the succession stipulated in the will of Charles II of Spain. However, Philip was compelled to renounce for himself and his descendants any right to the French throne, despite some doubts as to the lawfulness of such an act. Spain's Italian territories were apportioned.
Philip V signed the "Decreto de Nueva Planta" in 1715. This new law revoked most of the historical rights and privileges of the different kingdoms that formed the Spanish Crown, especially the Crown of Aragon, unifying them under the laws of Castile, where the Castilian Cortes Generales had been more receptive to the royal wish. Spain became culturally and politically a follower of absolutist France. Lynch says Philip V advanced the government only marginally over that of his predecessors and was more of a liability than the incapacitated Charles II; when a conflict came up between the interests of Spain and France, he usually favored France.
Philip made reforms in government, and strengthened the central authorities relative to the provinces. Merit became more important, although most senior positions still went to the landed aristocracy. Below the elite level, inefficiency and corruption were as widespread as ever.
The reforms started by Philip V culminated in much more important reforms of Charles III. The historian Jonathan Israel, however, argues that King Charles III cared little for the Enlightenment and his ministers paid little attention to the Enlightenment ideas influential elsewhere on the Continent. Israel says, "Only a few ministers and officials were seriously committed to enlightened aims. Most were first and foremost absolutists and their objective was always to reinforce monarchy, empire, aristocracy...and ecclesiastical control and authority over education."
The economy, on the whole, improved over the depressed 1650–1700 era, with greater productivity and fewer famines and epidemics.
Elisabeth of Parma, Philip V's wife, exerted great influence on Spain's foreign policy. Her principal aim was to have Spain's lost territories in Italy restored. In 1717, Philip V ordered an invasion of Sardinia, which had been given to Austria by the Treaty of Utrecht. Spanish troops then invaded Sicily. The aggression prompted the Holy Roman Empire to form a new pact with the members of the Triple Alliance, resulting in the Quadruple Alliance of 1718. All members demanded Spanish retreat from Sardinia and Sicily, resulting in war by December 1718. The war lasted two years and resulted in a rout of the Spanish. Hostilities ceased with the Treaty of The Hague in February 1720. In this settlement, Philip V abandoned all claims on Italy. Later, however, Spain reconquered Naples and Sicily during the War of the Polish Succession (1733–35). In 1748, after the War of the Austrian Succession (1740–48), Spain obtained the duchies of Parma, Piacenza and Guastalla in northern Italy.
The rule of the Spanish Bourbons continued under Ferdinand VI (1746–59) and Charles III (1759–88). Under the rule of Charles III and his ministers – Leopoldo de Gregorio, Marquis of Esquilache and José Moñino, Count of Floridablanca – the economy improved. Fearing that Britain's victory over France in the Seven Years' War (1756–63) threatened the European balance of power, Spain allied itself to France and invaded Portugal, a British ally, but suffered a series of military defeats and ended up having to cede Florida to the British at the Treaty of Paris (1763) while gaining Louisiana from France. Spain regained Florida with the Treaty of Paris (1783), which ended the American Revolutionary War (1775–83), and gained an improved international standing.
However, there were no reforming impulses in the reign of Charles IV (1788 until his abdication in 1808), who was seen by some as mentally handicapped. Dominated by his wife's lover, Manuel de Godoy, Charles IV embarked on policies that overturned much of Charles III's reforms. After briefly opposing Revolutionary France early in the French Revolutionary Wars, Spain was cajoled into an uneasy alliance with its northern neighbor, only to be blockaded by the British. Charles IV's vacillation, culminating in his failure to honour the alliance by neglecting to enforce the Continental System, led to the invasion of Spain in 1808 under Napoleon I, Emperor of the French, thereby triggering the Peninsular War, with enormous human and property losses, and the loss of control over most of the overseas empire.
During most of the 18th century Spain had arrested its relative decline of the latter part of the 17th century. But despite the progress, it continued to lag in the political and mercantile developments then transforming other parts of Europe, most notably in Great Britain, the Low Countries, and France. The chaos unleashed by the Peninsular War caused this gap to widen greatly and Spain would not have an Industrial Revolution.
The Age of Enlightenment reached Spain in attenuated form about 1750. Attention focused on medicine and physics, with some philosophy. French and Italian visitors were influential but there was little challenge to Catholicism or the Church such as characterized the French philosophes. The leading Spanish figure was Benito Feijóo (1676–1764), a Benedictine monk and professor. He was a successful popularizer noted for encouraging scientific and empirical thought in an effort to debunk myths and superstitions. By the 1770s the conservatives had launched a counterattack and used censorship and the Inquisition to suppress Enlightenment ideas.
At the top of the social structure of Spain in the 1780s stood the nobility and the church. A few hundred families dominated the aristocracy, with another 500,000 holding noble status. There were 200,000 churchmen and women, half of them in heavily endowed monasteries that controlled much of the land not owned by the nobles. Most people were on farms, either as landless peons or as holders of small properties. The small urban middle class was growing, but was distrusted by the landowners and peasants alike.
In the late 18th century, Bourbon-ruled Spain had an alliance with Bourbon-ruled France, and therefore did not have to fear a land war. Its only serious enemy was Britain, which had a powerful navy; Spain therefore concentrated its resources on its navy. When the French Revolution overthrew the Bourbons, a land war with France became a threat which the king tried to avoid. The Spanish army was ill-prepared. The officer corps was selected primarily on the basis of royal patronage, rather than merit. About a third of the junior officers had been promoted from the ranks, and while they did have talent they had few opportunities for promotion or leadership. The rank-and-file were poorly trained peasants. Elite units included foreign regiments of Irishmen, Italians, Swiss, and Walloons, in addition to elite artillery and engineering units. Equipment was old-fashioned and in disrepair. The army lacked its own horses, oxen and mules for transportation, so these auxiliaries were operated by civilians, who might run away if conditions looked bad. In combat, small units fought well, but their old-fashioned tactics were hardly of use against the Napoleonic forces, despite repeated desperate efforts at last-minute reform. When war broke out with France in 1808, the army was deeply unpopular. Leading generals were assassinated, and the army proved incompetent to handle command-and-control. Junior officers from peasant families deserted and went over to the insurgents; many units disintegrated. Spain was unable to mobilize its artillery or cavalry. In the war, there was one victory at the Battle of Bailén, and many humiliating defeats. Conditions steadily worsened, as the insurgents increasingly took control of Spain's battle against Napoleon. Napoleon ridiculed the army as "the worst in Europe"; the British who had to work with it agreed. 
It was not the Army that defeated Napoleon, but the insurgent peasants whom Napoleon ridiculed as packs of "bandits led by monks" (they in turn believed Napoleon was the devil). By 1812, the army controlled only scattered enclaves, and could only harass the French with occasional raids. The morale of the army had reached a nadir, and reformers stripped the aristocratic officers of most of their legal privileges.
Spain initially sided against France in the Napoleonic Wars, but the defeat of her army early in the war led to Charles IV's pragmatic decision to align with the revolutionary French. Spain was put under a British blockade, and her colonies began to trade independently with Britain, but it was the defeat of the British invasions of the Río de la Plata in South America (1806 and 1807) that emboldened independence and revolutionary hopes in Spain's North and South American colonies. A major Franco-Spanish fleet was lost at the Battle of Trafalgar in 1805, prompting the vacillating king of Spain to reconsider his difficult alliance with Napoleon. Spain temporarily broke off from the Continental System, and Napoleon – irritated with the Bourbon kings of Spain – invaded Spain in 1808 and deposed Ferdinand VII, who had been on the throne for only forty-eight days following his father's abdication in March 1808. On July 20, 1808, Joseph Bonaparte, eldest brother of Napoleon Bonaparte, entered Madrid and established a government by which he became King of Spain, serving as a surrogate for Napoleon.
The former Spanish king was dethroned by Napoleon, who put his own brother on the throne. Spaniards revolted. Thompson says the Spanish revolt was "a reaction against new institutions and ideas, a movement for loyalty to the old order: to the hereditary crown of the Most Catholic kings, which Napoleon, an excommunicated enemy of the Pope, had put on the head of a Frenchman; to the Catholic Church persecuted by republicans who had desecrated churches, murdered priests, and enforced a "loi des cultes"; and to local and provincial rights and privileges threatened by an efficiently centralized government." "Juntas" were formed all across Spain that pronounced themselves in favor of Ferdinand VII. On September 26, 1808, a Central Junta was formed in the town of Aranjuez to coordinate the nationwide struggle against the French. Initially, the Central Junta declared support for Ferdinand VII, and convened a "General and Extraordinary Cortes" for all the kingdoms of the Spanish Monarchy. On February 22 and 23, 1809, a popular insurrection against the French occupation broke out all over Spain.
The peninsular campaign was a disaster for France. Napoleon did well when he was in direct command, but severe losses followed, and after he left in 1809 conditions grew worse for France. Vicious reprisals, famously portrayed by Goya in "The Disasters of War", only made the Spanish guerrillas angrier and more active; the war in Spain proved to be a major, long-term drain on French money, manpower and prestige.
In March 1812, the Cádiz Cortes created the first modern Spanish constitution, the Constitution of 1812 (informally named "La Pepa"). This constitution provided for a separation of the powers of the executive and the legislative branches of government. The Cortes was to be elected by universal suffrage, albeit by an indirect method. Each member of the Cortes was to represent 70,000 people. Members of the Cortes were to meet in annual sessions. The King was prevented from either convening or proroguing the Cortes. Members of the Cortes were to serve single two-year terms. They could not serve consecutive terms; a member could serve a second term only by allowing someone else to serve a single intervening term in office. This attempt at the development of a modern constitutional government lasted from 1808 until 1814. Leaders of the liberal or reformist forces during this revolution were José Moñino, Count of Floridablanca, Gaspar Melchor de Jovellanos and Pedro Rodríguez, Conde de Campomanes. Born in 1728, Floridablanca was eighty years of age at the time of the revolutionary outbreak in 1808. He had served as Prime Minister under King Charles III of Spain from 1777 until 1792; however, he tended to be suspicious of popular spontaneity and resisted revolution. Born in 1744, Jovellanos was somewhat younger than Floridablanca. A writer and follower of the philosophers of the Enlightenment tradition of the previous century, Jovellanos had served as Minister of Justice from 1797 to 1798 and now commanded a substantial and influential group within the Central Junta. However, Jovellanos had been imprisoned by Manuel de Godoy, Duke of Alcudia, who had served as the prime minister, virtually running the country as a dictator from 1792 until 1798 and from 1801 until 1808. Accordingly, even Jovellanos tended to be somewhat overly cautious in his approach to the revolutionary upsurge that was sweeping Spain in 1808.
The Spanish army was stretched thin as it fought Napoleon's forces, hampered by a lack of supplies and too many untrained recruits, but at Bailén in July 1808 it inflicted the first major defeat suffered by a Napoleonic army; this resulted in the temporary collapse of French power in Spain. Napoleon took personal charge and with fresh forces reconquered Spain in a matter of months, defeating the Spanish and British armies in a brilliant campaign of encirclement. After this, the Spanish armies lost every battle they fought against the French imperial forces but were never annihilated; after battles they would retreat into the mountains to regroup and launch new attacks and raids. Guerrilla forces sprang up all over the country and, with the army, tied down huge numbers of Napoleon's troops, making it difficult to sustain concentrated attacks on enemy forces. The attacks and raids of the Spanish army and guerrillas became a massive drain on Napoleon's military and economic resources. In this war, Spain was aided by the British and Portuguese, led by the Duke of Wellington. The Duke of Wellington fought Napoleon's forces in the Peninsular War, with Joseph Bonaparte playing a minor role as king at Madrid. The brutal war was one of the first guerrilla wars in modern Western history. French supply lines stretching across Spain were mauled repeatedly by the Spanish armies and guerrilla forces; thereafter, Napoleon's armies were never able to control much of the country. The war fluctuated, with Wellington spending several years behind his fortresses in Portugal while launching occasional campaigns into Spain.
After Napoleon's disastrous 1812 campaign in Russia, Napoleon began to recall his forces for the defence of France against the advancing Russian and other coalition forces, leaving his forces in Spain increasingly undermanned and on the defensive against the advancing Spanish, British and Portuguese armies. At the Battle of Vitoria in 1813, an allied army under the Duke of Wellington decisively defeated the French and in 1814 Ferdinand VII was restored as King of Spain.
Spain lost all of its North and South American colonies, except Cuba and Puerto Rico, in a complex series of revolts in 1808–26. Spain was at war with Britain in 1798–1808, and the British Navy cut off its ties to its colonies. Trade was handled by American and Dutch traders. The colonies thus had achieved economic independence from Spain, and set up temporary governments or juntas which were generally out of touch with the mother country. After 1814, as Napoleon was defeated and Ferdinand VII was back on the throne, the king sent armies to regain control and reimpose autocratic rule. In the first phase, 1809–16, Spain defeated all the uprisings. A second round of revolts in 1816–25 succeeded and drove the Spanish out of all of their mainland holdings. Spain had no help from European powers. Indeed, Britain (and the United States) worked against it. When they were cut off from Spain, the colonies saw a struggle for power between Spaniards who were born in Spain (called "peninsulares") and those of Spanish descent born in New Spain (called "creoles"). The creoles were the activists for independence. Multiple revolutions enabled the colonies to break free of the mother country. In 1824 the armies of generals José de San Martín of Argentina and Simón Bolívar of Venezuela defeated the last Spanish forces; the final defeat came at the Battle of Ayacucho in southern Peru. After that Spain played a minor role in international affairs. Business and trade in the ex-colonies were under British control. Spain kept only Cuba and Puerto Rico in the New World.
The Napoleonic wars had severe negative effects on Spain's long-term economic development. The Peninsular war ravaged towns and countryside alike, and the demographic impact was the worst of any Spanish war, with a sharp decline in population in many areas caused by casualties, outmigration, and disruption of family life. The marauding armies seized farmers' crops, and more importantly, farmers lost much of their livestock, their main capital asset. Severe poverty became widespread, reducing market demand, while the disruption of local and international trade, and the shortages of critical inputs, seriously hurt industry and services. The loss of a vast colonial empire reduced Spain's overall wealth, and by 1820 it had become one of Europe's poorest and least-developed societies; three-fourths of the people were illiterate. There was little industry beyond the production of textiles in Catalonia. Natural resources, such as coal and iron, were available for exploitation, but the transportation system was rudimentary, with few canals or navigable rivers, and road travel was slow and expensive. British railroad builders were pessimistic about the potential for freight and passenger traffic and did not invest. Eventually a small railway system was built, radiating from Madrid and bypassing the natural resources. The government relied on high tariffs, especially on grain, which further slowed economic development. For example, eastern Spain was unable to import inexpensive Italian wheat, and had to rely on expensive homegrown products carted in over poor roads. The export market collapsed apart from some agricultural products. Catalonia had some industry, but Castile remained the political and cultural center, and was not interested in promoting industry.
Although the "juntas", that had forced the French to leave Spain, had sworn by the liberal Constitution of 1812, Ferdinand VII had the support of conservatives and he rejected it. He ruled in the authoritarian fashion of his forebears.
The government, nearly bankrupt, was unable to pay its soldiers. There were few settlers or soldiers in Florida, so it was sold to the United States for 5 million dollars. In 1820, an expedition intended for the colonies revolted in Cadiz. When armies throughout Spain pronounced themselves in sympathy with the rebels, led by Rafael del Riego, Ferdinand relented and was forced to accept the liberal Constitution of 1812. This was the start of the second bourgeois revolution in Spain, the "trienio liberal", which would last from 1820 to 1823. Ferdinand himself was placed under effective house arrest for the duration of the liberal experiment.
The tumultuous three years of liberal rule that followed (1820–23) were marked by various absolutist conspiracies. The liberal government, which reminded European statesmen entirely too much of the governments of the French Revolution, was viewed with hostility by the Congress of Verona in 1822, and France was authorized to intervene. France crushed the liberal government with massive force in the so-called "Hundred Thousand Sons of Saint Louis" expedition, and Ferdinand was restored as absolute monarch in 1823. In Spain proper, this marked the end of the second Spanish bourgeois revolution.
In Spain, the failure of the second bourgeois revolution was followed by a period of uneasy peace for the next decade. Having borne only a female heir presumptive, it appeared that Ferdinand would be succeeded by his brother, Infante Carlos of Spain. While Ferdinand aligned with the conservatives, fearing another national insurrection, he did not view Carlos's reactionary policies as a viable option. Ferdinand – resisting the wishes of his brother – decreed the Pragmatic Sanction of 1830, enabling his daughter Isabella to become Queen. Carlos, who made known his intent to resist the sanction, fled to Portugal.
Ferdinand's death in 1833 and the accession of Isabella II as Queen of Spain sparked the First Carlist War (1833–39). Isabella was only three years old at the time, so her mother, Maria Cristina of Bourbon-Two Sicilies, was named regent until her daughter came of age. Carlos invaded the Basque country in the north of Spain and attracted support from absolutist reactionaries and conservatives; these forces were known as the "Carlist" forces. The supporters of reform and of limitations on the absolutist rule of the Spanish throne rallied behind Isabella and the regent, Maria Cristina; these reformists were called "Christinos." Though the insurrection seemed to have been overcome by the end of 1833, when Maria Cristina's forces drove the Carlist armies from most of the Basque country, Carlos then appointed the Basque general Tomás de Zumalacárregui as his commander-in-chief. Zumalacárregui resuscitated the Carlist cause, and by 1835 had driven the Christino armies to the Ebro River and transformed the Carlist army from a demoralized band into a professional army of 30,000, of superior quality to the government forces. Zumalacárregui's death in 1835 changed the Carlists' fortunes. The Christinos found a capable general in Baldomero Espartero. His victory at the Battle of Luchana (1836) turned the tide of the war, and in 1839, the Convention of Vergara put an end to the first Carlist insurrection.
The progressive General Espartero, exploiting his popularity as a war hero and his sobriquet "Pacifier of Spain", demanded liberal reforms from Maria Cristina. The Queen Regent, who resisted any such idea, preferred to resign and let Espartero become regent instead in 1840. Espartero's liberal reforms were then opposed by moderates, and the former general's heavy-handedness caused a series of sporadic uprisings throughout the country from various quarters, all of which were bloodily suppressed. He was overthrown as regent in 1843 by Ramón María Narváez, a moderate, who was in turn perceived as too reactionary. Another Carlist uprising, the Matiners' War, was launched in 1846 in Catalonia, but it was poorly organized and suppressed by 1849.
Isabella II of Spain took a more active role in government after coming of age, but she was immensely unpopular throughout her reign (1833–68). She was viewed as beholden to whoever was closest to her at court, and the people of Spain believed that she cared little for them. As a result, there was another insurrection in 1854 led by General Domingo Dulce y Garay and General Leopoldo O'Donnell y Jarris. Their coup overthrew the dictatorship of Luis Jose Sartorius, 1st Count of San Luis. As the result of the popular insurrection, the "Partido Progresista" (Progressive Party) obtained widespread support in Spain and came to power in the government in 1854. In 1856, Isabella attempted to form the Liberal Union, a pan-national coalition under the leadership of Leopoldo O'Donnell, who had already marched on Madrid that year and deposed another Espartero ministry. Isabella's plan failed and cost Isabella more prestige and favor with the people. In 1860, Isabella launched a successful war against Morocco, waged by generals O'Donnell and Juan Prim that stabilized her popularity in Spain.
Alongside the French, Spain intervened elsewhere in Cochinchina (1857–63) and Mexico (1861–62). Furthermore, the government accepted Santo Domingo's voluntary return to the Spanish Empire. Spain also extended its military presence in the Pacific off the South American coast.
In 1866, a revolt led by Juan Prim was suppressed.
In 1868 another insurgency, known as the Glorious Revolution took place. The "progresista" generals Francisco Serrano and Juan Prim revolted against Isabella and defeated her "moderado" generals at the Battle of Alcolea (1868). Isabella was driven into exile in Paris.
Two years later, in 1870, the Cortes declared that Spain would again have a king. Amadeus of Savoy, the second son of King Victor Emmanuel II of Italy, was selected and duly crowned King of Spain early the following year. Amadeus – a liberal who swore by the liberal constitution the Cortes promulgated – was faced immediately with the daunting task of bringing the disparate political ideologies of Spain to one table. The country was plagued by internecine strife, not merely between Spaniards but within Spanish parties.
Following the Hidalgo affair and an army rebellion, Amadeus famously declared the people of Spain to be ungovernable, abdicated the throne, and left the country (11 February 1873).
In the absence of the Monarch, a government of radicals and Republicans was formed and declared Spain a republic. The First Spanish Republic (1873–74) was immediately under siege from all quarters. The Carlists were the most immediate threat, launching a violent insurrection after their poor showing in the 1872 elections. There were calls for socialist revolution from the International Workingmen's Association, revolts and unrest in the autonomous regions of Navarre and Catalonia, and pressure from the Catholic Church against the fledgling republic.
A coup took place in January 1874, when General Pavía broke into the Cortes. This prevented the formation of a federal republican government, forced the dissolution of the Parliament and led to the instauration of a unitary praetorian republic ruled by General Serrano, paving the way for the Restoration of the Monarchy through another "pronunciamiento", this time by Arsenio Martínez Campos, in December 1874.
Although the former queen, Isabella II was still alive, she recognized that she was too divisive as a leader, and abdicated in 1870 in favor of her son, Alfonso.
Alfonso XII of Spain was duly crowned on 28 December 1874 after returning from exile. After the tumult of the First Spanish Republic, Spaniards were willing to accept a return to stability under Bourbon rule. The Republican armies in Spain – which were resisting a Carlist insurrection – pronounced their allegiance to Alfonso in the winter of 1874–75, led by Brigadier General Martínez-Campos. The Republic was dissolved and Antonio Cánovas del Castillo, a trusted advisor to the king, was named Prime Minister on New Year's Eve, 1874. The Carlist insurrection was put down vigorously by the new king, who took an active role in the war and rapidly gained the support of most of his countrymen. This new period witnessed the instauration of an uncompetitive parliamentary system in which two "dynastic" parties, the conservatives (formally the Liberal-Conservatives), led by Antonio Cánovas del Castillo (the mastermind behind the Restoration system), and the liberals (or liberal-fusionists), led by Práxedes Mateo Sagasta, alternated in control of the government ("turnismo"). Election fraud (materialized in the so-called "caciquismo") became ubiquitous, with elections reproducing outcomes pre-arranged in the capital. Voter apathy was no less important. Alfonso XII died suddenly at age 28 in 1885.
Constitutional monarchy continued under King Alfonso XIII. Alfonso XIII was born after his father's death and was proclaimed king upon his birth. Until the coming of age of Alfonso XIII in 1902, his mother Maria Christina of Austria served as Regent. The prime minister Antonio Cánovas del Castillo was assassinated in 1897.
In 1868, Cuba launched a war of independence against Spain. On that island, as had been the case in Santo Domingo, the Spanish government found itself embroiled in a difficult campaign against an indigenous rebellion. Unlike in Santo Domingo, however, Spain would initially win this struggle, having learned the lessons of guerrilla warfare well enough to defeat this rebellion. The pacification of the island was temporary, however, as the conflict revived in 1895 and ended in defeat at the hands of the United States in the Spanish–American War of 1898. Cuba gained its independence and Spain lost its remaining New World colony, Puerto Rico, which together with Guam and the Philippines were ceded to the United States for 20 million dollars. In 1899, Spain sold its remaining Pacific islands – the Northern Mariana Islands, Caroline Islands and Palau – to Germany and Spanish colonial possessions were reduced to Spanish Morocco, Spanish Sahara and Spanish Guinea, all in Africa.
The "disaster" of 1898 created the Generation of '98, a group of statesmen and intellectuals who demanded liberal change from the new government. However both Anarchism on the left and fascism on the right grew rapidly in Spain in the early 20th century. A revolt in 1909 in Catalonia was bloodily suppressed. Jensen (1999) argues that the defeat of 1898 led many military officers to abandon the liberalism that had been strong in the officer corps and turn to the right. They interpreted the American victory in 1898 as well as the Japanese victory against Russia in 1905 as proof of the superiority of willpower and moral values over technology. Over the next three decades, Jensen argues, these values shaped the outlook of Francisco Franco and other Falangists.
The bipartisan system began to collapse in the later years of the constitutional part of the reign of Alfonso XIII, with the dynastic parties largely disintegrating into factions: the conservatives faced a schism between "datistas", "mauristas" and "ciervistas". The liberal camp split into the mainstream liberals followers of the Count of Romanones ("romanonistas") and the followers of Manuel García Prieto, the "democrats" ("prietistas"). An additional liberal "albista" faction was later added to the last two.
Spain's neutrality in World War I allowed it to become a supplier of material for both sides to its great advantage, prompting an economic boom in Spain. However, the outbreak of Spanish influenza in Spain and elsewhere, along with a major economic slowdown in the postwar period, hit Spain particularly hard, and the country went into debt. A major workers' strike was suppressed in 1919.
Spanish colonial policies in Spanish Morocco led to an uprising known as the Rif War; rebels took control of most of the area except for the enclaves of Ceuta and Melilla in 1921.
King Alfonso XIII tacitly endorsed the September 1923 coup by General Miguel Primo de Rivera that installed a dictatorship. As Prime Minister Primo de Rivera promised to reform the country quickly and restore elections soon. He deeply believed that it was the politicians who had ruined Spain and that governing without them he could regenerate the nation. His slogan was "Country, Religion, Monarchy."
Spain (in joint action with France) won a decisive military victory in Morocco (1925–26). The war had dragged on since 1917 and cost Spain $800 million.
The late 1920s were prosperous until the worldwide Great Depression hit in 1929. In early 1930 bankruptcy and massive unpopularity forced the king to remove Primo de Rivera. Historians depict an idealistic but inept dictator who did not understand government, lacked clear ideas and showed very little political acumen. He consulted no one, had a weak staff, and made frequent strange pronouncements. He started with very broad support but lost every element until only the army was left. His projects ran large deficits which he kept hidden. His multiple repeated mistakes discredited the king and ruined the monarchy, while heightening social tensions.
Primo de Rivera was replaced by Dámaso Berenguer (his government was known as the "dictablanda"). The latter was in turn replaced by Admiral Aznar-Cabañas in February 1931. Urban voters had lost faith in the King and voted for republican parties in the municipal elections of April 1931, which had been considered a referendum on the Monarchy. The king fled the country without abdicating and a republic was established.
Political ideologies were intensely polarized, as both right and left saw vast evil conspiracies on the other side that had to be stopped. The central issue was the role of the Catholic Church, which the left saw as the major enemy of modernity and the Spanish people, and the right saw as the invaluable protector of Spanish values.
Under the Second Spanish Republic, women were allowed to vote in general elections for the first time. The Republic devolved substantial self-government to Catalonia and, for a brief period in wartime, also to the Basque Provinces.
The first cabinets of the Republic were center-left, headed by Niceto Alcalá-Zamora and Manuel Azaña. Economic turmoil, substantial debt, and fractious, rapidly changing governing coalitions led to escalating political violence and attempted coups by right and left.
In 1933, the right-wing Spanish Confederation of the Autonomous Right (CEDA), based on the Catholic vote, won power. An armed rising of workers in October 1934, which reached its greatest intensity in Asturias and Catalonia, was forcefully put down by the CEDA government. This in turn energized political movements across the spectrum in Spain, including a revived anarchist movement and new reactionary and fascist groups, including the Falange and a revived Carlist movement.
A devastating civil war in 1936–39 was won by the rebel forces, supported by Nazi Germany and Fascist Italy, which General Francisco Franco came to lead some months into the conflict, after other possible challengers to the rebel leadership had died. The rebels (backed among others by traditionalist Carlists, fascist Falangists and far-right Alfonsists) defeated the Republican loyalists (supported variably by Socialists, Liberals, Communists, Anarchists, and Catalan and Basque nationalists), who were backed by the Soviet Union.
The Spanish Civil War was marked by numerous small battles and sieges, and many atrocities, until the rebels (the "Nationalists"), led by Francisco Franco, won in 1939. There was military intervention as Italy sent land forces, and Germany sent smaller elite air force and armored units to the rebel side (the Nationalists). The Soviet Union sold armaments to the "Loyalists" ("Republicans"), while the Communist parties in numerous countries sent soldiers to the "International Brigades." The civil war did not escalate into a larger conflict, but did become a worldwide ideological battleground that pitted the left and many liberals against Catholics and conservatives. Britain, France and the United States remained neutral and refused to sell military supplies. Worldwide there was a decline in pacifism and a growing sense that another world war was imminent, and that it would be worth fighting for.
In the 1930s, Spanish politics were polarized at the left and right extremes of the political spectrum. The left wing favored class struggle, land reform to overthrow the landowners, autonomy for the regions, and the destruction of the Catholic Church. The right-wing groups, the largest of which was CEDA, a Catholic coalition, believed in tradition, stability and hierarchy. Religion was the main dividing line between right and left, but there were regional variations. The Basques were devoutly Catholic, but they put a high priority on regional autonomy. The Left offered a better deal, so in 1936–37 they fought for the Republicans; in 1937 they pulled out of the war.
The Spanish Republican government moved to Valencia, to escape Madrid, which was under siege by the Nationalists. It had some military strength in the Air Force and Navy, but it had lost nearly all of the regular Army. After opening the arsenals to give rifles, machine guns and artillery to local militias, it had little control over the Loyalist ground forces. Republican diplomacy proved ineffective, with only two useful allies, the Soviet Union and Mexico. Britain, France and 27 other countries had agreed to an arms embargo on Spain, and the United States went along. Nazi Germany and Fascist Italy both signed that agreement, but ignored it and sent supplies and vital help, including a powerful air force under German command, the Condor Legion. Tens of thousands of Italians arrived under Italian command. Portugal supported the Nationalists, and allowed the trans-shipment of supplies to Franco's forces. The Soviets sold tanks and other armaments for Spanish gold, and sent well-trained officers and political commissars. They also organized the mobilization of tens of thousands of mostly communist volunteers from around the world, who formed the International Brigades.
In 1936, the Left united in the Popular Front and were elected to power. However, this coalition, dominated by the centre-left, was undermined both by the revolutionary groups such as the anarchist Confederación Nacional del Trabajo (CNT) and Federación Anarquista Ibérica (FAI) and by anti-democratic far-right groups such as the Falange and the Carlists. The political violence of previous years began to start again. There were gunfights over strikes; landless labourers began to seize land, church officials were killed and churches burnt. On the other side, right wing militias (such as the Falange) and gunmen hired by employers assassinated left wing activists. The Republican democracy never generated the consensus or mutual trust between the various political groups that it needed to function peacefully. As a result, the country slid into civil war. The right wing of the country and high ranking figures in the army began to plan a coup, and when Falangist politician José Calvo-Sotelo was shot by Republican police, they used it as a signal to act while the Republican leadership was confused and inert.
The Nationalists under Franco won the war, and historians continue to debate the reasons. The Nationalists were much better unified and led than the Republicans, who squabbled and fought amongst themselves endlessly and had no clear military strategy. The Army went over to the Nationalists, but it was very poorly equipped – there were no tanks or modern airplanes. The small navy supported the Republicans, but their armies were made up of raw recruits and they lacked both equipment and skilled officers and sergeants. Nationalist senior officers were much better trained and more familiar with modern tactics than the Republicans.
On 17 July 1936, General Francisco Franco brought the colonial army stationed in Morocco to the mainland, while another force from the north under General Mola moved south from Navarre. Another conspirator, General Sanjurjo, who was in exile in Portugal, was killed in a plane crash while being brought to join the other military leaders. Military units were also mobilised elsewhere to take over government institutions. Franco intended to seize power immediately, but successful resistance by Republicans in the key centers of Madrid, Barcelona, Valencia, the Basque country, and other points meant that Spain faced a prolonged civil war. By 1937 much of the south and west was under the control of the Nationalists, whose Army of Africa was the most professional force available to either side. Both sides received foreign military aid: the Nationalists from Nazi Germany and Italy, while the Republicans were supported by organised far-left volunteers from the Soviet Union.
The Siege of the Alcázar at Toledo early in the war was a turning point, with the Nationalists successfully resisting after a long siege. The Republicans managed to hold out in Madrid, despite a Nationalist assault in November 1936, and frustrated subsequent offensives against the capital at Jarama and Guadalajara in 1937. Soon, though, the Nationalists began to erode Republican territory, starving Madrid and making inroads into the east. The North, including the Basque country, fell in late 1937, and the Aragon front collapsed shortly afterwards. The bombing of Guernica on the afternoon of 26 April 1937 – a mission used as a testing ground for the German Luftwaffe's Condor Legion – was probably the most infamous event of the war and inspired Picasso's painting. The Battle of the Ebro in July–November 1938 was the final desperate attempt by the Republicans to turn the tide. When this failed and Barcelona fell to the Nationalists in early 1939, it was clear the war was over. The remaining Republican fronts collapsed as civil war broke out inside the Left, with the Republicans suppressing the Communists. Madrid fell in March 1939.
The war cost between 300,000 and 1,000,000 lives. It ended with the total collapse of the Republic and the accession of Francisco Franco as dictator of Spain. Franco amalgamated all right wing parties into a reconstituted fascist party Falange and banned the left-wing and Republican parties and trade unions. The Church was more powerful than it had been in centuries.
The conduct of the war was brutal on both sides, with widespread massacres of civilians and prisoners. After the war, many thousands of Republicans were imprisoned and up to 150,000 were executed between 1939 and 1943. Some 500,000 refugees escaped to France; they remained in exile for years or decades.
The Francoist regime resulted in the deaths and arrests of hundreds of thousands of people who were either supporters of the previous Second Republic of Spain or potential threats to Franco's state. They were executed, sent to prisons or concentration camps. According to Gabriel Jackson, the number of victims of the White Terror (executions and hunger or illness in prisons) just between 1939 and 1943 was 200,000. Child abduction was also a wide-scale practice. The lost children of Francoism may reach 300,000.
During Franco's rule, Spain was officially neutral in World War II and remained largely economically and culturally isolated from the outside world. Under a military dictatorship, Spain saw its political parties banned, except for the official party (Falange). Labour unions were banned and all political activity using violence or intimidation to achieve its goals was forbidden.
Under Franco, Spain actively sought the return of Gibraltar by the United Kingdom, and gained some support for its cause at the United Nations. During the 1960s, Spain began imposing restrictions on Gibraltar, culminating in the closure of the border in 1969. It was not fully reopened until 1985.
Spanish rule in the Moroccan protectorate ended in 1956. Though militarily victorious in the 1957–58 Moroccan invasion of Spanish West Africa, Spain gradually relinquished its remaining African colonies. Spanish Guinea was granted independence as Equatorial Guinea in 1968, and the enclave of Ifni was ceded to Morocco in 1969. Two cities in Africa, Ceuta and Melilla, remain under Spanish rule and sovereignty.
The latter years of Franco's rule saw some economic and political liberalization (the Spanish miracle), including the birth of a tourism industry. Spain began to catch up economically with its European neighbors.
Franco ruled until his death on 20 November 1975, when control was given to King Juan Carlos. In the last few months before Franco's death, the Spanish state went into paralysis. This was capitalized upon by King Hassan II of Morocco, who ordered the 'Green March' into Western Sahara, Spain's last colonial possession.
The Spanish transition to democracy or new Bourbon restoration was the era when Spain moved from the dictatorship of Francisco Franco to a liberal democratic state. The transition is usually said to have begun with Franco's death on 20 November 1975, while its completion is marked by the electoral victory of the socialist PSOE on 28 October 1982.
Under its current (1978) constitution, Spain is a constitutional monarchy. It comprises 17 autonomous communities (Andalusia, Aragon, Asturias, Balearic Islands, Canary Islands, Cantabria, Castile and León, Castile–La Mancha, Catalonia, Extremadura, Galicia, La Rioja, Community of Madrid, Region of Murcia, Basque Country, Valencian Community, and Navarre) and 2 autonomous cities (Ceuta and Melilla).
Between 1978 and 1982, Spain was led by the "Unión del Centro Democrático" governments.
In 1981 the 23-F coup d'état attempt took place. On 23 February Antonio Tejero, along with members of the Guardia Civil, entered the Congress of Deputies and halted the session in which Leopoldo Calvo Sotelo was about to be named prime minister. Officially, the coup failed thanks to the intervention of King Juan Carlos. Spain joined NATO before Calvo Sotelo left office.
Along with political change came radical change in Spanish society. Spanish society had been extremely conservative under Franco, but the transition to democracy also began a liberalization of values and social mores.
From 1982 until 1996, the social democratic PSOE governed the country, with Felipe González as prime minister. In 1986, Spain joined the European Economic Community (EEC, now the European Union), and the country hosted the 1992 Summer Olympics in Barcelona and the Seville Expo '92.
In 1996, the centre-right "Partido Popular" government came to power, led by José María Aznar. On 1 January 1999, Spain adopted the new euro currency in place of the "peseta"; the peseta continued to be used for cash transactions until 1 January 2002. On 11 March 2004, bombs planted by Islamist extremists linked to al-Qaeda exploded on busy commuter trains in Madrid, killing 191 people and injuring thousands.
The election, held three days after the attacks, was won by the PSOE, and José Luis Rodríguez Zapatero replaced Aznar as prime minister. Because Aznar and his ministers had at first accused ETA of the atrocity, it has been argued that the outcome of the election was influenced by this event.
In the wake of joining the EEC, Spain experienced a two-decade economic boom, cut painfully short by the financial crisis of 2008.
During the boom years, Spain attracted a large number of immigrants, especially from the United Kingdom, but also including unknown but substantial illegal immigration, mostly from Latin America, eastern Europe and north Africa.
Spain had the fourth-largest economy in the Eurozone, but after 2008 the global recession hit Spain hard: the housing bubble burst, unemployment rose above 25%, and sharp budget cutbacks were needed to stay in the euro zone. The GDP shrank 1.2% in 2012. Although interest rates were historically low, entrepreneurs were not sufficiently encouraged to invest. Losses were especially high in real estate, banking, and construction. Economists concluded in early 2013 that, "Where once Spain's problems were acute, now they are chronic: entrenched unemployment, a large mass of small and medium-sized enterprises with low productivity, and, above all, a constriction in credit."
With the financial crisis and high unemployment, Spain now suffers from continued illegal immigration combined with a massive emigration of workers forced to seek employment elsewhere under the EU's "Freedom of Movement"; an estimated 700,000 people, or 1.5% of the total population, left the country between 2008 and 2013.
Spain is ranked as a middle power able to exert modest regional influence. It has a small voice in international organizations; it is not part of the G8 and participates in the G20 only as a guest. Spain is part of the G6 (EU).
History of the Republic of Turkey
The Republic of Turkey was created after the overthrow of Sultan Mehmet VI Vahdettin by the new Republican Parliament in 1922. This new regime delivered the "coup de grâce" to the Ottoman state which had been practically wiped away from the world stage following the First World War.
From its foundation, the Ottoman Empire was ruled as an absolute monarchy. Between 1839 and 1876 the Empire went through a period of reform. The Young Ottomans, dissatisfied with these reforms, worked together with Sultan Abdülhamid II to realize some form of constitutional arrangement in 1876. After this short-lived attempt at turning the Empire into a constitutional monarchy, Sultan Abdülhamid II turned it back into an absolute monarchy by 1878, suspending the constitution and parliament.
A couple of decades later, a new reform movement under the name of the Young Turks conspired against Sultan Abdülhamid II, who was still in charge of the Empire, by starting the Young Turk Revolution. They forced the sultan to reintroduce constitutional rule in 1908. This led to a rise in the active participation of the military in politics. In 1909 they deposed the sultan and in 1913 seized power in a coup. In 1914 the Ottoman Empire entered World War I on the side of the Central Powers as an ally of the German Empire and subsequently lost the war. The goal had been to win territory in the East to compensate for the losses in the West suffered in previous years during the Italo-Turkish War and the Balkan Wars. In 1918 the leaders of the Young Turks took full responsibility for the lost war and fled into exile, leaving the country in chaos.
The Armistice of Mudros was signed which granted the Allies, in a broad and vaguely worded clause, the right to further occupy Anatolia "in case of disorder". Within days French and British troops started occupying the remaining territory controlled by the Ottoman Empire. These occupations and the persecution of Muslims in the rest of Anatolia by primarily Greek and Armenian irregulars motivated the Turkish revolutionaries to start a resistance movement led by Mustafa Kemal Atatürk and other army officers. Shortly after the Greek occupation of Western Anatolia in 1919, Mustafa Kemal Pasha set foot in Samsun to start the Turkish War of Independence against the occupations and persecutions of Muslims in Anatolia. He and the other army officers alongside him dominated the polity that finally established the Republic of Turkey out of what was left of the Ottoman Empire. Turkey was established based on the ideology found in the country's pre-Ottoman history and was also steered towards a secular political system to diminish the influence of religious groups such as the Ulema.
The history of modern Turkey begins with the foundation of the republic on October 29, 1923, with Kemal as its first president. The government was formed from the Ankara-based revolutionary group, led by Mustafa Kemal Atatürk and his colleagues. The second constitution was ratified by the Grand National Assembly on April 20, 1924.
For about the next 10 years, the country saw a steady process of secular Westernization through Atatürk's Reforms, which included the unification of education; the discontinuation of religious and other titles; the closure of Islamic courts and the replacement of Islamic canon law with a secular civil code modeled after Switzerland's and a penal code modeled after the Italian Penal Code; recognition of the equality between the sexes and the granting of full political rights to women on 5 December 1934; the language reform initiated by the newly founded Turkish Language Association; replacement of the Ottoman Turkish alphabet with the new Turkish alphabet derived from the Latin alphabet; the dress law (the wearing of the fez was outlawed); the law on family names; and many others.
The first party to be established in the newly formed republic was the Women's Party (Kadınlar Halk Fırkası). It was founded by Nezihe Muhiddin and several other women but was barred from carrying out its activities, since at the time women were not yet legally allowed to engage in politics. The actual passage to the multi-party period was first attempted with the Liberal Republican Party of Ali Fethi Okyar. The Liberal Republican Party was dissolved on 17 November 1930, and no further attempt at multi-party democracy was made until 1945. Turkey was admitted to the League of Nations in July 1932.
Historically, Turkey continued the Ottoman Empire's practice in foreign relations of balancing regional and global powers against one another, forming alliances that best protected the interests of the incumbent regime. The Soviet Union played a major role in supplying weapons to and financing Mustafa Kemal Atatürk's faction during the Turkish War of Independence, but Turkey followed a course of relative international isolation during the period of Atatürk's Reforms in the 1920s and 1930s. International conferences gave Turkey full control of the strategic straits linking the Black Sea and the Mediterranean, through the Treaty of Lausanne in 1923 and the Montreux Convention of 1936.
Atatürk's successor after his death on November 10, 1938, was İsmet İnönü. He began his term in office as a respected figure of the Independence War, but because of internal fights between power groups and external events like the World War, which caused a shortage of goods in the country, he lost some of his popularity and support.
In the late 1930s Nazi Germany made a major effort to promote anti-Soviet propaganda in Turkey and exerted economic pressure. Britain and France, eager to outmaneuver Germany, negotiated a tripartite treaty in 1939. They gave Turkey a line of credit to purchase war materials from the West and a loan to facilitate the purchase of commodities. Afraid of threats from Germany and Russia, Turkey maintained neutrality. It sold chrome, an important war material, to both sides. By 1944 it was clear that Germany would be defeated, and the chrome sales to Germany stopped.
Turkey's goal was to maintain neutrality during the war. Ambassadors from the Axis powers and the Allies intermingled in Ankara. İnönü signed a non-aggression treaty with Nazi Germany on June 18, 1941, four days before the Axis powers invaded the Soviet Union. The nationalist magazines Bozkurt and Çınaraltı called for the declaration of war against the Soviet Union. In July 1942, Bozkurt published a map of Greater Turkey, which included the Soviet-controlled Caucasus and Central Asian republics. In the summer of 1942, the Turkish high command considered war with the Soviet Union almost unavoidable. An operation was planned, with Baku being the initial target.
Turkey traded with both sides and purchased arms from both sides. The Allies tried to stop German purchases of chrome (used in making better steel). Inflation was high as prices doubled.
By August 1944, the Axis was clearly losing the war and Turkey broke off relations. Only in February 1945 did Turkey declare war on Germany and Japan, a symbolic move that allowed Turkey to join the future United Nations.
On October 24, 1945 Turkey signed the United Nations Charter as one of the fifty-one original members.
In 1945, the first opposition party in the multi-party system in Turkey, the National Development Party, was established by industrialist Nuri Demirağ. In 1946, İnönü's government organized multi-party elections, which were won by his party. He remained as the president of the country until 1950. He is still remembered as one of the key figures of Turkey.
Although the multi-party period began in 1945, the election of the Democratic Party government in May 1950 marked the first victory by a non-CHP party.
The government of Adnan Menderes (1950-1960) proved very popular at first, relaxing the restrictions on Islam and presiding over a booming economy. In the latter half of the 1950s, however, the economy began to fail and the government introduced censorship laws limiting dissent. The government became plagued by high inflation and a massive debt.
On May 27, 1960, General Cemal Gürsel led a military coup d'état, removing President Celal Bayar and Prime Minister Menderes, the latter of whom was executed. The system returned to civilian control in October 1961. A fractured political system emerged in the wake of the 1960 coup, producing a series of unstable government coalitions in parliament alternating between the Justice Party of Süleyman Demirel on the right and the Republican People's Party of İsmet İnönü and Bülent Ecevit on the left.
In 1971 the army issued a memorandum warning the civilian government, an intervention often called the "coup by memorandum", which resulted in the fall of the Demirel government and the establishment of interim governments.
In July 1974, under Prime Minister Ecevit in coalition with the religious National Salvation Party, Turkey carried out the invasion of Cyprus.
The governments of the National Front, a series of coalitions between rightist parties, followed as Ecevit was not able to remain in office despite ranking first in the elections. The fractured political scene and poor economy led to mounting violence between ultranationalists and communists in the streets of Turkey's cities, resulting in some 5,000 deaths during the late 1970s.
A military coup d'état, headed by General Kenan Evren, took place in 1980. Martial law was extended from 20 provinces to all 67 provinces then existing in Turkey. Within two years, the military returned the government to civilian hands, although it retained close control of the political scene. The political system came under one-party governance under the Motherland Party (ANAP) of Turgut Özal (Prime Minister from 1983 to 1989). The ANAP combined a globally oriented economic program with the promotion of conservative social values. Under Özal, the economy boomed, converting towns like Gaziantep from small provincial capitals into mid-sized economic boomtowns. Military rule began to be phased out at the end of 1983; in provinces in the south-east of Turkey in particular, it was replaced by a state of emergency. In 1985 the government established village guards (local paramilitary militias) to oppose separatist Kurdish groups.
Starting in July 1987, the South-East was subjected to state-of-emergency legislation, a measure that lasted until November 2002. With the turn of the 1990s, political instability returned. The 1995 elections brought a short-lived coalition between Mesut Yılmaz's ANAP and the True Path Party, now with Tansu Çiller at the helm.
In 1997, the military, citing his government's support for religious policies deemed dangerous to Turkey's secular nature, sent a memorandum to Prime Minister Necmettin Erbakan requesting that he resign, which he did. The event has been famously labelled a "postmodern coup" by the Turkish admiral Salim Dervişoğlu. Shortly thereafter, the Welfare Party (RP) was banned and reborn as the Virtue Party (FP). A new government was formed by ANAP and Ecevit's Democratic Left Party (DSP) supported from the outside by the center-left Republican People's Party (CHP), led by Deniz Baykal. The DSP became the largest parliamentary party in the 1999 elections. Second place went to the far-right Nationalist Movement Party (MHP). These two parties, alongside Yılmaz's ANAP formed a government. The government was somewhat effective, if not harmonious, bringing about much-needed economic reform, instituting human rights legislation, and bringing Turkey ever closer to the European Union.
A series of economic shocks led to new elections in 2002, bringing into power the conservative Justice and Development Party (AKP), headed by the former mayor of Istanbul, Recep Tayyip Erdoğan. The political reforms of the AKP ensured the beginning of negotiations with the European Union. The AKP again won the 2007 elections, which followed the controversial August 2007 presidential election, during which AKP member Abdullah Gül was elected President in the third round. Recent developments in Iraq (explained under positions on terrorism and security), secular and religious concerns, the intervention of the military in political issues, and relations with the EU, the United States, and the Muslim world were the main issues. The outcome of this election, which brought the Turkish and Kurdish ethnic/nationalist parties (MHP and DTP) into the parliament, affected Turkey's bid for European Union membership.
The AKP is the only government in Turkish political history to have won three general elections in a row, each with an increased share of the vote. The AKP has positioned itself at the midpoint of the Turkish political scene, thanks largely to the stability brought by steady economic growth since it came to power in 2002. A large part of the population has welcomed the end of the political and economic instability of the 1990s, often associated with coalition governments (see Economic history of Turkey). Figures for 2011 showed 9% GDP growth for Turkey.
Alleged members of a clandestine group called Ergenekon were detained in 2008 as part of a long and complex trial. Members are accused of terrorism and of plotting to overthrow the civilian government. On 22 February 2010 more than 40 officers were arrested and formally charged with attempting to overthrow the government in connection with the so-called "Sledgehammer" plot. The accused included four admirals, a general and two colonels, some of them retired, including former commanders of the Turkish navy and air force (three days later, the former commanders of the navy and air force were released). Although the 2013 protests in Turkey started as a response to the removal of Taksim Gezi Park in Istanbul, they sparked riots across the country in cities such as Izmir and Ankara as well.
In the Turkish parliamentary elections of 1 November 2015, the Justice and Development Party (AKP) won back the absolute majority in parliament: 317 of the 550 seats. CHP won 134 seats, HDP 59 seats, MHP 40 seats.
On 15 July 2016 factions within the Turkish Military attempted to overthrow President Recep Tayyip Erdoğan, citing growing non-secularism and censorship as motivation for the attempted coup. The coup was blamed on the influence of the vast network led by U.S.-based Muslim cleric Fethullah Gülen.
In the aftermath of the failed coup, major purges occurred, affecting military officials, police officers, judges, governors and civil servants. There was also a significant media purge, and there have been allegations of torture in connection with these purges.
As opposed to previous political interventions by the Turkish military, Turkey's AKP government and pro-government media maintain that the 15 July 2016 coup attempt was not motivated by allegiance to Kemalist ideology, but rather to the vast political, economic, and religious network led by U.S.-based Muslim cleric Fethullah Gülen.
A purge has seen over 45,000 military officials, police officers, judges, governors and civil servants arrested or suspended, including 2,700 judges, 15,000 teachers, and every university dean in the country. 163 generals and admirals were detained, around 45% of the Turkish military's total.
The sheer number of these arrests, made at such speed, could only have been possible if the "Turkish government had all those lists ready", as suggested by Johannes Hahn, European Commissioner for Enlargement and European Neighbourhood Policy, on 18 July 2016. Hahn also claimed that because these lists were already available immediately after the coup, the "event was prepared" and the lists were to be used "at a certain stage".
Turkey's media purge after the coup d'état attempt resulted in the shutdown of at least 131 media outlets and the arrest of 117 journalists – at least 35 of whom have been indicted for "membership in a terror group".
According to Amnesty International, in the aftermath of the failed coup detainees in Turkey were denied access to legal counsel, were beaten and tortured, and were not provided with adequate food, water, or medical care. At least one detainee attempted suicide. Amnesty International called on the European Committee for the Prevention of Torture to send observers to check on detainees' conditions.
On April 16, 2017, the Turkish constitutional referendum was approved, although narrowly and along divided lines. The referendum created a presidential republic. Many observers and European states viewed the referendum as an "enabling act" and as a sign of democratic backsliding.
On June 24, 2018, Recep Tayyip Erdoğan again won the presidential election in Turkey.
History of Islam
The history of Islam concerns the political, social, economic and cultural developments of Islamic civilization. Most historians accept that Islam originated in Mecca and Medina at the start of the 7th century CE, approximately 600 years after the founding of Christianity, with the revelations received by the prophet Muhammad. Muslims regard Islam as a return to the original faith of the prophets, such as Jesus, Solomon, David, Moses, Abraham, Noah and Adam, with the submission ("islam") to the will of Allah, God.
According to tradition, in 610 CE, the Islamic Prophet Muhammad began receiving what Muslims consider to be divine revelations, calling for submission to the one God, the expectation of the imminent Last Judgement and taking care for the poor and needy. Muhammad's message won over a handful of followers and was met with increasing opposition from Meccan notables. In 622, a few years after losing protection with the death of his influential uncle Abu Talib, Muhammad migrated to the city of Yathrib (now known as Medina). With Muhammad's death in 632, disagreement broke out over who would succeed him as leader of the Muslim community during the Rashidun Caliphate.
By the 8th century, the Umayyad Caliphate extended from Iberia in the west to the Indus River in the east. Polities such as those ruled by the Umayyads and the Abbasid Caliphate (in the Middle East and later in Spain and Southern Italy), the Fatimids, Seljuks, Ayyubids and Mamluks were among the most influential powers in the world. Highly Persianized empires built by the Samanids, Ghaznavids and Ghurids made significant developments. The Islamic Golden Age gave rise to many centers of culture and science and produced notable polymaths, astronomers, mathematicians, physicians and philosophers during the Middle Ages.
By the early 13th century, the Delhi Sultanate conquered the northern Indian subcontinent, while Turkic dynasties like the Sultanate of Rum and Artuqids conquered much of Anatolia from the Byzantine Empire throughout the 11th and 12th centuries. In the 13th and 14th centuries, destructive Mongol invasions and those of Tamerlane (Timur) from the East, along with the loss of population in the Black Death, greatly weakened the traditional centers of the Muslim world, stretching from Persia to Egypt, but saw the emergence of the Timurid Renaissance and major global economic powers such as West Africa's Mali Empire and South Asia's Bengal Sultanate. Following the deportation and enslavement of the Muslim Moors from the Emirate of Sicily and other Italian territories, Islamic Spain was gradually conquered by Christian forces during the Reconquista. Nonetheless, in the early modern period, the Islamic gunpowder empires (Ottoman Turkey, Safavid Iran and Mughal India) emerged as great world powers.
During the 19th and early 20th centuries, most of the Islamic world fell under the influence or direct control of European "Great Powers." Their efforts to win independence and build modern nation-states over the course of the last two centuries continue to reverberate to the present day, as well as fuel conflict zones in regions such as Palestine, Kashmir, Xinjiang, Chechnya, Central Africa, Bosnia and Myanmar. The oil boom stabilized the Arab states of the Persian Gulf, making them the world's largest oil producers and exporters; these states have also come to focus on free trade and tourism.
The following timeline can serve as a rough visual guide to the most important polities in the Islamic world prior to the First World War. It covers major historical centers of power and culture, including Arabia, Mesopotamia (modern Iraq), Persia (modern Iran), Levant (modern Syria, Lebanon, Jordan and Israel/Palestine), Egypt, Maghreb (north-west Africa), al-Andalus (Iberia), Transoxania (Central Asia), Hindustan (including modern Pakistan, North India and Bangladesh), and Anatolia (modern Turkey). It is necessarily an approximation, since rule over some regions was sometimes divided among different centers of power, and authority in larger polities was often distributed among several dynasties. For example, during the later stages of the Abbasid Caliphate, even the capital city of Baghdad was effectively ruled by other dynasties such as the Buyids and the Seljuks, while the Ottomans commonly delegated executive authority over outlying provinces to local potentates, such as the Deys of Algiers, the Beys of Tunis, and the Mamluks of Iraq.
The study of the earliest periods in Islamic history is made difficult by a lack of sources. For example, the most important historiographical source for the origins of Islam is the work of al-Tabari. While al-Tabari is considered an excellent historian by the standards of his time and place, he made liberal use of mythical, legendary, stereotyped, distorted, and polemical presentations of subject matter—which are however considered to be Islamically acceptable—and his descriptions of the beginning of Islam post-date the events by several generations, al-Tabari having died in 923.
Differing views about how to deal with the available sources have led to the development of four different approaches to the history of early Islam. All four methods have some level of support today.
Nowadays, the popularity of the different methods employed varies with the scope of the works under consideration. For overview treatments of the history of early Islam, the descriptive approach is more popular. For scholars who look at the beginnings of Islam in depth, the source-critical and tradition-critical methods are more often followed.
After the 8th century, the quality of sources improves. Those sources which treated earlier times with a large temporal and cultural gap now begin to give accounts which are more contemporaneous, the quality of genre of available historical accounts improves, and new documentary sources—such as official documents, correspondence and poetry—appear. For the time prior to the beginning of Islam—in the 6th century—sources are superior as well, if still of mixed quality. In particular, the sources covering the Sasanian realm of influence in the 6th century are poor, while the sources for Byzantine areas at the time are of a respectable quality, and complemented by Syriac Christian sources for Syria and Iraq.
Islam arose within the context of Late Antiquity. The second half of the sixth century saw political disorder in Arabia, and communication routes were no longer secure. Religious divisions played an important role in the crisis. Judaism became the dominant religion of the Himyarite Kingdom in Yemen after about 380, while Christianity took root in the Persian Gulf. There was also "a yearning for a more spiritual form of religion," and "the choice of religion increasingly became an individual rather than a collective issue." While some were reluctant to convert to a foreign faith, those faiths provided "the principal intellectual and spiritual reference points," and Jewish and Christian loanwords from Aramaic began to replace the old pagan vocabulary of Arabic throughout the peninsula. Hanif, "seekers," searched for a new religious outlook to replace polytheism, focusing on "the all-encompassing father god Allah whom they freely equated with the Jewish Yahweh and the Christian Jehovah." In their view, Mecca was originally dedicated to this one true religion, established by Abraham.
According to tradition, the Islamic prophet Muhammad was born in Mecca around the year 570. His family belonged to the Quraysh, which was the chief tribe of Mecca and a dominant force in western Arabia. To counter the effects of anarchy, they upheld the institution of "sacred months" when all violence was forbidden and travel was safe. The polytheistic Kaaba shrine in Mecca and the surrounding area was a popular pilgrimage destination, which had significant economic consequences for the city.
Most likely Muhammad was "intimately aware of Jewish belief and practices," and acquainted with the "Hanif". Like the "Hanif", Muhammad practiced "Taḥannuth", spending time in seclusion at mount Hira and "turning away from paganism." When he was about forty years old he began receiving at mount Hira' what Muslims regard as divine revelations delivered through the angel Gabriel, which would later form the Quran. These inspirations urged him to proclaim a strict monotheistic faith, as the final expression of the prophetic tradition earlier codified in Judaism and Christianity; to warn his compatriots of the impending Judgement Day; and to castigate the social injustices of his city. Muhammad's message won over a handful of the faithful, but was met with increasing opposition from the notables of Mecca. In 622, a few years after losing protection with the death of his influential uncle Abu Talib, Muhammad migrated to the city of Yathrib (subsequently called Medina) where he was joined by his followers. Later generations would count this event, known as the hijra, as the start of the Islamic era.
In Yathrib, where he was accepted as an arbitrator among the different communities of the city under the terms of the Constitution of Medina, Muhammad began to lay the foundations of the new Islamic society, with the help of new Quranic verses which provided guidance on matters of law and religious observance. The surahs of this period emphasized his place among the long line of Biblical prophets, but also differentiated the message of the Quran from Christianity and Judaism. Armed conflict with Meccans and Jewish tribes of the Yathrib area soon broke out. After a series of military confrontations and political manoeuvres, Muhammad was able to secure control of Mecca and allegiance of the Quraysh in 629. In the time remaining until his death in 632, tribal chiefs across the peninsula entered into various agreements with him, some under terms of alliance, others acknowledging his prophethood and agreeing to follow Islamic practices, including paying the alms levy to his government, which consisted of a number of deputies, an army of believers, and a public treasury.
After Muhammad died, a series of four Caliphs governed the Islamic state: Abu Bakr (632–634), Umar ibn al-Khattab (Umar І, 634–644), Uthman ibn Affan, (644–656), and Ali ibn Abi Talib (656–661). These leaders are known as the "Rashidun" or "rightly guided" Caliphs in Sunni Islam. They oversaw the initial phase of the Muslim conquests, advancing through Persia, Levant, Egypt, and North Africa.
After Muhammad's death, Abu Bakr, one of his closest associates, was chosen as the first caliph (lit. "successor"). Although the office of caliph retained an aura of religious authority, it laid no claim to prophecy. A number of tribal leaders refused to extend agreements made with Muhammad to Abu Bakr, ceasing payments of the alms levy and in some cases claiming to be prophets in their own right. Abu Bakr asserted his authority in a successful military campaign known as the Ridda wars, whose momentum was carried into the lands of the Byzantine and Sasanian empires. By the end of the reign of the second caliph, Umar I, Arab armies, whose battle-hardened ranks were now swelled by the defeated rebels and former imperial auxiliary troops, conquered the Byzantine provinces of Syria and Egypt, while the Sassanids lost their western territories, with the rest to follow soon afterwards.
Umar improved the administration of the fledgling empire, ordering the improvement of irrigation networks and playing a role in the foundation of cities like Basra. To be close to the poor, he lived in a simple mud hut without doors and walked the streets every evening. After consulting with the poor, Umar established the Bayt al-mal, a welfare institution for the Muslim and non-Muslim poor, needy, elderly, orphans, widows, and the disabled. The Bayt al-mal ran for hundreds of years, beginning under the Rashidun Caliphate in the 7th century and continuing through the Umayyad period and well into the Abbasid era. Umar also introduced a child benefit for children and pensions for the elderly. When he felt that a governor or a commander was becoming attracted to wealth or did not meet the required administrative standards, he had him removed from his position. The expansion was partially halted in 638–639 by the great famine in Arabia and the plague in the Levant, but by the end of Umar's reign, Syria, Egypt, Mesopotamia, and much of Persia had been incorporated into the Islamic State.
Local populations of Jews and indigenous Christians, who lived as religious minorities and were taxed (while Muslims paid "Zakat") to finance the Byzantine–Sassanid Wars, often aided the Muslims in taking over their lands from the Byzantines and Persians, resulting in exceptionally speedy conquests. As new areas were conquered, they also benefited from free trade with other areas of the growing Islamic state, where, to encourage commerce, taxes were applied to wealth rather than trade. The Muslims paid Zakat on their wealth for the benefit of the poor. Since the Constitution of Medina, drafted by the Islamic prophet Muhammad, the Jews and the Christians had continued to use their own laws and had their own judges. To assist in the quick expansion of the state, the Byzantine and the Persian tax collection systems were maintained and the people paid a poll tax lower than the one imposed under the Byzantines and the Persians.
In 639, Umar appointed Muawiyah ibn Abi Sufyan as the governor of Syria after the previous governor died in a plague along with 25,000 other people. To stop Byzantine harassment from the sea during the Arab–Byzantine wars, in 649 Muawiyah set up a navy, manned by Monophysite Christian, Coptic and Jacobite Syrian Christian sailors and Muslim troops, which defeated the Byzantine navy at the Battle of the Masts in 655, opening up the Mediterranean to Muslim ships.
Early Muslim armies stayed in encampments away from cities because Umar feared that they might be drawn to wealth and luxury, moving away from the worship of God, accumulating wealth and establishing dynasties. Staying in these encampments away from the cities also ensured that there was no strain on the local populations, which could remain autonomous. Some of these encampments later grew into cities like Basra and Kufa in Iraq and Fustat in Egypt.
When Umar was assassinated in 644, Uthman ibn Affan, a second cousin and twice son-in-law of Muhammad, became the next caliph. As the Arabic language is written without vowels, speakers of different Arabic dialects and other languages recited the Quran with phonetic variations that could alter the meaning of the text. When Uthman ibn Affan became aware of this, he ordered a standard copy of the Quran to be prepared. Begun during his reign, the compilation of the Quran was finished some time between 650 and 656, and copies were sent out to the different centers of the expanding Islamic empire.
After Muhammad's death, the old tribal differences between the Arabs started to resurface. Following the Roman–Persian and the Byzantine–Sassanid Wars, deep-rooted differences between Iraq (formerly under the Persian Sassanid Empire) and Syria (formerly under the Byzantine Empire) also persisted. Each wanted the capital of the newly established Islamic State to be in its area.
As Uthman ibn Affan became very old, Marwan I, a relative of Muawiyah, slipped into the vacuum, becoming his secretary and slowly assuming more control. When Uthman was assassinated in 656, Ali ibn Abi Talib, a cousin and son-in-law of Muhammad, assumed the position of caliph and moved the capital to Kufa in Iraq. Muawiyah I, the governor of Syria, and Marwan I demanded the arrest of the culprits. Marwan I manipulated everyone and created conflict, which resulted in the first civil war (the "First Fitna"). Ali was assassinated by Kharijites in 661. Six months later, in the interest of peace, Ali's son Hasan made a peace treaty with Muawiyah I. In the Hasan–Muawiya treaty, Hasan ibn Ali handed over power to Muawiya on the condition that he would be just to the people and not establish a dynasty after his death. Muawiyah subsequently broke the conditions of the agreement and established the Umayyad dynasty, with a capital in Damascus. Husayn ibn Ali, by then Muhammad's only living grandson, refused to swear allegiance to the Umayyads. He was killed in the Battle of Karbala in 680, an event still mourned by Shia on the Day of Ashura. Unrest, called the Second Fitna, continued, but Muslim rule was extended under Muawiyah to Rhodes, Crete, Kabul, Bukhara, and Samarkand, and expanded in North Africa. In 664, Arab armies conquered Kabul, and in 665 pushed into the Maghreb.
The Umayyad dynasty (or "Ommiads"), whose name derives from Umayya ibn Abd Shams, the great-grandfather of the first Umayyad caliph, ruled from 661 to 750. Although the Umayyad family came from the city of Mecca, Damascus was the capital. After the death of Abdu'l-Rahman ibn Abu Bakr in 666, Muawiyah I consolidated his power. Muawiyah I moved his capital to Damascus from Medina, which led to profound changes in the empire. In the same way, at a later date, the transfer of the Caliphate from Damascus to Baghdad marked the accession of a new family to power.
As the state grew, its expenses increased. Additionally, as the expenses of the Bayt al-mal and its welfare programmes for the Muslim and non-Muslim poor, needy, elderly, orphans, widows, and disabled grew, the Umayyads asked the new converts (mawali) to continue paying the poll tax. The Umayyad rule, with its wealth and luxury, also seemed at odds with the Islamic message preached by Muhammad. All this increased discontent. The descendants of Muhammad's uncle Abbas ibn Abd al-Muttalib rallied discontented "mawali", poor Arabs, and some Shi'a against the Umayyads and overthrew them with the help of the general Abu Muslim, inaugurating the Abbasid dynasty in 750, which moved the capital to Baghdad. A branch of the Umayyad family fled across North Africa to Al-Andalus, where they established the Caliphate of Córdoba, which lasted until 1031, when it fell in the Fitna of al-Andalus. The Bayt al-mal and its welfare functions then continued under the Abbasids.
At its largest extent, the Umayyad dynasty was one of the largest empires the world had yet seen, and the fifth-largest contiguous empire ever.
Muawiyah beautified Damascus and developed a court to rival that of Constantinople. He expanded the frontiers of the empire, reaching the edge of Constantinople at one point, though the Byzantines drove him back and he was unable to hold any territory in Anatolia. Sunni Muslims credit him with saving the fledgling Muslim nation from post-civil war anarchy. However, Shia Muslims accuse him of instigating the war, weakening the Muslim nation by dividing the Ummah, fabricating self-aggrandizing heresies, slandering the Prophet's family, and even selling his Muslim critics into slavery in the Byzantine empire. One of Muawiyah's most controversial and enduring legacies was his decision to designate his son Yazid as his successor. According to Shi'a doctrine, this was a clear violation of the treaty he made with Hasan ibn Ali.
In 682, Yazid restored Uqba ibn Nafi as the governor of North Africa. Uqba won battles against the Berbers and Byzantines. From there Uqba marched thousands of miles westward towards Tangier, where he reached the Atlantic coast, and then marched eastwards through the Atlas Mountains. With about 300 cavalrymen, he proceeded towards Biskra, where he was ambushed by a Berber force under Kaisala. Uqba and all his men died fighting. The Berbers attacked and drove the Muslims from North Africa for a period. Weakened by the civil wars, the Umayyads lost supremacy at sea and had to abandon the islands of Rhodes and Crete. Under the rule of Yazid I, some Muslims in Kufa began to think that Husayn ibn Ali, the grandson of Muhammad, would be a more just ruler. He was invited to Kufa but was later betrayed and killed. Husayn's son, Ali ibn Husayn, was imprisoned along with Husayn's sister and the other women taken at Karbala. Owing to public opposition they were later released and allowed to return to their native Medina. A line of Imams descended from Husayn continued, but the Caliphs of the day opposed them as rivals, until Abdullah al-Mahdi Billah came to power as the first Fatimid Caliph in North Africa, uniting the Caliphate and the Imamate in one person for the first time since Ali. These Imams are recognized by Shia Islam, which counts Ali as the first Caliph and Imam, a view later institutionalized by the Safavids and carried on by branches now known as the Ismailis, Twelvers, and others.
The period under Muawiya II was marked by civil wars (Second Fitna). This would ease in the reign of Abd al-Malik ibn Marwan, a well-educated and capable ruler. Despite the many political problems that impeded his rule, all important records were translated into Arabic. In his reign, a currency for the Muslim world was minted. This led to war with the Byzantine Empire under Justinian II (Battle of Sebastopolis) in 692 in Asia Minor. The Byzantines were decisively defeated by the Caliph after the defection of a large contingent of Slavs. The Islamic currency was then made the exclusive currency in the Muslim world. He reformed agriculture and commerce. Abd al-Malik consolidated Muslim rule and extended it, made Arabic the state language, and organized a regular postal service.
Sulayman ibn Abd al-Malik was hailed as caliph the day al-Walid died. He appointed Yazid ibn al-Muhallab governor of Mesopotamia. Sulayman ordered the arrest and execution of the family of al-Hajjaj, one of two prominent leaders (the other was Qutayba ibn Muslim) who had supported the succession of al-Walid's son Yazid, rather than Sulayman. Al-Hajjaj had predeceased al-Walid, so he posed no threat. Qutayba renounced allegiance to Sulayman, though his troops rejected his appeal to revolt. They killed him and sent his head to Sulayman. Sulayman did not move to Damascus on becoming Caliph, remaining in Ramla. Sulayman sent Maslama ibn Abd al-Malik to attack the Byzantine capital (the siege of Constantinople). The intervention of Bulgaria on the Byzantine side proved decisive. The Muslims sustained heavy losses. Sulayman died suddenly in 717.
Yazid II came to power on the death of Umar II. Yazid fought the Kharijites, with whom Umar had been negotiating, and killed the Kharijite leader Shawdhab. In Yazid's reign, civil wars began in different parts of the empire. Yazid expanded the Caliphate's territory into the Caucasus before dying in 724. Inheriting the caliphate from his brother, Hisham ibn Abd al-Malik ruled an empire with many problems. He was effective in addressing them, allowing the Umayyad empire to continue as an entity. His long rule was an effective one, and it renewed the reforms introduced by Umar II. Under Hisham's rule, regular raids against the Byzantines continued. In North Africa, Kharijite teachings combined with local restlessness to produce the Berber Revolt. He was also faced with a revolt by Zayd ibn Ali. Hisham suppressed both revolts. The Abbasids continued to gain power in Khurasan and Iraq, but were not yet strong enough to make a move; some were caught and punished or executed by eastern governors. The Battle of Akroinon, a decisive Byzantine victory, took place during the final campaign of the Umayyad dynasty. Hisham died in 743.
Al-Walid II saw political intrigue during his reign. Yazid III spoke out against his cousin Walid's "immorality", which included discrimination on behalf of the Banu Qays Arabs against Yemenis and non-Arab Muslims, and Yazid received further support from the Qadariya and Murji'iya (believers in human free will). Walid was shortly thereafter deposed in a coup. Yazid disbursed funds from the treasury and acceded to the Caliphate. He explained that he had rebelled on behalf of the Book of God and the Sunna. Yazid reigned for only six months, while various groups refused allegiance and dissident movements arose, after which he died. Ibrahim ibn al-Walid, named heir apparent by his brother Yazid III, ruled for a short time in 744 before he abdicated. Marwan II ruled from 744 until he was killed in 750. He was the last Umayyad ruler to rule from Damascus. Marwan named his two sons Ubaydallah and Abdallah heirs. He appointed governors and asserted his authority by force. Anti-Umayyad feeling was very prevalent, especially in Iran and Iraq. The Abbasids had gained much support. Marwan's reign as caliph was almost entirely devoted to trying to keep the Umayyad empire together. His death signalled the end of Umayyad rule in the East, and was followed by the massacre of Umayyads by the Abbasids. Almost the entire Umayyad dynasty was killed, except for the talented prince Abd al-Rahman, who escaped to the Iberian Peninsula and founded a dynasty there.
The Abbasid dynasty rose to power in 750, consolidating the gains of the earlier Caliphates. Initially they conquered Mediterranean islands, including the Balearics, and, from 827, southern Italy. The ruling party had come to power on the wave of dissatisfaction with the Umayyads, cultivated by the Abbasid revolutionary Abu Muslim. Under the Abbasids Islamic civilization flourished. Most notable was the development of Arabic prose and poetry, termed by "The Cambridge History of Islam" as its "golden age". Commerce and industry (considered a Muslim Agricultural Revolution) and the arts and sciences (considered a Muslim Scientific Revolution) also prospered under Abbasid caliphs al-Mansur (ruled 754–775), Harun al-Rashid (ruled 786–809), al-Ma'mun (ruled 809–813) and their immediate successors.
The capital was moved from Damascus to Baghdad, due to the importance placed by the Abbasids upon eastern affairs in Persia and Transoxania. At this time the caliphate showed signs of fracture amid the rise of regional dynasties. Although the Umayyad family had been killed by the revolting Abbasids, one family member, Abd ar-Rahman I, escaped to Spain and established an independent caliphate there in 756. In the Maghreb, Harun al-Rashid appointed the Arab Aghlabids as virtually autonomous rulers, although they continued to recognise central authority. Aghlabid rule was short-lived, and they were deposed by the Shiite Fatimid dynasty in 909. By around 960, the Fatimids had conquered Abbasid Egypt, building a capital there in 973 called "al-Qahirah" (meaning "the planet of victory", known today as Cairo). In Persia the Turkic Ghaznavids snatched power from the Abbasids. Abbasid influence had been consumed by the Great Seljuq Empire (a Muslim Turkish clan which had migrated into mainland Persia) by 1055.
Expansion continued, sometimes by force, sometimes by peaceful proselytising. The first stage in the conquest of India began just before the year 1000. Some 200 years later (from 1193 to 1209), the area up to the Ganges river had fallen. In sub-Saharan West Africa, Islam was established just after the year 1000. Muslim rulers were in Kanem starting from sometime between 1081 and 1097, with reports of a Muslim prince at the head of Gao as early as 1009. The Islamic kingdoms associated with Mali reached prominence in the 13th century.
The Abbasids developed initiatives aimed at greater Islamic unity. Different sects of the Islamic faith and mosques, separated by doctrine, history, and practice, were pushed to cooperate. The Abbasids also distinguished themselves from the Umayyads by attacking the Umayyads' moral character and administration. According to Ira Lapidus, "The Abbasid revolt was supported largely by Arabs, mainly the aggrieved settlers of Marw with the addition of the Yemeni faction and their Mawali". The Abbasids also appealed to non-Arab Muslims, known as "mawali", who remained outside the kinship-based society of the Arabs and were perceived as a lower class within the Umayyad empire. Islamic ecumenism, promoted by the Abbasids, refers to the idea of unity of the "Ummah" in the literal meaning: that there was a single faith. Islamic philosophy developed as the Shariah was codified, and the four Madhabs were established. This era also saw the rise of classical Sufism. Religious achievements included the completion of the canonical collections of Hadith of Sahih Bukhari and others. Islam recognized to a certain extent the validity of the Abrahamic religions, the Quran identifying Jews, Christians, Zoroastrians, and "Sabi'un" or "baptists" (usually taken as a reference to the Mandeans and related Mesopotamian groups) as "people of the book". Toward the beginning of the high Middle Ages, the doctrines of the Sunni and Shia, two major denominations of Islam, solidified and the theological divisions of the Islamic world took shape. These trends would continue into the Fatimid and Ayyubid periods.
Politically, the Abbasid Caliphate evolved into an Islamic monarchy (a unitary system of government). The existence, validity, or legality of the regional Sultanate and Emirate governors was acknowledged for the sake of the unity of the state. In the early Islamic philosophy of the Iberian Umayyads, Averroes presented an argument in "The Decisive Treatise", providing a justification for the emancipation of science and philosophy from official Ash'ari theology; thus, Averroism has been considered a precursor to modern secularism.
"Early Middle Ages"
According to Arab sources, in the year 750 Al-Saffah, the founder of the Abbasid Caliphate, launched a massive rebellion against the Umayyad Caliphate from the province of Khurasan near Talas. After eliminating the entire Umayyad family and achieving victory at the Battle of the Zab, Al-Saffah and his forces marched into Damascus and founded a new dynasty. His forces confronted many regional powers and consolidated the realm of the Abbasid Caliphate.
In Al-Mansur's time, Persian scholarship emerged. Many non-Arabs converted to Islam. The Umayyads actively discouraged conversion in order to continue the collection of the jizya, or the tax on non-Muslims. Islam nearly doubled within its territory from 8% of residents in 750 to 15% by the end of Al-Mansur's reign. Al-Mahdi, whose name means "Rightly-guided" or "Redeemer", was proclaimed caliph when his father was on his deathbed. Baghdad blossomed during Al-Mahdi's reign, becoming the world's largest city. It attracted immigrants from Arabia, Iraq, Syria, Persia and as far away as India and Spain. Baghdad was home to Christians, Jews, Hindus, and Zoroastrians, in addition to the growing Muslim population. Like his father, Al-Hadi was open to his people and allowed citizens to address him in the palace at Baghdad. He was considered an "enlightened ruler", and continued the policies of his Abbasid predecessors. His short rule was plagued by military conflicts and internal intrigue.
The military conflicts subsided as Harun al-Rashid ruled. His reign was marked by scientific, cultural and religious prosperity. He established the library Bayt al-Hikma ("House of Wisdom"), and the arts and music flourished during his reign. The Barmakid family played a decisive advisory role in establishing the Caliphate, but declined during Rashid's rule.
Al-Amin received the Caliphate from his father Harun Al-Rashid, but failed to respect the arrangements made for his brothers, leading to the Fourth Fitna. Al-Ma'mun's general Tahir ibn Husayn took Baghdad, executing Al-Amin. The war led to a loss of prestige for the dynasty.
The Abbasids soon became caught in a three-way rivalry among Coptic Arabs, Indo-Persians, and immigrant Turks. In addition, the cost of running a large empire became too great. The Turks, Egyptians, and Arabs adhered to the Sunnite sect; the Persians, a great portion of the Turkic groups, and several of the princes in India were Shia. The political unity of Islam began to disintegrate. Under the influence of the Abbasid caliphs, independent dynasties appeared in the Muslim world and the caliphs recognized such dynasties as legitimately Muslim. The first was the Tahirid dynasty in Khorasan, which was founded during the caliph Al-Ma'mun's reign. Similar dynasties included the Saffarids, Samanids, Ghaznavids and Seljuqs. During this time, advancements were made in the areas of astronomy, poetry, philosophy, science, and mathematics.
Upon Al-Amin's death, Al-Ma'mun became Caliph. Al-Ma'mun extended the Abbasid empire's territory during his reign and dealt with rebellions. Al-Ma'mun had been named governor of Khurasan by Harun, and after his ascension to power, the caliph named Tahir as governor of his military services in order to assure his loyalty. Tahir and his family became entrenched in Iranian politics and became powerful, frustrating Al-Ma'mun's desire to centralize and strengthen Caliphal power. The rising power of the Tahirid dynasty became a threat as Al-Ma'mun's own policies alienated them and other opponents.
Al-Ma'mun worked to centralize power and ensure a smooth succession. Al-Mahdi had proclaimed that the caliph was the protector of Islam against heresy, and also claimed the ability to declare orthodoxy. Religious scholars averred that Al-Ma'mun was overstepping his bounds in the "Mihna", the Abbasid inquisition which he introduced in 833, four months before he died. The "Ulama" emerged as a force in Islamic politics during Al-Ma'mun's reign through their opposition to the inquisition. The "Ulama" and the major Islamic law schools took shape in the period of Al-Ma'mun. In parallel, Sunnism became defined as a religion of laws. Doctrinal differences between Sunni and Shi'a Islam became more pronounced.
During Al-Ma'mun's reign, border wars increased, and he made preparations for a major campaign. Al-Ma'mun gathered scholars of many religions at Baghdad, whom he treated well and with tolerance. He sent an emissary to the Byzantine Empire to collect the most famous manuscripts there, and had them translated into Arabic. His scientists originated alchemy. Shortly before his death, during a visit to Egypt in 832, the caliph ordered the breaching of the Great Pyramid of Giza to search for knowledge and treasure. Workers tunnelled in near where tradition located the original entrance. Al-Ma'mun died near Tarsus under questionable circumstances while leading the expedition, and was succeeded by his half-brother, Al-Mu'tasim, rather than his son, Al-Abbas ibn Al-Ma'mun.
As Caliph, Al-Mu'tasim promptly ordered the dismantling of al-Ma'mun's military base at Tyana. He faced Khurramite revolts. One of the most difficult problems facing this Caliph was the ongoing uprising of Babak Khorramdin. Al-Mu'tasim overcame the rebels and secured a significant victory. Byzantine emperor Theophilus launched an attack against Abbasid fortresses. Al-Mu'tasim sent Al-Afshin, who met and defeated Theophilus' forces at the Battle of Anzen. On his return he became aware of a serious military conspiracy which forced him and his successors to rely upon Turkish commanders and ghilman slave-soldiers (foreshadowing the Mamluk system). The Khurramiyyah were never fully suppressed, although they slowly declined during the reigns of succeeding Caliphs. Near the end of al-Mu'tasim's life there was an uprising in Palestine, but he defeated the rebels.
During Al-Mu'tasim's reign, the Tahirid dynasty continued to grow in power. The Tahirids were exempted from many tribute and oversight functions. Their independence contributed to Abbasid decline in the east. Ideologically, al-Mu'tasim followed his half-brother al-Ma'mun. He continued his predecessor's support for the Islamic Mu'tazila sect, applying brutal torture against the opposition. Arab mathematician Al-Kindi was employed by Al-Mu'tasim and tutored the Caliph's son. Al-Kindi had served at the House of Wisdom and continued his studies in Greek geometry and algebra under the caliph's patronage.
Al-Wathiq succeeded his father. Al-Wathiq dealt with opposition in Arabia, Syria, Palestine and in Baghdad. Using a famous sword, he personally joined the execution of the Baghdad rebels. The revolts were the result of an increasingly large gap between the Arab populations and the Turkish armies. The revolts were put down, but the antagonism between the two groups grew as Turkish forces gained power. He also secured a captive exchange with the Byzantines. Al-Wathiq was a patron of scholars, as well as artists. He personally had musical talent and is reputed to have composed over one hundred songs.
When Al-Wathiq died of a high fever, Al-Mutawakkil succeeded him. Al-Mutawakkil's reign is remembered for many reforms and is viewed as a golden age. He was the last great Abbasid caliph; after his death the dynasty fell into decline. Al-Mutawakkil ended the Mihna. Al-Mutawakkil built the Great Mosque of Samarra as part of an extension of Samarra eastwards. During his reign, Al-Mutawakkil met the famous Byzantine theologian Constantine the Philosopher, who was sent by Emperor Michael III to strengthen diplomatic relations between the Empire and the Caliphate. Al-Mutawakkil involved himself in religious debates, as reflected in his actions against minorities. The Shīʻa faced repression, embodied in the destruction of the shrine of Hussayn ibn ʻAlī, an action that was ostensibly carried out to stop pilgrimages. Al-Mutawakkil continued to rely on Turkish statesmen and slave soldiers to put down rebellions and lead battles against foreign empires, notably capturing Sicily from the Byzantines. Al-Mutawakkil was assassinated by a Turkish soldier.
Al-Muntasir succeeded to the Caliphate on the same day, with the support of the Turkish faction, though he was implicated in the murder. The Turkish party had al-Muntasir remove his brothers from the line of succession, fearing revenge for the murder of their father. Both brothers wrote statements of abdication. During his reign, Al-Muntasir removed the ban on pilgrimage to the tombs of Hassan and Hussayn and sent Wasif to raid the Byzantines. Al-Muntasir died of unknown causes. The Turkish chiefs held a council to select his successor, electing Al-Musta'in. The Arabs and western troops from Baghdad were displeased at the choice and attacked. However, the Caliphate no longer depended on Arabian choice, but on Turkish support. After the failed Muslim campaign against the Christians, people blamed the Turks for bringing disaster on the faith and murdering their Caliphs. After the Turks besieged Baghdad, Al-Musta'in planned to abdicate in favour of Al-Mu'tazz but was put to death by his order. Al-Mu'tazz was enthroned by the Turks, becoming the youngest Abbasid Caliph to assume power.
Al-Mu'tazz proved too apt a pupil of his Turkish masters, but was surrounded by parties jealous of each other. At Samarra, the Turks were having problems with the "Westerns" (Berbers and Moors), while the Arabs and Persians at Baghdad, who had supported al-Musta'in, regarded both with equal hatred. Al-Mu'tazz put his brothers Al-Mu'eiyyad and Abu Ahmed to death. The ruler spent recklessly, causing a revolt of Turks, Africans, and Persians over their pay. Al-Mu'tazz was brutally deposed shortly thereafter. Al-Muhtadi became the next Caliph. He was firm and virtuous compared to the earlier Caliphs, though the Turks held the power. The Turks killed him soon after his ascension. Al-Mu'tamid followed, holding on for 23 years, though he was largely a ruler in name only. After the Zanj Rebellion, Al-Mu'tamid summoned al-Muwaffaq to help him. Thereafter, Al-Muwaffaq ruled in all but name. The Hamdanid dynasty was founded by Hamdan ibn Hamdun when he was appointed governor of Mardin in Anatolia by the Caliphs in 890. Al-Mu'tamid later transferred authority to his son, al-Mu'tadid, and never regained power. The Tulunids became the first independent state in Islamic Egypt when they broke away during this time.
Al-Mu'tadid ably administered the Caliphate. Egypt returned to allegiance and Mesopotamia was restored to order. He was tolerant towards the Shi'a, but toward the Umayyad community he was not so just. Al-Mu'tadid was cruel in his punishments, some of which were not surpassed by those of his predecessors. For example, the Kharijite leader at Mosul was paraded about Baghdad clothed in a robe of silk, which the Kharijites denounced as sinful, and then crucified. Upon Al-Mu'tadid's death, his son by a Turkish slave-girl, Al-Muktafi, succeeded to the throne.
Al-Muktafi became a favourite of the people for his generosity, and for abolishing his father's secret prisons, the terror of Baghdad. During his reign, the Caliphate overcame threats such as the Carmathians. Upon Al-Muktafi's death, the vazir next chose Al-Muqtadir. Al-Muqtadir's reign was a constant succession of thirteen Vazirs, one rising on the fall or assassination of another. His long reign brought the Empire to its lowest ebb. Africa was lost, and Egypt nearly. Mosul threw off its dependence, and the Greeks raided across the undefended border. The East continued to formally recognise the Caliphate, including those who virtually claimed independence.
At the end of the Early Baghdad Abbasids period, Empress Zoe Karbonopsina pressed for an armistice with Al-Muqtadir and arranged for the ransom of Muslim prisoners while the Byzantine frontier was threatened by the Bulgarians. This only added to Baghdad's disorder. Though despised by the people, Al-Muqtadir was again placed in power after upheavals. Al-Muqtadir was eventually slain outside the city gates, whereupon courtiers chose his brother al-Qahir. He was even worse. Refusing to abdicate, he was blinded and cast into prison.
Al-Muqtadir's son Ar-Radi then took over, only to experience a cascade of misfortune. Praised for his piety, he became the tool of the de facto ruling minister, Ibn Raik ("amir al-umara"; 'Amir of the Amirs'). Ibn Raik held the reins of government and his name was joined with the Caliph's in public prayers. Around this period, the Hanbalis, supported by popular sentiment, set up in fact a kind of 'Sunni inquisition'. Ar-Radi is commonly regarded as the last of the real Caliphs: the last to deliver orations at the Friday service, to hold assemblies, to commune with philosophers, to discuss the questions of the day, to take counsel on the affairs of State, to distribute alms, or to temper the severity of cruel officers. Thus ended the Early Baghdad Abbasids.
In the mid-930s, the Ikhshidids of Egypt carried the Arabic title "Wali", reflecting their position as governors on behalf of the Abbasids. The first governor, Muhammad bin Tughj Al-Ikhshid, was installed by the Abbasid Caliph, and he and his descendants were granted the Wilayah for 30 years. The name Ikhshid is Sogdian for "prince".
Also in the 930s, 'Alī ibn Būyah and his two younger brothers, al-Hassan and Aḥmad, founded the Būyid confederation. Originally a soldier in the service of the Ziyārīds of Ṭabaristān, 'Alī was able to recruit an army to defeat a Turkish general from Baghdad named Yāqūt in 934. Over the next nine years the three brothers gained control of the remainder of the caliphate, while accepting the titular authority of the caliph in Baghdad. The Būyids made large territorial gains. Fars and Jibal were conquered. Central Iraq submitted in 945, before the Būyids took Kermān (967), Oman (967), the Jazīra (979), Ṭabaristān (980), and Gorgan (981). After this the Būyids went into slow decline, with pieces of the confederation gradually breaking off and local dynasties under their rule becoming "de facto" independent.
"Early High Middle Ages"
At the beginning of the Middle Baghdad Abbasids, the Caliphate had become of little importance. The "amir al-umara" Bajkam contented himself with dispatching his secretary to Baghdad to assemble local dignitaries to elect a successor. The choice fell on Al-Muttaqi. Bajkam was killed during a hunting party by marauding Kurds. In the ensuing anarchy in Baghdad, Ibn Raik persuaded the Caliph to flee to Mosul, where he was welcomed by the Hamdanids. They assassinated Ibn Raik. Hamdanid Nasir al-Dawla advanced on Baghdad, where mercenaries and well-organised Turks repelled them. The Turkish general Tuzun became "amir al-umara". The Turks were staunch Sunnis. A fresh conspiracy placed the Caliph in danger. Hamdanid troops helped ad-Daula escape to Mosul and then to Nasibin. Tuzun and the Hamdanids were stalemated. Al-Muttaqi was at Raqqa; when he moved to Tuzun's camp, he was deposed and blinded. Tuzun installed the Caliph's cousin as successor, with the title of Al-Mustakfi. With the new Caliph, Tuzun attacked the Buwayhid dynasty and the Hamdanids. Soon after, Tuzun died, and was succeeded by one of his generals, Abu Ja'far. The Buwayhids then attacked Baghdad, and Abu Ja'far fled into hiding with the Caliph. Buwayhid Sultan Muiz ud-Daula assumed command, forcing the Caliph into abject submission to the Amir. Eventually, Al-Mustakfi was blinded and deposed. The city fell into chaos, and the Caliph's palace was looted.
Once the Buwayhids controlled Baghdad, Al-Muti became caliph. The office was shorn of real power and Shi'a observances were established. The Buwayhids held on to Baghdad for over a century. Throughout the Buwayhid reign the Caliphate was at its lowest ebb, but was recognized religiously, except in Iberia. Buwayhid Sultan Mu'izz al-Dawla was prevented from raising a Shi'a Caliph to the throne by fear for his own safety, and fear of rebellion, in the capital and beyond.
The next Caliph, Al-Ta'i, reigned over factional strife in Syria among the Fatimids, Turks, and Carmathians. The Buyid dynasty also fractured. The Abbasid borders were defended only by small border states. Baha' al-Dawla, the Buyid amir of Iraq, deposed al-Ta'i in 991 and proclaimed al-Qadir the new caliph.
During al-Qadir's Caliphate, Mahmud of Ghazni looked after the empire. Mahmud of Ghazni, of Eastern fame, was friendly towards the Caliphs, and his victories in India were accordingly announced from the pulpits of Baghdad in grateful and glowing terms. Al-Qadir fostered the Sunni struggle against Shiʿism and outlawed heresies such as the Baghdad Manifesto and the doctrine that the Quran was created. He outlawed the Muʿtazila, bringing an end to the development of rationalist Muslim philosophy. During this and the next period, Islamic literature, especially Persian literature, flourished under the patronage of the Buwayhids. By 1000, the global Muslim population had climbed to about 4 percent of the world, compared to the Christian population of 10 percent.
During Al-Qa'im's reign, the Buwayhid ruler often fled the capital and the Seljuq dynasty gained power. Toghrül overran Syria and Armenia. He then made his way into the capital, where he was well received both by chiefs and people. In Bahrain, the Qarmatian state collapsed in Al-Hasa. Arabia was recovered from the Fatimids and again acknowledged the spiritual jurisdiction of the Abbasids. Al-Muqtadi was honoured by the Seljuq Sultan Malik-Shah I, during whose reign the Caliphate was recognized throughout the extending range of Seljuq conquest. The Sultan was critical of the Caliph's interference in affairs of state, but died before deposing the last of the Middle Baghdad Abbasids.
"Late High Middle Ages"
The Late Baghdad Abbasids reigned from the beginning of the Crusades to the Seventh Crusade. The first Caliph was Al-Mustazhir. He was politically irrelevant, despite civil strife at home and the First Crusade in Syria. Raymond IV of Toulouse attempted to attack Baghdad, losing at the Battle of Manzikert. The global Muslim population climbed to about 5 per cent as against the Christian population of 11 per cent by 1100. Jerusalem was captured by crusaders who massacred its inhabitants. Preachers travelled throughout the caliphate proclaiming the tragedy and rousing men to recover the Al-Aqsa Mosque from the "Franks" (European Crusaders). Crowds of exiles rallied for war against the infidel. Neither the Sultan nor the Caliph sent an army west.
Al-Mustarshid achieved more independence while the sultan Mahmud II of Great Seljuq was engaged in war in the East. The Banu Mazyad (Mazyadid State) general, Dubays ibn Sadaqa (emir of Al-Hilla), plundered Bosra and attacked Baghdad together with a young brother of the sultan, Ghiyath ad-Din Mas'ud. Dubays was crushed by a Seljuq army under Zengi, founder of the Zengid dynasty. Zengi, urged on by the Caliph and Dubays, was recalled to the East, where he was beaten. The Caliph then laid siege to Mosul for three months without success, resisted by Mas'ud and Zengi. It was nonetheless a milestone in the caliphate's military revival.
After the siege of Damascus (1134), Zengi undertook operations in Syria. Al-Mustarshid attacked sultan Mas'ud of western Seljuq and was taken prisoner. He was later found murdered. His son, Ar-Rashid, failed to gain independence from the Seljuq Turks. Zengi, because of the murder of Dubays, set up a rival Sultanate. Mas'ud attacked; the Caliph and Zengi, hopeless of success, escaped to Mosul. The Sultan regained power, a council was held, the Caliph was deposed, and his uncle, son of Al-Muqtafi, was appointed as the new Caliph. Ar-Rashid fled to Isfahan and was killed by Hashshashins.
Continued disunion and contests between Seljuq Turks allowed al-Muqtafi to maintain control in Baghdad and to extend it throughout Iraq. In 1139, al-Muqtafi granted protection to the Nestorian patriarch Abdisho III. While the Crusade raged, the Caliph successfully defended Baghdad against Muhammad II of Seljuq in the Siege of Baghdad (1157). The Sultan and the Caliph dispatched men in response to Zengi's appeal, but neither the Seljuqs, nor the Caliph, nor their Amirs, dared resist the Crusaders.
The next caliph, Al-Mustanjid, saw Saladin extinguish the Fatimid dynasty after 260 years, and thus the Abbasids again prevailed. Al-Mustadi reigned when Saladin became the sultan of Egypt and declared allegiance to the Abbasids.
An-Nasir, "The Victor for the Religion of God", attempted to restore the Caliphate to its ancient dominant role. He consistently held Iraq from Tikrit to the Gulf without interruption. His forty-seven-year reign was chiefly marked by ambitious and corrupt dealings with the Tartar chiefs, and by his hazardous invocation of the Mongols, which ended his dynasty. His son, Az-Zahir, was Caliph for a short period before his death, and An-Nasir's grandson, Al-Mustansir, was made caliph.
Al-Mustansir founded the Mustansiriya Madrasah. In 1236 Ögedei Khan commanded that Khorasan be restored and Herat repopulated. The Mongol military governors mostly made their camp in the Mughan plain, Azerbaijan. The rulers of Mosul and Cilician Armenia surrendered. Chormaqan divided the Transcaucasia region into three districts based on military hierarchy. In Georgia, the population were temporarily divided into eight tumens. By 1237 the Mongol Empire had subjugated most of Persia, excluding Abbasid Iraq and Ismaili strongholds, and all of Afghanistan and Kashmir.
Al-Musta'sim was the last Abbasid Caliph in Baghdad and is noted for his opposition to the rise of Shajar al-Durr to the Egyptian throne during the Seventh Crusade. To the east, Mongol forces under Hulagu Khan swept through Transoxiana and Khorasan. Baghdad was sacked and the caliph deposed soon afterwards. The Mamluk sultans of Egypt and Syria later appointed a powerless Abbasid Caliph in Cairo.
"Abbasid "shadow" caliph of Cairo"
"Late Middle Ages"
The Abbasid "shadow" caliphs of Cairo reigned under the tutelage of the Mamluk sultans, as nominal rulers used to legitimize the actual rule of the Mamluk sultans. All the Cairene Abbasid caliphs who preceded or succeeded Al-Musta'in were spiritual heads lacking any temporal power. Al-Musta'in was the only Cairo-based Abbasid caliph to even briefly hold political power. Al-Mutawakkil III was the last "shadow" caliph. In 1517, Ottoman sultan Selim I defeated the Mamluk Sultanate, and made Egypt part of the Ottoman Empire.
The Fatimids originated in Ifriqiya (modern-day Tunisia and eastern Algeria). The dynasty was founded in 909 by ʻAbdullāh al-Mahdī Billah, who legitimized his claim through descent from Muhammad by way of his daughter Fātima as-Zahra and her husband ʻAlī ibn-Abī-Tālib, the first Shīʻa Imām, hence the name "al-Fātimiyyūn" "Fatimid". The Fatimids and the Zaydis at the time used the Hanafi jurisprudence, as did most Sunnis.
Abdullāh al-Mahdi's control soon extended over all of central Maghreb, an area consisting of the modern countries of Morocco, Algeria, Tunisia and Libya, which he ruled from Mahdia, his capital in Tunisia.
The Fatimids entered Egypt in the late 10th century, conquering the Ikhshidid dynasty and founding a capital at "al-Qāhira" (Cairo) in 969. The name was a reference to the planet Mars, "The Subduer", which was prominent in the sky at the moment that city construction started. Cairo was intended as a royal enclosure for the Fatimid caliph and his army, though the actual administrative and economic capital of Egypt was in cities such as Fustat until 1169. After Egypt, the Fatimids continued to conquer surrounding areas until they ruled from Tunisia to Syria and even crossed the Mediterranean into Sicily and southern Italy.
Under the Fatimids, Egypt became the center of an empire that included at its peak North Africa, Sicily, Palestine, Lebanon, Syria, the Red Sea coast of Africa, Yemen and the Hejaz. Egypt flourished, and the Fatimids developed an extensive trade network in both the Mediterranean and the Indian Ocean. Their trade and diplomatic ties extended all the way to China and its Song dynasty, which determined the economic course of Egypt during the High Middle Ages.
Unlike other governments in the area, Fatimid advancement in state offices was based more on merit than heredity. Members of other branches of Islam, including Sunnis, were just as likely to be appointed to government posts as Shiites. Tolerance extended to non-Muslims such as Christians and Jews, who occupied high posts in government based on ability. There were, however, exceptions to this general attitude of tolerance, notably Al-Hakim bi-Amr Allah.
The Fatimid palace was built in two parts, in the Khan el-Khalili area on Bayn El-Qasryn street.
"Early and High Middle Ages"
During the beginning of the Middle Baghdad Abbasids, the Fatimid Caliphs claimed spiritual supremacy not only in Egypt, but also contested the religious leadership of Syria. At the beginning of the Abbasid realm in Baghdad, the Alids faced severe persecution by the ruling party, as they were a direct threat to the Caliphate. Owing to the Abbasid inquisitions, the forefathers opted for concealment of the Dawa's existence. Subsequently, they travelled towards the Iranian Plateau and distanced themselves from the epicenter of the political world. Al Mahdi's father, Al Husain al Mastoor, returned to control the Dawa's affairs. He sent two Dai's to Yemen and Western Africa. Al Husain died soon after the birth of his son, Al Mahdi. A system of government helped update Al Mahdi on the developments which took place in North Africa.
Abdullah al-Mahdi Billah was established as the first Imam of the Fatimid dynasty. He claimed genealogic origins dating as far back as Fatimah through Husayn and Ismail. Al Mahdi established his headquarters at Salamiyah and moved towards north-western Africa, under Aghlabid rule. His success in laying claim to being the precursor to the Mahdi was instrumental among the Berber tribes of North Africa, specifically the Kutamah tribe. Al Mahdi established himself at the former Aghlabid residence at Raqqadah, a suburb of Al-Qayrawan in Tunisia. In 920, Al Mahdi took up residence at the newly established capital of the empire, Al-Mahdiyyah. After his death, Al Mahdi was succeeded by his son, Abu Al-Qasim Muhammad Al-Qaim, who continued his expansionist policy. At the time of his death he had extended his reign to Morocco of the Idrisids, as well as Egypt itself. The Fatimid Caliphate grew to include Sicily and to stretch across North Africa from the Atlantic Ocean to Libya. Abdullāh al-Mahdi's control soon extended over all of central Maghreb, an area consisting of the modern countries of Morocco, Algeria, Tunisia, and Libya, which he ruled from Mahdia, in Tunisia. The newly built capital Al-Mansuriya, or Mansuriyya, near Kairouan, Tunisia, was the capital of the Fatimid Caliphate during the reigns of the Imams Al-Mansur Billah (r. 946–953) and Al-Mu'izz li-Din Allah (r. 953–975).
The Fatimid general Jawhar conquered Egypt in 969, and he built a new palace city there, near Fusṭāt, which he also called al-Manṣūriyya. Under Al-Muizz Lideenillah, the Fatimids conquered the Ikhshidid Wilayah (see Fatimid Egypt), founding a new capital at "al-Qāhira" (Cairo) in 969. The name was a reference to the planet Mars, "The Subduer", which was prominent in the sky at the moment that city construction started.
Hittites
The Hittites (Latin "Hetthaei") were an Anatolian people who played an important role in establishing an empire centered on Hattusa in north-central Anatolia around 1600 BCE. This empire reached its height during the mid-14th century BCE under Šuppiluliuma I, when it encompassed an area that included most of Anatolia as well as parts of the northern Levant and Upper Mesopotamia.
Between the 15th and 13th centuries BCE, the Empire of Hattusa, conventionally called the Hittite Empire, came into conflict with the New Kingdom of Egypt, the Middle Assyrian Empire and the empire of the Mitanni for control of the Near East. The Middle Assyrian Empire eventually emerged as the dominant power and annexed much of the Hittite Empire, while the remainder was sacked by Phrygian newcomers to the region. After c. 1180 BCE, during the Late Bronze Age collapse, the Hittites splintered into several independent Syro-Hittite states, some of which survived until the eighth century BCE before succumbing to the Neo-Assyrian Empire.
The Hittite language was a distinct member of the Anatolian branch of the Indo-European language family, and along with the closely related Luwian language, is the oldest historically attested Indo-European language, referred to by its speakers as being "in the language of Nesa". The Hittites called their country the "Kingdom of Hattusa" (Hatti in Akkadian), a name received from the Hattians, an earlier people who inhabited the region until the beginning of the second millennium BCE and spoke an unrelated language known as Hattic. The conventional name "Hittites" is due to their initial identification with the Biblical Hittites in 19th century archaeology.
The history of the Hittite civilization is known mostly from cuneiform texts found in the area of their kingdom, and from diplomatic and commercial correspondence found in various archives in Assyria, Babylonia, Egypt and the Middle East, the decipherment of which was also a key event in the history of Indo-European studies.
The development of iron smelting was once attributed to the Hittites of Anatolia during the Late Bronze Age, with their success largely based on the advantages of a monopoly on ironworking at the time. But the view of such a "Hittite monopoly" has come under scrutiny and is no longer a scholarly consensus. As part of the Late Bronze Age/Early Iron Age transition, the Late Bronze Age collapse saw the slow, comparatively continuous spread of iron-working technology in the region. While there are some iron objects from Bronze Age Anatolia, the number is comparable to iron objects found in Egypt and other places during the period, and only a small number of these objects are weapons. The Hittites did not use smelted iron, but rather meteoric iron. The Hittite military made successful use of chariots.
In classical times, ethnic Hittite dynasties survived in small kingdoms scattered around what is now Syria, Lebanon and Israel. Lacking a unifying continuity, their descendants scattered and ultimately merged into the modern populations of the Levant, Turkey and Mesopotamia.
During the 1920s, interest in the Hittites increased with the founding of Turkey and attracted the attention of Turkish archaeologists such as Halet Çambel and Tahsin Özgüç. During this period, the new field of Hittitology also influenced the naming of Turkish institutions, such as the state-owned "Etibank" ("Hittite bank"), and the foundation of the Museum of Anatolian Civilizations in Ankara, 200 kilometers west of the Hittite capital and housing the most comprehensive exhibition of Hittite art and artifacts in the world.
Before the archeological discoveries that revealed the Hittite civilization, the only source of information about the Hittites had been the Old Testament. Francis William Newman expressed the critical view, common in the early 19th century, that, "no Hittite king could have compared in power to the King of Judah...".
As the discoveries in the second half of the 19th century revealed the scale of the Hittite kingdom, Archibald Sayce asserted that, rather than being compared to Judah, the Anatolian civilization "[was] worthy of comparison to the divided Kingdom of Egypt", and was "infinitely more powerful than that of Judah". Sayce and other scholars also noted that Judah and the Hittites were never enemies in the Hebrew texts; in the Book of Kings, they supplied the Israelites with cedar, chariots, and horses, and in the Book of Genesis were friends and allies to Abraham. Uriah the Hittite was a captain in King David's army and counted as one of his "mighty men" in 1 Chronicles 11.
French scholar Charles Texier found the first Hittite ruins in 1834 but did not identify them as such.
The first archaeological evidence for the Hittites appeared in tablets found at the "karum" of Kanesh (now called Kültepe), containing records of trade between Assyrian merchants and a certain "land of "Hatti"". Some names in the tablets were neither Hattic nor Assyrian, but clearly Indo-European.
The script on a monument at Boğazkale by a "People of Hattusas" discovered by William Wright in 1884 was found to match peculiar hieroglyphic scripts from Aleppo and Hama in Northern Syria. In 1887, excavations at Amarna in Egypt uncovered the diplomatic correspondence of Pharaoh Amenhotep III and his son, Akhenaten. Two of the letters from a "kingdom of "Kheta""—apparently located in the same general region as the Mesopotamian references to "land of "Hatti""—were written in standard Akkadian cuneiform, but in an unknown language; although scholars could interpret its sounds, no one could understand it. Shortly after this, Sayce proposed that "Hatti" or "Khatti" in Anatolia was identical with the "kingdom of "Kheta"" mentioned in these Egyptian texts, as well as with the biblical Hittites. Others, such as Max Müller, agreed that "Khatti" was probably "Kheta", but proposed connecting it with Biblical Kittim rather than with the Biblical Hittites. Sayce's identification came to be widely accepted over the course of the early 20th century; and the name "Hittite" has become attached to the civilization uncovered at Boğazköy.
During sporadic excavations at Boğazköy (Hattusa) that began in 1906, the archaeologist Hugo Winckler found a royal archive with 10,000 tablets, inscribed in cuneiform Akkadian and the same unknown language as the Egyptian letters from "Kheta"—thus confirming the identity of the two names. He also proved that the ruins at Boğazköy were the remains of the capital of an empire that, at one point, controlled northern Syria.
Under the direction of the German Archaeological Institute, excavations at Hattusa have been under way since 1907, with interruptions during the world wars. Kültepe was successfully excavated by Professor Tahsin Özgüç from 1948 until his death in 2005. Smaller scale excavations have also been carried out in the immediate surroundings of Hattusa, including the rock sanctuary of Yazılıkaya, which contains numerous rock reliefs portraying the Hittite rulers and the gods of the Hittite pantheon.
The Hittites used a variation of cuneiform called Hittite cuneiform. Archaeological expeditions to Hattusa have discovered entire sets of royal archives on cuneiform tablets, written either in Akkadian, the diplomatic language of the time, or in the various dialects of the Hittite confederation.
The Museum of Anatolian Civilizations in Ankara, Turkey houses the richest collection of Hittite and Anatolian artifacts.
The Hittite kingdom was centred on the lands surrounding Hattusa and Neša (Kültepe), known as "the land Hatti". After Hattusa was made capital, the area encompassed by the bend of the Kızılırmak River (Hittite "Marassantiya") was considered the core of the Empire, and some Hittite laws make a distinction between "this side of the river" and "that side of the river". For example, the reward for the capture of an escaped slave after he managed to flee beyond the Halys is higher than that for a slave caught before he could reach the river.
To the west and south of the core territory lay the region known as "Luwiya" in the earliest Hittite texts. This terminology was replaced by the names Arzawa and Kizzuwatna with the rise of those kingdoms. Nevertheless, the Hittites continued to refer to the language that originated in these areas as Luwian. Prior to the rise of Kizzuwatna, the heart of that territory in Cilicia was first referred to by the Hittites as Adaniya. Upon its revolt from the Hittites during the reign of Ammuna, it assumed the name of Kizzuwatna and successfully expanded northward to encompass the lower Anti-Taurus Mountains as well. To the north, lived the mountainous people called the Kaskians. To the southeast of the Hittites lay the Hurrian empire of Mitanni. At its peak, during the reign of Muršili II, the Hittite empire stretched from Arzawa in the west to Mitanni in the east, many of the Kaskian territories to the north including Hayasa-Azzi in the far north-east, and on south into Canaan approximately as far as the southern border of Lebanon, incorporating all of these territories within its domain.
It is generally assumed that the Hittites came into Anatolia some time before 2000 BC. While their earlier location is disputed, it has been speculated by scholars for more than a century that the Yamnaya culture of the Pontic–Caspian steppe, in present-day Ukraine, around the Sea of Azov, spoke an early Indo-European language during the third and fourth millennia BC.
The arrival of the Hittites in Anatolia in the Bronze Age was one of a superstrate imposing itself on a native culture (in this case over the pre-existing Hattians and Hurrians), either by means of conquest or by gradual assimilation. In archaeological terms, relationships of the Hittites to the Ezero culture of the Balkans and Maykop culture of the Caucasus have been considered within the migration framework. The Indo-European element at least establishes Hittite culture as intrusive to Anatolia in scholarly mainstream (excepting the opinions of Colin Renfrew, whose Anatolian hypothesis assumes that Indo-European is indigenous to Anatolia, and, more recently, Quentin Atkinson).
According to Anthony, steppe herders, archaic Proto-Indo-European speakers, spread into the lower Danube valley about 4200–4000 BC, either causing or taking advantage of the collapse of Old Europe. Their languages "probably included archaic Proto-Indo-European dialects of the kind partly preserved later in Anatolian." Their descendants later moved into Anatolia at an unknown time but maybe as early as 3000 BC. According to J. P. Mallory it is likely that the Anatolians reached the Near East from the north either via the Balkans or the Caucasus in the 3rd millennium BC. According to Parpola, the appearance of Indo-European speakers from Europe into Anatolia, and the appearance of Hittite, is related to later migrations of Proto-Indo-European speakers from the Yamnaya culture into the Danube Valley at c. 2800 BC, which is in line with the "customary" assumption that the Anatolian Indo-European language was introduced into Anatolia sometime in the third millennium BC.
Their movement into the region may have set off a Near East mass migration sometime around 1900 BC. The dominant indigenous inhabitants in central Anatolia at the time were Hurrians and Hattians who spoke non-Indo-European languages. Some have argued that Hattic was a Northwest Caucasian language, but its affiliation remains uncertain, whilst the Hurrian language was a near-isolate (i.e. it was one of only two or three languages in the Hurro-Urartian family). There were also Assyrian colonies in the region during the Old Assyrian Empire (2025–1750 BC); it was from the Assyrian speakers of Upper Mesopotamia that the Hittites adopted the cuneiform script. It took some time before the Hittites established themselves following the collapse of the Old Assyrian Empire in the mid-18th century BC, as is clear from some of the texts included here. For several centuries there were separate Hittite groups, usually centered on various cities. But then strong rulers with their center in Hattusa (modern Boğazkale) succeeded in bringing these together and conquering large parts of central Anatolia to establish the Hittite kingdom.
The early history of the Hittite kingdom is known through tablets that may first have been written in the 17th century BC, possibly in Hittite, but which survived only as Akkadian copies made in the 14th and 13th centuries BC. These reveal a rivalry between two branches of the royal family up to the Middle Kingdom: a northern branch first based in Zalpuwa and secondarily Hattusa, and a southern branch based in Kussara (still not found) and the former Assyrian colony of Kanesh. These are distinguishable by their names; the northerners retained language isolate Hattian names, and the southerners adopted Indo-European Hittite and Luwian names.
Zalpuwa first attacked Kanesh under Uhna in 1833 BC.
One set of tablets, known collectively as the Anitta text, begins by telling how Pithana the king of Kussara conquered neighbouring Neša (Kanesh). However, the real subject of these tablets is Pithana's son Anitta, who continued where his father left off and conquered several northern cities, including Hattusa, which he cursed, and also Zalpuwa. This was likely propaganda for the southern branch of the royal family, against the northern branch who had fixed on Hattusa as capital. Another set, the Tale of Zalpuwa, supports Zalpuwa and exonerates the later Ḫattušili I from the charge of sacking Kanesh.
Anitta was succeeded by Zuzzu; but sometime in 1710–1705 BC, Kanesh was destroyed, taking the long-established Assyrian merchant trading system with it. A Kussaran noble family survived to contest the Zalpuwan/Hattusan family, though whether these were of the direct line of Anitta is uncertain.
Meanwhile, the lords of Zalpa lived on. Huzziya I, descendant of a Huzziya of Zalpa, took over Hatti. His son-in-law Labarna I, a southerner from Hurma (now Kalburabastı) usurped the throne but made sure to adopt Huzziya's grandson Ḫattušili as his own son and heir.
The founding of the Hittite Kingdom is attributed to either Labarna I or Hattusili I (the latter might also have had Labarna as a personal name), who conquered the area south and north of Hattusa. Hattusili I campaigned as far as the Semitic Amorite kingdom of Yamkhad in Syria, where he attacked, but did not capture, its capital of Aleppo. Hattusili I did eventually capture Hattusa and was credited for the foundation of the Hittite Empire. According to "The Edict of Telepinu", dating to the 16th century BC, "Hattusili was king, and his sons, brothers, in-laws, family members, and troops were all united. Wherever he went on campaign he controlled the enemy land with force. He destroyed the lands one after the other, took away their power, and made them the borders of the sea. When he came back from campaign, however, each of his sons went somewhere to a country, and in his hand the great cities prospered. But, when later the princes' servants became corrupt, they began to devour the properties, conspired constantly against their masters, and began to shed their blood." This excerpt from the edict is supposed to illustrate the unification, growth, and prosperity of the Hittites under his rule. It also illustrates the corruption of "the princes", believed to be his sons. The lack of sources leaves it uncertain how the corruption was addressed. On Hattusili I's deathbed, he chose his grandson, Mursili I, as his heir. Mursili I conquered Aleppo in a campaign against the Amorites in 1595 BC (middle chronology).
Also in 1595 BC, Mursili I (or Murshilish I) conducted a great raid down the Euphrates River, bypassing Assyria, and captured Mari and Babylonia, ejecting the Amorite founders of the Babylonian state in the process. However, internal dissension forced a withdrawal of troops to the Hittite homelands. Throughout the remainder of the 16th century BC, the Hittite kings were held to their homelands by dynastic quarrels and warfare with the Hurrians—their neighbours to the east. Also the campaigns into Amurru (modern Syria) and southern Mesopotamia may be responsible for the reintroduction of cuneiform writing into Anatolia, since the Hittite script is quite different from that of the preceding Assyrian Colonial period.
Mursili continued the conquests of Hattusili I. Mursili's conquests reached southern Mesopotamia and even ransacked Babylon itself in 1531 BC (short chronology). Rather than incorporate Babylonia into Hittite domains, Mursili seems to have instead turned control of Babylonia over to his Kassite allies, who were to rule it for the next four centuries. This lengthy campaign strained the resources of Hatti, and left the capital in a state of near-anarchy. Mursili was assassinated shortly after his return home, and the Hittite Kingdom was plunged into chaos. The Hurrians (under the control of an Indo-Aryan Mitanni ruling class), a people living in the mountainous region along the upper Tigris and Euphrates rivers in modern south east Turkey, took advantage of the situation to seize Aleppo and the surrounding areas for themselves, as well as the coastal region of Adaniya, renaming it Kizzuwatna (later Cilicia).
Following this, the Hittites entered a weak phase of obscure records, insignificant rulers, and reduced domains. This pattern of expansion under strong kings followed by contraction under weaker ones, was to be repeated over and over through the Hittite Kingdom's 500-year history, making events during the waning periods difficult to reconstruct. The political instability of these years of the Old Hittite Kingdom can be explained in part by the nature of the Hittite kingship at that time. During the Old Hittite Kingdom prior to 1400 BC, the king of the Hittites was not viewed by his subjects as a "living god" like the Pharaohs of Egypt, but rather as a first among equals. Only in the later period from 1400 BC until 1200 BC did the Hittite kingship become more centralized and powerful. Also in earlier years the succession was not legally fixed, enabling "War of the Roses" style rivalries between northern and southern branches.
The next monarch of note following Mursili I was Telepinu (c. 1500 BC), who won a few victories to the southwest, apparently by allying himself with one Hurrian state (Kizzuwatna) against another (Mitanni). Telepinu also attempted to secure the lines of succession.
The last monarch of the Old Kingdom, Telepinu, reigned until about 1500 BC. Telepinu's reign marked the end of the "Old Kingdom" and the beginning of the lengthy weak phase known as the "Middle Kingdom". The 15th century BC is largely unknown, with very sparse surviving records. Part of the reason for both the weakness and the obscurity is that the Hittites were under constant attack, mainly from the Kaska, a non-Indo-European people settled along the shores of the Black Sea. The capital once again went on the move, first to Sapinuwa and then to Samuha. There is an archive in Sapinuwa, but it has not yet been adequately translated.
The Middle Kingdom segues into the "Hittite Empire period" proper, which dates from the reign of Tudhaliya I (c. 1430 BC).
One innovation that can be credited to these early Hittite rulers is the practice of conducting treaties and alliances with neighboring states; the Hittites were thus among the earliest known pioneers in the art of international politics and diplomacy. This is also when the Hittite religion adopted several gods and rituals from the Hurrians.
With the reign of Tudhaliya I (who may actually not have been the first of that name; see also Tudhaliya), the Hittite Kingdom re-emerged from the fog of obscurity. Hittite civilization entered the period of time called the "Hittite Empire period". Many changes were afoot during this time, not the least of which was a strengthening of the kingship. Settlement of the Hittites progressed in the Empire period. However, the Hittite people tended to settle in the older lands of south Anatolia rather than the lands of the Aegean. As this settlement progressed, treaties were signed with neighboring peoples. During the Hittite Empire period the kingship became hereditary and the king took on a "superhuman aura" and began to be referred to by the Hittite citizens as "My Sun". The kings of the Empire period began acting as a high priest for the whole kingdom—making an annual tour of the Hittite holy cities, conducting festivals and supervising the upkeep of the sanctuaries.
During his reign (c. 1400 BC), King Tudhaliya I, again allied with Kizzuwatna, then vanquished the Hurrian states of Aleppo and Mitanni, and expanded to the west at the expense of Arzawa (a Luwian state).
Another weak phase followed Tudhaliya I, and the Hittites' enemies from all directions were able to advance even to Hattusa and raze it. However, the Kingdom recovered its former glory under Šuppiluliuma I (c. 1350 BC), who again conquered Aleppo, reduced Mitanni to vassalage under his son-in-law, and defeated Carchemish, another Amorite city-state. With his own sons placed over all of these new conquests and Babylonia still in the hands of the allied Kassites, this left Šuppiluliuma the supreme power broker in the known world, alongside Assyria and Egypt, and it was not long before Egypt sought an alliance through the marriage of another of his sons to the widow of Tutankhamen. That son was evidently murdered before reaching his destination, and the alliance was never consummated. However, the Middle Assyrian Empire (1365–1050 BC) once more began to grow in power with the ascension of Ashur-uballit I in 1365 BC. Ashur-uballit I attacked and defeated Mattiwaza, the Mitanni king, despite attempts by the Hittite king Šuppiluliuma I, now fearful of growing Assyrian power, to preserve the Mitanni throne with military support. The lands of the Mitanni and Hurrians were duly appropriated by Assyria, enabling it to encroach on Hittite territory in eastern Asia Minor, and Adad-nirari I annexed Carchemish and northeast Syria from the control of the Hittites.
After Šuppiluliuma I, and a very brief reign by his eldest son, another son, Mursili II, became king (c. 1330 BC). Having inherited a position of strength in the east, Mursili was able to turn his attention to the west, where he attacked Arzawa and a city known as Millawanda (Miletus), which was under the control of Ahhiyawa. More recent research, based on new readings and interpretations of the Hittite texts as well as on the material evidence for Mycenaean contacts with the Anatolian mainland, has concluded that "Ahhiyawa" referred to Mycenaean Greece, or at least to a part of it.
Hittite prosperity was mostly dependent on control of the trade routes and metal sources. Because of the importance of northern Syria to the vital routes linking the Cilician gates with Mesopotamia, defense of this area was crucial, and it was soon put to the test by Egyptian expansion under Pharaoh Ramesses II, culminating in the Battle of Kadesh. The outcome of the battle is uncertain, though it seems that the timely arrival of Egyptian reinforcements prevented total Hittite victory. The Egyptians forced the Hittites to take refuge in the fortress of Kadesh, but their own losses prevented them from sustaining a siege. The battle took place in the 5th year of Ramesses (c. 1274 BC by the most commonly used chronology).
After this date, the power of both the Hittites and the Egyptians began to decline yet again because of the rising power of the Assyrians. The Assyrian king Shalmaneser I had seized the opportunity to vanquish Hurria and Mitanni, occupy their lands, and expand up to the head of the Euphrates in Anatolia and into Babylonia, Ancient Iran, Aram (Syria), Canaan (Palestine) and Phoenicia, while Muwatalli was preoccupied with the Egyptians. The Hittites had vainly tried to preserve the Mitanni kingdom with military support. Assyria now posed just as great a threat to Hittite trade routes as Egypt ever had. Muwatalli's son, Urhi-Teshub, took the throne and ruled as king for seven years as Mursili III before being ousted by his uncle, Hattusili III, after a brief civil war. In response to increasing Assyrian annexation of Hittite territory, Hattusili concluded a peace and alliance with Ramesses II (also fearful of Assyria), presenting his daughter's hand in marriage to the Pharaoh. The "Treaty of Kadesh", one of the oldest completely surviving treaties in history, fixed their mutual boundaries in southern Canaan and was signed in the 21st year of Ramesses (c. 1258 BC). Terms of this treaty included the marriage of one of the Hittite princesses to Ramesses.
Hattusili's son, Tudhaliya IV, was the last strong Hittite king, able to keep the Assyrians out of the Hittite heartland to some degree at least, though he too lost much territory to them and was heavily defeated by Tukulti-Ninurta I of Assyria in the Battle of Nihriya. He even temporarily annexed the island of Cyprus, before that too fell to Assyria. The very last king, Šuppiluliuma II, also managed to win some victories, including a naval battle against Alashiya off the coast of Cyprus. But the Assyrians, under Ashur-resh-ishi I, had by this time annexed much Hittite territory in Asia Minor and Syria, in the process driving out and defeating the Babylonian king Nebuchadnezzar I, who also had eyes on Hittite lands. The Sea Peoples had already begun their push down the Mediterranean coastline, starting from the Aegean and continuing all the way to Canaan, founding the state of Philistia—taking Cilicia and Cyprus away from the Hittites en route and cutting off their coveted trade routes. This left the Hittite homelands vulnerable to attack from all directions, and Hattusa was burnt to the ground sometime around 1180 BC following a combined onslaught from new waves of invaders: the Kaskas, Phrygians and Bryges. The Hittite Kingdom thus vanished from historical records, much of its territory being seized by Assyria. Alongside these attacks, many internal issues also contributed to the end of the Hittite kingdom, which was part of the larger Bronze Age Collapse.
By 1160 BCE, the political situation in Asia Minor looked vastly different from that of only 25 years earlier. In that year, the Assyrian king Tiglath-Pileser I was defeating the "Mushki" (Phrygians) who had been attempting to press into Assyrian colonies in southern Anatolia from the Anatolian highlands, and the Kaska people, the Hittites' old enemies from the northern hill-country between Hatti and the Black Sea, seem to have joined them soon after. The Phrygians had apparently overrun Cappadocia from the West, with recently discovered epigraphic evidence confirming their origins as the Balkan "Bryges" tribe, forced out by the Macedonians.
Although the Hittite kingdom disappeared from Anatolia at this point, there emerged a number of so-called Syro-Hittite states in Anatolia and northern Syria. They were the successors of the Hittite Kingdom. The most notable Syrian Neo-Hittite kingdoms were those at Carchemish and Melid. These Syro-Hittite states gradually fell under the control of the Neo-Assyrian Empire (911–608 BCE). Carchemish and Melid were made vassals of Assyria under Shalmaneser III (858–823 BCE), and fully incorporated into Assyria during the reign of Sargon II (722–705 BCE).
A large and powerful state known as Tabal occupied much of southern Anatolia. Known to the Greeks as "Tibarenoi", in Latin as "Tibareni", and as "Thobeles" in Josephus, its people may have spoken Luwian, as attested by monuments written in Anatolian hieroglyphs. This state too was conquered and incorporated into the vast Neo-Assyrian Empire.
Ultimately, both Luwian hieroglyphs and cuneiform were rendered obsolete by an innovation, the alphabet, which seems to have entered Anatolia simultaneously from the Aegean (with the Bryges, who changed their name to Phrygians), and from the Phoenicians and neighboring peoples in Syria.
The head of the Hittite state was the king, followed by the heir-apparent. The king was the supreme ruler of the land, serving as military commander, judicial authority, and high priest. However, some officials exercised independent authority over various branches of the government. One of the most important of these posts in Hittite society was that of the "gal mesedi" (Chief of the Royal Bodyguards). It was superseded by the rank of the "gal gestin" (Chief of the Wine Stewards), who, like the "gal mesedi", was generally a member of the royal family. The kingdom's bureaucracy was headed by the "gal dubsar" (Chief of the Scribes), whose authority did not extend over the "Lugal Dubsar", the king's personal scribe.
Egyptian inscriptions dating from before the days of the Exodus record Egyptian monarchs engaging with two chief Hittite seats, located at Kadesh (a Hittite city on the Orontes River) and Carchemish (on the Euphrates River in southern Anatolia).
The Central Anatolian settlement of Ankuwa, home of the pre-Hittite goddess Kattaha and of the worship of other Hattic deities, illustrates the ethnic differences in the areas the Hittites tried to control. Kattaha was originally given the name Hannikkun. The use of the name Kattaha over Hannikkun, according to Ronald Gorny (head of the Alisar regional project in Turkey), was a device to downgrade the pre-Hittite identity of this female deity and to bring her more in touch with the Hittite tradition. The Hittites' reconfiguration of gods throughout their early history, as with Kattaha, was a way of legitimizing their authority and avoiding conflicting ideologies in newly incorporated regions and settlements. By transforming local deities to fit their own customs, the Hittites hoped that these communities would accept the changes and become better suited to Hittite political and economic goals.
In 1595 BCE, King Mursili I marched into the city of Babylon and sacked it. For fear of revolts at home he did not remain there long, quickly returning to his capital of Hattusa. On his journey back to Hattusa, he was assassinated by his brother-in-law Hantili I, who then took the throne. Hantili escaped multiple murder attempts on himself; his family, however, did not. His wife, Harapsili, and her son were murdered. In addition, other members of the royal family were killed by Zidanta I, who was in turn murdered by his own son, Ammuna. All of this internal unrest among the Hittite royal family led to a decline of power, allowing surrounding kingdoms, such as the Hurrians, to have success against Hittite forces and become the center of power in the Anatolian region.
King Telepinu (reigned until c. 1500 BCE) is considered to be the last king of the Old Kingdom of the Hittites. He seized power during a dynastic power struggle. During his reign, he sought to curb lawlessness and to regulate royal succession, and to that end issued the Edict of Telepinu. In this edict, he designated the "pankus", a general assembly that acted as a high court. Crimes such as murder were observed and judged by the pankus, and kings too were subject to its jurisdiction. The pankus also served as an advisory council for the king. The rules and regulations set out by the edict, and the establishment of the pankus, proved very successful and lasted all the way through to the New Kingdom in the 14th century BCE.
The pankus established a legal code in which violence was not the punishment for a crime. Crimes such as murder and theft, which were punishable by death in other southwest Asian kingdoms at this time, were not capital offenses under the Hittite law code. Most penalties for crimes involved restitution. For example, in cases of thievery, the punishment would be to repay what was stolen in equal value.
The Hittite language is recorded fragmentarily from about the 19th century BC (in the Kültepe texts, see "Ishara"). It remained in use until about 1100 BC. Hittite is the best attested member of the Anatolian branch of the Indo-European language family, and the Indo-European language for which the earliest surviving written attestation exists, with isolated Hittite loanwords and numerous personal names appearing in an Old Assyrian context from as early as the 20th century BC.
The language of the Hattusa tablets was eventually deciphered by a Czech linguist, Bedřich Hrozný (1879–1952), who, on 24 November 1915, announced his results in a lecture at the Near Eastern Society of Berlin. His book about the discovery was printed in Leipzig in 1917, under the title "The Language of the Hittites; Its Structure and Its Membership in the Indo-European Linguistic Family". The preface of the book begins with:
The decipherment famously led to the confirmation of the laryngeal theory in Indo-European linguistics, which had been predicted several decades before. Due to marked differences in its structure and phonology, some early philologists, most notably Warren Cowgill, had even argued that it should be classified as a sister language to the Indo-European languages (Indo-Hittite), rather than a daughter language. By the end of the Hittite Empire, Hittite had become a written language of administration and diplomatic correspondence; the population of most of the Hittite Empire by this time spoke Luwian, another Indo-European language of the Anatolian family that had originated to the west of the Hittite region.
According to Craig Melchert, the current tendency is to suppose that Proto-Indo-European evolved, and that the "prehistoric speakers" of Anatolian became isolated "from the rest of the PIE speech community, so as not to share in some common innovations." Hittite, as well as its Anatolian cousins, split off from Proto-Indo-European at an early stage, thereby preserving archaisms that were later lost in the other Indo-European languages.
In Hittite there are many loanwords, particularly religious vocabulary, from the non-Indo-European Hurrian and Hattic languages. The latter was the language of the Hattians, the local inhabitants of the land of Hatti before being absorbed or displaced by the Hittites. Sacred and magical texts from Hattusa were often written in Hattic, Hurrian, and Luwian, even after Hittite became the norm for other writings.
Given the size of the empire, there are relatively few remains of Hittite art. These include some impressive monumental carvings, a number of rock reliefs, as well as metalwork, in particular the Alaca Höyük bronze standards, carved ivory, and ceramics, including the Hüseyindede vases. The Sphinx Gates of Alaca Höyük and Hattusa, with the monument at the spring of Eflatun Pınar, are among the largest constructed sculptures, along with a number of large recumbent lions, of which the "Lion of Babylon" statue at Babylon is the largest, if it is indeed Hittite. Unfortunately, nearly all are notably worn. Rock reliefs include the Hanyeri relief, and Hemite relief. The Niğde Stele is a Neo-Hittite monument from the modern Turkish city of Niğde, which dates from the end of the 8th century BC.
Hittite religion and mythology were heavily influenced by their Hattic, Mesopotamian, and Hurrian counterparts. In earlier times, Indo-European elements may still be clearly discerned.
Storm gods were prominent in the Hittite pantheon. Tarhunt (Hurrian Teshub) was referred to as 'The Conqueror', 'The King of Kummiya', 'King of Heaven' and 'Lord of the land of Hatti'. He was chief among the gods, and his symbol was the bull. As Teshub he was depicted as a bearded man astride two mountains and bearing a club. He was the god of battle and victory, especially when the conflict involved a foreign power. Teshub was also known for his conflict with the serpent Illuyanka.
The Hittite gods are also honoured with festivals, such as Puruli in the spring, the "nuntarriyashas" festival in the autumn, and the KI.LAM festival of the gate house where images of the Storm God and up to thirty other idols were paraded through the streets.
Hittite laws, much like other records of the empire, are recorded on cuneiform tablets made from baked clay. What is understood to be the Hittite Law Code comes mainly from two clay tablets, each containing 186 articles, which together are a collection of practiced laws from across the early Hittite Kingdom. In addition to the tablets, monuments bearing Hittite cuneiform inscriptions can be found in central Anatolia describing the government and law codes of the empire. The tablets and monuments date from the Old Hittite Kingdom (1650–1500 BC) to what is known as the New Hittite Kingdom (1500–1180 BC). Between these periods, different translations can be found that modernize the language and introduce a series of legal reforms in which many crimes are given more humane punishments. These changes could possibly be attributed to the rise of new and different kings throughout the history of the empire, or to new translations that changed the language used in the law codes. In either case, the law codes of the Hittites provide very specific fines or punishments to be issued for specific crimes, and they have many similarities to Biblical laws found in the books of Exodus and Deuteronomy. In addition to criminal punishments, the law codes also provide instruction for certain situations such as inheritance and death.
The law articles used by the Hittites most often outline very specific crimes or offenses, either against the state or against other individuals, and provide a sentence for these offenses. The laws carved in the tablets are an assembly of established social conventions from across the empire. Hittite laws of this time show a prominent lack of equality in punishments. In many cases, distinct punishments or compensations for men and women are listed. Free men most often received more compensation for offenses against them than free women did. Slaves, male or female, had very few rights and could easily be punished or executed by their masters for crimes. Most articles describe destruction of property and personal injury, for which the most common sentence was payment in compensation for the lost property. Again, in these cases men oftentimes received a greater amount of compensation than women. Other articles describe how marriages between slaves and free individuals should be handled: in any case of separation or estrangement, the free individual, male or female, would keep all but one child that resulted from the marriage. Another thing to note is that homosexuality is not mentioned in any of the law articles of the Hittite Empire.
Cases in which capital punishment is recommended in the articles most often seem to come from pre-reform sentences for severe crimes and prohibited sexual pairings. Many of these cases include public torture and execution as punishment for serious crimes against religion. Most of these sentences would begin to go away in the later stages of the Hittite Empire as major law reforms began to occur.
While different translations of laws can be seen throughout the history of the empire, the Hittite outlook on law was originally founded on religion, and the laws were intended to preserve the authority of the state. Additionally, punishments had the goal of crime prevention and the protection of individual property rights. The goal of crime prevention can be seen in the severity of the punishments issued for certain crimes: capital punishment and torture are specifically prescribed for more severe crimes against religion, and harsh fines for the loss of private property or life. The tablets also describe the ability of the king to pardon certain crimes, but specifically prohibit the pardoning of murder.
At some point in the 16th or 15th century BC, the Hittite law codes moved away from torture and capital punishment toward more humane forms of punishment, such as fines. Where the old law system was based on retaliation and retribution for crimes, the new system saw punishments that were much milder, favoring monetary compensation over physical or capital punishment. Why these drastic reforms happened is not entirely clear, but it is likely that punishing murder with execution was deemed not to benefit any individual or family involved. The reforms were not confined to capital punishment: where major fines were to be paid, a severe reduction in penalty can also be seen. For example, prior to these reforms, the payment for the theft of an animal was thirty times the animal's value; after the reforms, the penalty was reduced to half the original fine, that is, fifteen times the animal's value. Simultaneously, attempts to modernize the language and change the verbiage used in the law codes can be seen during this period of reform.
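As a purely illustrative sketch of the arithmetic above (the function name and the sample values are ours, not taken from the tablets), the pre- and post-reform theft fines can be expressed as:

```python
# Illustrative only: the tablets specify multipliers, not this function.
def theft_fine(animal_value, reformed=False):
    """Fine for stealing an animal of the given value.

    Pre-reform: thirty times the animal's value.
    Post-reform: half the original fine, i.e. fifteen times the value.
    """
    multiplier = 30
    if reformed:
        multiplier //= 2  # "reduced to half the original fine"
    return animal_value * multiplier
```

For an animal worth 10 units, the fine falls from 300 units before the reforms to 150 units after them.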
Under both the old and reformed Hittite law codes, three main types of punishment can be seen: death, torture, or compensation/fines. The articles outlined on the cuneiform tablets provide very specific punishments for crimes committed against the Hittite religion or against individuals. In many, but not all, cases, articles describing similar laws are grouped together. More than a dozen consecutive articles describe what are known to be permitted and prohibited sexual pairings. These pairings mostly describe men (sometimes specifically referred to as free men, sometimes just men in general) having relations, be they consensual or not, with animals, step-family, relatives of spouses, or concubines. Many of these articles do not provide specific punishments, but, prior to the law reforms, crimes against religion were most often punishable by death. These include incestuous marriages and sexual relations with certain animals. For example, one article states, "If a man has sexual relations with a cow, it is an unpermitted sexual pairing: he will be put to death." Similar relations with horses and mules were not subject to capital punishment, but the offender could not become a priest afterwards. Actions at the expense of other individuals most often saw the offender paying some sort of compensation, be it in the form of money, animals, or land. These actions could include the destruction of farmlands, death or injury of livestock, or assault of an individual. Several articles also specifically mention acts of the gods: if an animal died under certain circumstances, the individual could claim that it died by the hand of a god, and, having sworn that this claim was true, was apparently exempt from paying compensation to the animal's owner. Injuries inflicted upon animals owned by another individual were almost always compensated, either with direct payment or by trading the injured animal for a healthy one owned by the offender.
Not all laws prescribed in the tablets deal with criminal punishment. For example, instructions on the marriage of slaves and the division of their children are given in a group of articles: "The slave woman shall take most of the children, with the male slave taking one child." Similar instructions are given for marriages between free individuals and slaves. Other articles describe how the breaking of engagements is to be handled.
The Bible refers to "Hittites" in several passages, ranging from Genesis to the post-Exilic Ezra–Nehemiah. The Hittites are usually depicted as a people living among the Israelites—Abraham purchases the Patriarchal burial-plot of Machpelah from "Ephron HaChiti", Ephron the Hittite; and Hittites serve as high military officers in David's army. In 2 Kings 7:6, however, they are a people with their own kingdoms (the passage refers to "kings" in the plural), apparently located outside geographic Canaan, and sufficiently powerful to put a Syrian army to flight.
It is a matter of considerable scholarly debate whether the biblical "Hittites" signified any or all of: 1) the original Hattians; 2) their Indo-European conquerors, who retained the name "Hatti" for Central Anatolia, and are today referred to as the "Hittites" (the subject of this article); or 3) a Canaanite group who may or may not have been related to either or both of the Anatolian groups, and who also may or may not be identical with the later Syro-Hittite states.
Other biblical scholars (following Max Müller) have argued that, rather than being connected with Heth, son of Canaan, the Anatolian land of "Hatti" was instead mentioned in Old Testament literature and apocrypha as "Kittim" (Chittim), a people said to be named for a son of Javan.
Hormone
A hormone (from the Greek participle ὁρμῶν, "setting in motion") is any member of a class of signaling molecules, produced by glands in multicellular organisms, that are transported by the circulatory system to target distant organs to regulate physiology and behavior. Hormones have diverse chemical structures, mainly of three classes: eicosanoids, steroids, and amino acid/protein derivatives (amines, peptides, and proteins).
The glands that secrete hormones comprise the endocrine signaling system. The term "hormone" is sometimes extended to include chemicals produced by cells that affect the same cell (autocrine or intracrine signaling) or nearby cells (paracrine signalling).
Hormones serve to communicate between organs and tissues for physiological regulation and behavioral activities such as digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress induction, growth and development, movement, reproduction, and mood manipulation. Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to the receptor, it results in the activation of a signal transduction pathway that typically activates gene transcription, resulting in increased expression of target proteins; non-genomic effects are more rapid, and can be synergistic with genomic effects. Amino acid–based hormones (amines and peptide or protein hormones) are water-soluble and act on the surface of target cells via second messengers; steroid hormones, being lipid-soluble, move through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei.
Hormone secretion may occur in many tissues. Endocrine glands provide the cardinal example, but specialized cells in various other organs also secrete hormones. Hormone secretion occurs in response to specific biochemical signals from a wide range of regulatory systems. For instance, serum calcium concentration affects parathyroid hormone synthesis; blood sugar (serum glucose concentration) affects insulin synthesis; and because the outputs of the stomach and exocrine pancreas (the amounts of gastric juice and pancreatic juice) become the input of the small intestine, the small intestine secretes hormones to stimulate or inhibit the stomach and pancreas based on how busy it is. Regulation of hormone synthesis of gonadal hormones, adrenocortical hormones, and thyroid hormones often depends on complex sets of direct-influence and feedback interactions involving the hypothalamic-pituitary-adrenal (HPA), -gonadal (HPG), and -thyroid (HPT) axes.
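The feedback regulation described above (for example, blood sugar affecting insulin synthesis) can be sketched as a toy negative-feedback loop. This is a deliberately simplified illustration, not a physiological model; the function name, the set point, and the sensitivity constant are all invented for the example:

```python
# Toy negative-feedback loop loosely modeled on glucose/insulin
# regulation. All numbers are illustrative, not physiological.
def simulate(glucose, setpoint=90.0, sensitivity=0.1, steps=50):
    """Each step, 'insulin' secretion rises with glucose above the set
    point, and insulin in turn lowers glucose (negative feedback)."""
    for _ in range(steps):
        insulin = max(0.0, sensitivity * (glucose - setpoint))
        glucose -= insulin  # insulin promotes glucose uptake
    return glucose
```

Starting above the set point (e.g. `simulate(140.0)`), each step shrinks the deviation by 10%, so the loop drives the value back toward 90; at or below the set point, no "insulin" is secreted and the value is left unchanged.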
Upon secretion, certain hormones, including protein hormones and catecholamines, are water-soluble and are thus readily transported through the circulatory system. Other hormones, including steroid and thyroid hormones, are lipid-soluble; to achieve widespread distribution, these hormones must bind to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand-protein complexes. Some hormones are completely active when released into the bloodstream (as is the case for insulin and growth hormones), while others are prohormones that must be activated in specific cells through a series of activation steps that are commonly highly regulated. The endocrine system secretes hormones directly into the bloodstream, typically via fenestrated capillaries, whereas the exocrine system secretes its hormones indirectly using ducts. Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue.
Hormonal signaling involves a series of steps: biosynthesis of the hormone in a particular tissue, storage and secretion, transport to the target cell, recognition by a specific receptor, relay and amplification of the signal, and eventual breakdown of the hormone.
Hormone-producing cells are typically of a specialized cell type, residing within a particular endocrine gland, such as the thyroid gland, ovaries, and testes. Hormones exit their cell of origin via exocytosis or another means of membrane transport. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types residing within a number of different tissues, as is the case for insulin, which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal.
The discovery of hormones and endocrine signaling occurred during studies of how the digestive system regulates its activities, as explained at Secretin § Discovery.
Arnold Adolph Berthold was a German physiologist and zoologist who, in 1849, investigated the function of the testes. He noticed that castrated roosters did not have the same sexual behaviors as roosters with their testes intact, and he ran an experiment on male roosters to examine this phenomenon. He kept one group of roosters with their testes intact, and saw that they had normal-sized wattles and combs (secondary sexual organs), a normal crow, and normal sexual and aggressive behaviors. In a second group, which had their testes surgically removed, the secondary sexual organs were decreased in size, the crow was weak, and the birds showed no sexual attraction towards females and no aggression. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity; these roosters behaved and looked normal, showing that the location of the testes does not matter. He then wanted to see whether a genetic factor in the testes was responsible for these functions. He transplanted a testis from another rooster into a rooster with one testis removed, and saw that these birds, too, had normal behavior and physical anatomy. Berthold concluded that neither the location nor genetic factors of the testes matter in relation to sexual organs and behaviors, but that some chemical secreted by the testes causes this phenomenon. This factor was later identified as the hormone testosterone.
William Bayliss and Ernest Starling, a physiologist and biologist, respectively, wanted to see whether the nervous system had an impact on the digestive system. They knew that the pancreas secreted digestive fluids after the passage of food from the stomach to the intestines, which they believed to be controlled by the nervous system. After cutting the nerves to the pancreas in an animal model, they discovered that it was not nerve impulses that controlled secretion from the pancreas; instead, a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This factor was named secretin, a hormone, although Starling did not coin the term "hormone" until 1905.
Hormonal effects depend on where hormones are released, as they can be released in different manners. Not all hormones are released from a cell into the bloodstream before binding a receptor on a target; some act on nearby cells or on the secreting cell itself. The major types of hormone signaling are:
As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms (plants, animals, fungi, brown algae, and red algae). These compounds occur also in unicellular organisms, and may act as signaling molecules; however, there is no agreement that these molecules can be called hormones.
Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone, a sesquiterpenoid.
Examples include abscisic acid, auxin, cytokinin, ethylene, and gibberellin.
Most hormones initiate a cellular response by initially binding to either cell membrane associated or intracellular receptors. A cell may have several different receptor types that recognize the same hormone but activate different signal transduction pathways, or a cell may have several different receptors that recognize different hormones and activate the same biochemical pathway.
Receptors for most peptide as well as many eicosanoid hormones are embedded in the plasma membrane at the surface of the cell and the majority of these receptors belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction, often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism.
For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors. To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences, regulating the expression of certain genes, and thereby increasing the levels of the proteins encoded by these genes. However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane.
Hormones have the following effects on the body:
A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis.
The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone.
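The effect-driven loop described above can be sketched as a toy simulation. All units and rate constants below are illustrative assumptions, not physiological values; the point is only that secretion is throttled by the hormone's downstream effect rather than by the hormone concentration itself.

```python
# Toy model of effect-triggered negative feedback: secretion responds to
# the hormone's *effect*, not to hormone concentration directly.
# Rate constants and the set point are illustrative, not physiological.

def simulate(steps=200, dt=0.1, set_point=1.0):
    hormone, effect = 0.0, 0.0
    for _ in range(steps):
        secretion = max(0.0, 1.0 - effect / set_point)  # feedback on the effect
        hormone += (secretion - 0.5 * hormone) * dt     # synthesis minus clearance
        effect += (hormone - 0.8 * effect) * dt         # effect tracks hormone
    return hormone, effect

hormone, effect = simulate()  # both settle at a steady state below the set point
```

In this sketch the loop settles where secretion exactly balances clearance; raising hormone concentration without a corresponding change in its effect would not, in this model, alter the feedback.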
Hormone secretion can be stimulated and inhibited by:
One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands. For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid, which increases output of thyroid hormones.
To release active hormones quickly into the circulation, hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones. These can then be quickly converted into their active hormone form in response to a particular stimulus.
Eicosanoids are considered local hormones because they have specific effects on target cells close to their site of formation. They also degrade rapidly, ensuring they do not reach distant sites within the body.
Hormones are also regulated by receptor agonists. Hormones are ligands, which are any kinds of molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone.
Many hormones and their structural and functional analogs are used as medication. The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT), thyroxine (as levothyroxine, for hypothyroidism) and steroids (for autoimmune diseases and several respiratory disorders). Insulin is used by many diabetics. Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline, while steroid and vitamin D creams are used extensively in dermatological practice.
A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical usage referring to an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation.
At the neurological level, behavior can be inferred based on: hormone concentrations; hormone-release patterns; the numbers and locations of hormone receptors; and the efficiency of hormone receptors for those involved in gene transcription. Not only do hormones influence behavior, but also behavior and the environment influence hormones. Thus, a feedback loop is formed. For example, behavior can affect hormones, which in turn can affect behavior, which in turn can affect hormones, and so on.
Three broad stages of reasoning may be used when determining hormone-behavior interactions:
There are various clear distinctions between hormones and neurotransmitters:
Neurohormones are a type of hormone that are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone. Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target.
Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. Forming a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormones is created, which evens out variations in the concentration of unbound hormones (bound hormones replace unbound hormones as these are eliminated).
Hammond organ
The Hammond organ is an electric organ invented by Laurens Hammond and John M. Hanert and first manufactured in 1935. Various models have been produced, most of which use sliding drawbars to vary sounds. Until 1975, Hammond organs generated sound by creating an electric current from rotating a metal tonewheel near an electromagnetic pickup, and then strengthening the signal with an amplifier to drive a speaker cabinet. The organ is commonly used with the Leslie speaker.
Around two million Hammond organs have been manufactured. The organ was originally marketed by the Hammond Organ Company to churches as a lower-cost alternative to the wind-driven pipe organ, or instead of a piano. It quickly became popular with professional jazz musicians in organ trios, small groups centered on the Hammond organ. Jazz club owners found that organ trios were cheaper than hiring a big band. Jimmy Smith's use of the Hammond B-3, with its additional harmonic percussion feature, inspired a generation of organ players, and its use became more widespread in the 1960s and 1970s in rhythm and blues, rock, reggae, and progressive rock.
In the 1970s, the Hammond Organ Company abandoned tonewheels and switched to integrated circuits. These organs were less popular, and the company went out of business in 1985. The Hammond name was purchased by the Suzuki Musical Instrument Corporation, which proceeded to manufacture digital simulations of the most popular tonewheel organs. This culminated in the production of the "New B-3" in 2002, a recreation of the original B-3 organ using digital technology. Hammond-Suzuki continues to manufacture a variety of organs for both professional players and churches. Companies such as Korg, Roland, and Clavia have achieved success in providing more lightweight and portable emulations of the original tonewheel organs. The sound of a tonewheel Hammond can be emulated using modern software such as Native Instruments B4.
A number of distinctive Hammond organ features are not usually found on other keyboards like the piano or synthesizer. Some are similar to a pipe organ, but others are unique to the instrument.
Most Hammond organs have two 61-note (five-octave) keyboards called manuals. As with pipe organ keyboards, the two manuals are arrayed on two levels close to each other. Each is laid out in a similar manner to a piano keyboard, except that pressing a key on a Hammond results in the sound continuously playing until it is released, whereas with a piano, the note's volume decays. No difference in volume occurs regardless of how heavily or lightly the key is pressed (unlike with a piano), so overall volume is controlled by a pedal (also known as a "swell" or "expression" pedal). The keys on each manual have a lightweight action, which allows players to perform rapid passages more easily than on a piano. In contrast to piano and pipe organ keys, Hammond keys have a flat-front profile, commonly referred to as "waterfall" style. Early Hammond console models had sharp edges, but starting with the B-2, these were rounded, as they were cheaper to manufacture. The M series of spinets also had waterfall keys (which has subsequently made them ideal for spares on B-3s and C-3s), but later spinet models had "diving board" style keys which resembled those found on a church organ. Modern Hammond-Suzuki models use waterfall keys.
Hammond console organs come with a wooden pedalboard played with the feet, for bass notes. Most console Hammond pedalboards have 25 notes, with the bottom note a low C and the top note a middle C two octaves higher. Hammond used a 25-note pedalboard because he found that on traditional 32-note pedalboards used in church pipe organs, the top seven notes were seldom used. The Hammond Concert models E, RT, RT-2, RT-3 and D-100 had 32-note American Guild of Organists (AGO) pedalboards going up to the G above middle C as the top note. The RT-2, RT-3 and D-100 also contained a separate solo pedal system that had its own volume control and various other features. Spinet models have 12- or 13-note miniature pedalboards.
The sound on a tonewheel Hammond organ is varied through the manipulation of drawbars. A drawbar is a metal slider that controls the volume of a particular sound component, in a similar way to a fader on an audio mixing board. As a drawbar is incrementally pulled out, it increases the volume of its sound. When pushed all the way in, the volume is decreased to zero.
The labeling of the drawbars derives from the stop system in pipe organs, in which the physical length of the pipe corresponds to the pitch produced. Most Hammonds contain nine drawbars per manual. The drawbar marked "8′" generates the fundamental of the note being played, the drawbar marked "16′" is an octave below, and the drawbars marked "4′", "2′" and "1′" are one, two and three octaves above, respectively. The other drawbars generate various other harmonics and subharmonics of the note. While each individual drawbar generates a relatively pure sound similar to a flute or electronic oscillator, more complex sounds can be created by mixing the drawbars in varying amounts.
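The footage convention can be expressed numerically: an n-foot drawbar sounds at 8/n times the played note's fundamental, following pipe-organ practice in which shorter pipes produce higher pitches. A small sketch (the function name is ours; the footages are the standard nine):

```python
from fractions import Fraction

# The nine standard drawbar footages, lowest-pitched first.
FOOTAGES = [Fraction(16), Fraction(16, 3), Fraction(8), Fraction(4),
            Fraction(8, 3), Fraction(2), Fraction(8, 5), Fraction(4, 3),
            Fraction(1)]

def drawbar_freq(fundamental_hz: float, footage: Fraction) -> float:
    """Pitch sounded by a drawbar: 8/footage times the fundamental."""
    return fundamental_hz * 8 / footage

# With A = 440 Hz on the 8' drawbar:
#   16'    -> 220 Hz (octave below)
#   5 1/3' -> 660 Hz (a fifth above the fundamental)
#   4'     -> 880 Hz,  2' -> 1760 Hz,  1' -> 3520 Hz
```

Mixing several drawbars thus approximates additive synthesis: each one contributes a near-sinusoidal partial at a fixed ratio to the played note.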
Some drawbar settings have become well-known and associated with certain musicians. A very popular setting is 888000000 (i.e., with the drawbars labelled "16′", "5⅓′" and "8′" fully pulled out), and has been identified as the "classic" Jimmy Smith sound.
In addition to drawbars, many Hammond tonewheel organ models also include presets, which make predefined drawbar combinations available at the press of a button. Console organs have one octave of reverse colored keys (naturals are black, sharps and flats are white) to the left of each manual, with each key activating a preset; the far left key (C), also known as the cancel key, de-activates all presets, and results in no sound coming from that manual. The two right-most preset keys (B♭ and B) activate the corresponding set of drawbars for that manual, while the other preset keys produce preselected drawbar settings that are internally wired into the preset panel.
Hammond organs have a built-in vibrato effect that provides a small variation in pitch while a note is being played, and a chorus effect where a note's sound is combined with another sound at a slightly different and varying pitch. The best known vibrato and chorus system consists of six settings, V1, V2, V3, C1, C2 and C3 (i.e., three each of vibrato and chorus), which can be selected via a rotary switch. Vibrato / chorus can be selected for each manual independently.
The B-3 and C-3 models introduced the concept of "Harmonic Percussion", which was designed to emulate the percussive sounds of the harp, xylophone, and marimba. When selected, this feature plays a decaying second- or third-harmonic overtone when a key is pressed. The selected percussion harmonic fades out, leaving the sustained tones the player selected with the drawbars. The volume of this percussive effect is selectable as either normal or soft. Harmonic Percussion retriggers only after all notes have been released, so legato passages sound the effect only on the very first note or chord, making Harmonic Percussion uniquely a "single-trigger", or monophonic effect.
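The single-trigger behavior described above can be modeled with a few lines of state tracking. This is a hypothetical sketch of the logic, not Hammond's actual circuitry: percussion fires only when a key goes down while no other keys are held, so legato passages sound it on the first note only.

```python
# Sketch of "single-trigger" Harmonic Percussion logic (hypothetical model).

class HarmonicPercussion:
    def __init__(self):
        self.held = set()  # notes currently held down

    def key_down(self, note: int) -> bool:
        """Return True if the percussion envelope should retrigger."""
        fire = not self.held  # only fires when no keys are already down
        self.held.add(note)
        return fire

    def key_up(self, note: int) -> None:
        self.held.discard(note)

p = HarmonicPercussion()
p.key_down(60)  # first note: percussion sounds (returns True)
p.key_down(64)  # legato second note: no percussion (returns False)
```

Releasing all keys re-arms the effect, which is why detached playing sounds the percussive attack on every note while legato playing does not.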
Before a Hammond organ can produce sound, the motor that drives the tonewheels must come up to speed. On most models, starting a Hammond organ involves two switches. The "Start" switch engages a dedicated starter motor, which must run for about 12 seconds. Then, the "Run" switch is turned on for about four seconds. The "Start" switch is then released, whereupon the organ is ready to generate sound. The H-100 and E-series consoles and L-100 and T-100 spinet organs, however, had a self-starting motor that required only a single "On" switch. A pitch-bend effect can be created on the Hammond organ by turning the "Run" switch off and on again; this briefly cuts power to the generators, causing them to run more slowly and generate a lower pitch for a short time. Hammond's New B3 contains similar switches to emulate this effect, though it is a digital instrument.
The Hammond organ's technology derives from the Telharmonium, an instrument created in 1897 by Thaddeus Cahill. The telharmonium used revolving electric alternators which generated tones that could be transmitted over wires. The instrument was bulky enough to require several railway cars for its transportation, because the alternators had to be large enough to generate high voltage for a loud enough signal. The Hammond organ solved this problem by using an amplifier.
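The pitch a tonewheel produces follows from simple arithmetic: the number of high spots (teeth) on the wheel multiplied by its rotation rate gives the frequency induced in the pickup. The figures below are illustrative only, not Hammond's actual tooth counts or gear ratios.

```python
# Tonewheel pitch: teeth passing the pickup per second = output frequency.
# Illustrative numbers, not Hammond's actual gearing.

def tonewheel_hz(teeth: int, rev_per_sec: float) -> float:
    """Frequency induced in the pickup by a spinning tonewheel."""
    return teeth * rev_per_sec

# e.g. a 16-tooth wheel at 27.5 revolutions per second yields 440.0 Hz;
# doubling the tooth count at the same speed yields the octave above.
```

Because every wheel is geared to one synchronous motor, all pitches stay locked in tune with each other and with the mains frequency.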
Laurens Hammond graduated from Cornell University with a mechanical engineering degree in 1916. By the start of the 1920s, he had designed a spring-driven clock, which provided enough sales for him to start his own business, the Hammond Clock Company, in 1928. As well as clocks, his early inventions included three-dimensional glasses and an automatic bridge table shuffler. However, as the Great Depression continued into the 1930s, sales of the bridge table declined and he decided to look elsewhere for a commercially successful product. Hammond was inspired to create the tonewheel or "phonic wheel" by listening to the moving gears of his electric clocks and the tones produced by them. He gathered pieces from a second-hand piano he had purchased for $15 and combined it with a tonewheel generator in a similar form to the telharmonium, albeit much shorter and more compact. Since Hammond was not a musician, he asked the company's assistant treasurer, W. L. Lahey, to help him achieve the desired organ sound. To cut costs, Hammond made a pedalboard with only 25 notes, instead of the standard 32 on church organs, and it quickly became a "de facto" standard.
On April 24, 1934, Hammond filed a patent for an "electrical musical instrument", which was personally delivered to the patent office by Hanert, explaining that they could start production immediately and it would be good for local employment in Chicago. The invention was unveiled to the public in April 1935, and the first model, the Model A, was made available in June of that year. Over 1,750 churches purchased a Hammond organ in the first three years of manufacturing, and by the end of the 1930s, over 200 instruments were being made each month. By 1966, an estimated 50,000 churches had installed a Hammond. For all its subsequent success with professional musicians, the original company did not target its products at that market, principally because Hammond did not think enough money was in it.
In 1936, the Federal Trade Commission (FTC) filed a complaint claiming that the Hammond Company made "false and misleading" claims in advertisements for its organ, including that the Hammond could produce "the entire range of tone coloring of a pipe organ". The complaint resulted in lengthy hearing proceedings, which featured a series of auditory tests that pitted a Hammond costing about $2600 against a $75,000 Skinner pipe organ in the University of Chicago Rockefeller Chapel. During the auditory tests, sustained tones and excerpts from musical works were played on the electric and pipe organs while a group of musicians and laymen attempted to distinguish between the instruments. While attorneys for Hammond argued that the test listeners were wrong or guessed nearly half the time, witnesses for the FTC claimed that Hammond employees had unfairly manipulated the Skinner organ to sound more like the Hammond. In 1938, the FTC ordered Hammond to "cease and desist" a number of advertising claims, including that its instrument was equivalent to a $10,000 pipe organ. After the FTC's decision, Hammond claimed that the hearings had vindicated his company's assertions that the organ produced "real", "fine", and "beautiful" music, phrases which were each cited in the FTC's original complaint, but not included in the "cease and desist" order. Hammond also claimed that although the hearing was expensive for his company, the proceedings generated so much publicity that "as a result we sold enough extra organs to cover the expense."
The Hammond Organ Company produced an estimated two million instruments in its lifetime; these have been described as "probably the most successful electronic organs ever made". A key ingredient to the Hammond organ's success was the use of dealerships and a sense of community. Several dedicated organ dealers set up business in the United States and there was a bi-monthly newsletter, "The Hammond Times", mailed out to subscribers. Advertisements tended to show families gathered around the instrument, often with a child playing it, as an attempt to show the organ as a center-point of home life and to encourage children to learn music.
Hammond organs, as manufactured by the original company, can be divided into two main groups:
The first model in production, in June 1935, was the Model A. It contained most of the features that came to be standard on all console Hammonds, including two 61-key manuals, a 25-key pedalboard, an expression pedal, 12 reverse-color preset keys, two sets of drawbars for each manual, and one for the pedals.
To address concerns that the sound of the Hammond was not rich enough to accurately mimic a pipe organ, the model BC was introduced in December 1936. It included a chorus generator, in which a second tonewheel system added slightly sharp or flat tones to the overall sound of each note. The cabinet was made deeper to accommodate this. Production of the old Model A cases stopped, but the older model continued to be available as the AB until October 1938.
Criticism that the Hammond organ was more aesthetically suitable to the home instead of the church led to the introduction of the model C in September 1939. It contained the same internals as the AB or BC, but covered on the front and sides by "modesty panels" to cover female organists' legs while playing in a skirt, often a consideration when a church organ was placed in front of the congregation. The model C did not contain the chorus generator, but had space in the cabinet for it to be fitted. The concurrent model D was a model C with a prefitted chorus. Development of the vibrato system took place during the early 1940s, and was put into production shortly after the end of World War II. The various models available were the BV and CV (vibrato only) and BCV and DV (vibrato and chorus).
The B-2 and C-2, introduced in 1949, allowed vibrato to be enabled or disabled on each manual separately. In 1954, the B-3 and C-3 models were introduced with the additional harmonic percussion feature. Despite several attempts by Hammond to replace them, these two models remained popular and stayed in continuous production through early 1975.
To cater more specifically to the church market, Hammond introduced the Concert Model E in July 1937, which included a full 32-note pedalboard and four electric switches known as toe pistons, allowing various sounds to be selected by the feet. The model E was replaced by the model RT in 1949, which retained the full-sized pedalboard, but otherwise was internally identical to the B and C models. RT-2 and RT-3 models subsequently appeared in line with the B-2/C-2 and B-3/C-3, respectively.
In 1959, Hammond introduced the A-100 series. It was effectively a self-contained version of the B-3/C-3, with an internal power amplifier and speakers. The organ was manufactured in a variety of different chassis, with the last two digits of the specific model number determining the style and finish of the instrument. For example, A-105 was "Tudor styling in light oak or walnut", while the A-143 was "warm cherry finish, Early American styling". This model numbering scheme was used for several other series of console and spinet organs that subsequently appeared. The D-100 series, which provided a self-contained version of the RT-3, followed in 1963.
The E-100 series was a cost-reduced version of the A-100 introduced in 1965, with only one set of drawbars per manual, a reduced number of presets, and a slightly different tone generator. This was followed by the H-100 series, with a redesigned tonewheel generator and various other additional features. The organ was not particularly well made, and suffered a reputation for being unreliable. Hammond service engineer Harvey Olsen said, "When they [H-100s] work, they sound pretty decent. But die-hard enthusiasts won't touch it."
Though the instrument had been originally designed for use in a church, Hammond realized that the amateur home market was a far more lucrative business, and started manufacturing spinet organs in the late 1940s. Outside of the United States, they were manufactured in greater numbers than the consoles, and hence were more widely used. Several different types of M series instruments were produced between 1948 and 1964; they contained two 44-note manuals with one set of drawbars each, and a 12-note pedalboard. The M model was produced from 1948 to 1951, the M-2 from 1951 to 1955, and the M-3 from 1955 to 1964. The M series was replaced by the M-100 series in 1961, which used a numbering system to identify the body style and finish as used on earlier console series. It included the same manuals as the M, but increased the pedalboard size to 13 notes, stretching a full octave, and included a number of presets.
The L-100 series entered production at the same time as the M-100. It was an economy version, with various cost-cutting changes so the organ could retail for under $1000. The vibrato was a simpler circuit than on other consoles and spinets. Two variations of the vibrato were provided, plus a chorus that mixed various vibrato signals together. The expression pedal, based on a cheaper design, was not as sophisticated as on the other organs. The L-100 was particularly popular in the UK and sold well, with several notable British musicians using it instead of a B-3 or C-3.
The T series, produced from 1968 to 1975, was the last of the tonewheel spinet organs. Unlike all the earlier Hammond organs, which used vacuum tubes for preamplification, amplification, percussion and chorus-vibrato control, the T series used all-solid-state transistor circuitry, though, unlike the L-100, it did include the scanner vibrato as seen on the B-3. Apart from the T-100 models, all T-series models included a built-in rotating Leslie speaker, some included an analog drum machine, and the T-500 also included a built-in cassette recorder. It was one of the last tonewheel Hammonds produced.
In the 1960s, Hammond started making transistor organs. The first organ that bridged the gap between tonewheel and transistor was the X-66, introduced in May 1967. The X-66 contained just 12 tonewheels, and used electronics for frequency division. It contained separate "vibrato bass" and "vibrato treble" in an attempt to simulate a Leslie speaker. Hammond designed it as the company's flagship product, in response to market competition and to replace the B-3. However, it was considered expensive at $9,795, did not sound like a B-3, and sold poorly.
Hammond introduced their first integrated circuit (IC) model, the Concorde, in 1971. The company had stopped manufacturing tonewheel organs entirely by 1975, due to increased financial inefficiency, and switched to making IC models full-time. Console models included the 8000 Aurora (1976) and 8000M Aurora (1977), which contained drawbars and a built-in rotating speaker. Spinet organs included the Romance series, manufactured between 1977 and 1983. In 1979, a Japanese offshoot, Nihon Hammond, introduced the X-5, a portable solid-state clone of the B-3.
Laurens Hammond died in 1973, and the company struggled to survive. A 1972 proposal for Roland to acquire the company had been turned down: Roland's Ikutaro Kakehashi did not believe it was practical at that point to move the entire manufacturing operation from Chicago to Japan, and also viewed Hammond's declining sales figures as a problem.
In 1985, Hammond went out of business, though servicing and spares continued to be available after this under the name of The Organ Service Company. In early 1986, the Hammond brand and rights were acquired by Hammond Organ Australia, run by Noel Crabbe.
Then in 1989, the name was purchased by the Suzuki Musical Instrument Corporation, which rebranded the company as Hammond-Suzuki. Although nominally a Japanese company, founder Manji Suzuki was a fan of the instrument and retained several former Hammond Organ Company staff for research and development, and ensured that production would partially remain in the United States. The new company produced its own brand of portable organs, including the XB-2, XB-3 and XB-5. Writing in "Sound on Sound", longtime Hammond enthusiast Rod Spark said these models were "a matter of taste, of course, but I don't think they're a patch on the old ones".
In 2002, Hammond-Suzuki launched the New B-3, a recreation of the original electromechanical instrument using contemporary electronics and a digital tonewheel simulator. The New B-3 is constructed to appear like the original B-3, and the designers attempted to retain the subtle nuances of the familiar B-3 sound. Hammond-Suzuki promotional material states that it would be difficult for even an experienced B-3 player to distinguish between the old and new B-3 organs. A review of the New B-3 by Hugh Robjohns called it "a true replica of an original B-3 ... in terms of the look and layout, and the actual sound." The instrument project nearly stalled after a breakdown in negotiations between Japanese and United States staff, the latter of whom insisted on manufacturing the case in the United States and designing the organ to identical specifications to the original.
The company has since released the XK-3, a single-manual organ using the same digital tonewheel technology as the New B-3. The XK-3 is part of a modular system that allows an integrated lower manual and pedals to be added. In response to some clones, including a variety of vintage keyboards in a single package, Hammond released the SK series of organs, which include grand piano, Rhodes piano, Wurlitzer electric piano, Hohner clavinet, and samples of wind and brass instruments alongside the standard drawbar and tonewheel emulation. "Keyboard" magazine's Stephen Fortner praised the single-manual SK1, saying it gave an accurate sound throughout the range of drawbar settings and calling the organ sound "fat, warm, utterly authentic". The XK-1c model was introduced in early 2014, which is simply an organ-only version of the SK1. An updated flagship organ, the XK-5, was launched in 2016, and a stage keyboard, the SK-X followed in 2019, which allows a player to select an individual instrument (organ, piano or synthesizer) for each manual.
In the US, Hammond manufactures a number of dedicated console organs, including the B-3mk2 and the C-3mk2, and the A-405, a Chapel Console Organ. The company has a dedicated Church Advisory Team that provides a consultancy, so churches can choose the most appropriate instrument.
The authorized loudspeaker enclosure to use with a console organ was the Hammond Tone Cabinet, which housed an external amplifier and speaker in a cabinet. The cabinet carried a balanced mono signal along with the necessary mains power directly from the organ, using a six-pin cable. Spinet organs contained a built-in power amplifier and loudspeakers, so did not require a tone cabinet.
The tone cabinet was originally the only method of adding reverberation to a Hammond organ; reverb was not fitted to older organs. The most commercially successful tone cabinets were probably the PR series, particularly the 40-watt PR40.
Many players prefer to play the Hammond through a rotating speaker cabinet known, after several name changes, as a Leslie speaker, after its inventor Donald J. Leslie. The Leslie system is an integrated speaker/amplifier combination in which sound is emitted by a rotating horn over a stationary treble compression driver, and a rotating baffle beneath a stationary bass woofer. This creates a characteristic sound because of the constantly changing pitch shifts that result from the Doppler effect created by the moving sound sources.
The Leslie was originally designed to mimic the complex tones and constantly shifting sources of sound emanating from a large group of ranks in a pipe organ. The effect varies depending on the speed of the rotors, which can be toggled between fast (tremolo) and slow (chorale) using a console half-moon or pedal switch, with the most distinctive effect occurring as the speaker rotation speed changes. The most popular Leslies were the 122, which accepted a balanced signal suitable for console organs, and the 147, which accepted an unbalanced signal and could be used for spinet organs with a suitable adapter. The Pro-Line series of Leslies, solid-state models designed to be portable for gigging bands, was popular during the 1970s.
Leslie initially tried to sell his invention to Hammond, but Laurens Hammond was unimpressed and declined to purchase it. Hammond modified their interface connectors to be "Leslie-proof", but Leslie quickly engineered a workaround. The Leslie company was sold to CBS in 1965 and was finally bought by Hammond in 1980. Hammond-Suzuki acquired the rights to Leslie in 1992; the company currently markets a variety of speakers under this name. As well as faithful reissues of the original 122 speaker, the company announced in 2013 that they would start manufacturing a standalone Leslie simulator in a stomp box.
Although they are sometimes included in the category of electronic organs, the majority of Hammond organs are, strictly speaking, electric or electromechanical rather than electronic organs, because the sound is produced by moving parts rather than electronic oscillators.
The basic component sound of a Hammond organ comes from a tonewheel. Each one rotates in front of an electromagnetic pickup. The variation in the magnetic field induces a small alternating current at a particular frequency, which represents a signal similar to a sine wave. When a key is pressed on the organ, it completes a circuit of nine electrical switches, which are linked to the drawbars. The position of the drawbars, combined with the switches selected by the key pressed, determines which tonewheels are allowed to sound. Every tonewheel is connected to a synchronous motor via a system of gears, which ensures that each note remains at a constant relative pitch to every other. The combined signal from all depressed keys and pedals is fed through to the vibrato system, which is driven by a metal scanner. As the scanner rotates around a set of pickups, it changes the pitch of the overall sound slightly. From here, the sound is sent to the main amplifier, and on to the audio speakers.
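The additive scheme described above can be sketched in code. The drawbar frequency ratios below are the standard Hammond footages, and treating each drawbar stop as a 3 dB amplitude step is a commonly quoted approximation; both are assumptions of this sketch rather than details from the text:

```python
import math

# Frequency ratios of the nine drawbars relative to the fundamental,
# from the 16' sub-octave up to the 1' eighth harmonic.
DRAWBAR_RATIOS = [0.5, 1.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]

def tonewheel_sample(fundamental_hz, drawbar_settings, t):
    """Instantaneous sum of near-sine partials for one held key.

    drawbar_settings: nine values 0-8; each stop is modelled here
    as a 3 dB amplitude change (an approximation).
    """
    sample = 0.0
    for ratio, setting in zip(DRAWBAR_RATIOS, drawbar_settings):
        if setting == 0:
            continue  # drawbar pushed in: tonewheel not mixed in
        amplitude = 10 ** ((setting - 8) * 3 / 20)  # 3 dB per stop
        sample += amplitude * math.sin(2 * math.pi * fundamental_hz * ratio * t)
    return sample

# The classic "888000000" jazz registration on concert A (440 Hz):
value = tonewheel_sample(440.0, [8, 8, 8, 0, 0, 0, 0, 0, 0], 0.001)
```

In the real instrument the "sum" happens electrically on a common bus, and each partial is a nearly pure tonewheel output rather than a mathematically exact sine.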
The Hammond organ makes technical compromises in the notes it generates. Rather than produce harmonics that are exact integer multiples of the fundamental, it uses the nearest available frequencies generated by the tonewheels, which are tuned to equal temperament. The only guaranteed frequency in a Hammond's tuning is concert A at 440 Hz.
Crosstalk or "leakage" occurs when the instrument's magnetic pickups receive the signal from rotating metal tonewheels other than those selected by the organist. Hammond considered crosstalk a defect that required correcting, and in 1963 introduced a new level of resistor–capacitor filtering to greatly reduce this crosstalk, along with 50–60 Hz mains hum. However, the sound of tonewheel crosstalk is now considered part of the signature of the Hammond organ, to the extent that modern digital clones explicitly emulate it.
Some Hammond organs have an audible pop or click when a key is pressed. Originally, key click was considered a design defect and Hammond worked to eliminate or at least reduce it with equalization filters. However, many performers liked the percussive effect, and it has been accepted as part of the classic sound. Hammond research and development engineer Alan Young said, "the professionals who were playing popular music [liked] that the attack was so prominent. And they objected when it was eliminated."
The original Hammond organ was never designed to be transported regularly. A Hammond B-3 organ, together with its bench and pedalboard, is extremely heavy. This weight, combined with that of a Leslie speaker, makes the instrument cumbersome and difficult to move between venues. This created a demand for a more portable and reliable way of generating the same sound. Electronic and digital keyboards that imitate the sound of the Hammond are referred to as "clonewheel organs".
The first attempts to electronically copy a Hammond appeared in the 1970s, including the Roland VK-1 and VK-9, the Yamaha YP45D, and the Crumar Organiser. The Korg CX-3 (single manual) and BX-3 (dual manual) were the first lightweight organs to produce a comparable sound to the original. "Sound on Sound"'s Gordon Reid said that the CX-3 "came close to emulating the true depth and passion of a vintage Hammond," particularly when played through a Leslie speaker.
The Roland VK-7, introduced in 1997, attempted to emulate the sound of a Hammond using digital signal processing technology. An updated version, the VK-8, which appeared in 2002, also provided emulations of other vintage keyboards and provided a connector for a Leslie. Clavia introduced the Nord Electro in 2001; this used buttons to emulate the physical action of pulling or pushing a drawbar, with an LED graph indicating its current state. Clavia has released several updated versions of the Electro since then, and introduced the Nord Stage with the same technology. The Nord C2D was Clavia's first organ with real drawbars. Diversi, founded by former Hammond-Suzuki sales representative Tom Tuson in 2003, specialises in Hammond clones, and has an endorsement from Joey DeFrancesco.
The Hammond organ has also been emulated in software. One prominent emulator is the Native Instruments B4 series, which has been praised for its attention to detail and choice of features. Emagic (now part of Apple) has also produced a software emulation, the EVB3. This has led to a Hammond organ module with all controls and features of the original instrument in the Logic Pro audio production suite.
Early customers of the Hammond included Albert Schweitzer, Henry Ford, Eleanor Roosevelt, and George Gershwin.
The instrument was not initially favored by classical organ purists, because the tones of two notes an octave apart were in exact synchronization, as opposed to the slight variation present on a pipe organ. However, the instrument did gradually become popular with jazz players. One of the first performers to use the Hammond organ was Ethel Smith, who was known as the "first lady of the Hammond organ". Fats Waller and Count Basie also started using the Hammond. Organist John Medeski thinks the Hammond became "the poor man's big band", and that, as a result, organ trios became more economical to book.
Jimmy Smith began to play Hammond regularly in the 1950s, particularly in his sessions for the Blue Note label between 1956 and 1963. He eschewed a bass player, and played all the bass parts himself using the pedals, generally using a walking bassline on the pedals in combination with percussive left-hand chords. His trio format, composed of organ, guitar, and drums, became internationally famous following an appearance at the Newport Jazz Festival in 1957. Medeski says musicians "were inspired when they heard Jimmy Smith's records." "Brother" Jack McDuff switched from piano to Hammond in 1959, and toured regularly throughout the 1960s and 1970s. In his Hammond playing, Keith Emerson sought partly to replicate the sound achieved by McDuff in his arrangement of "Rock Candy". An admirer of Billy Preston's work also, particularly the 1965 instrumental "Billy's Bag", Emerson limited the use of Leslie because he felt that was Preston's domain at the time, whereas he himself was approaching the instrument with an aesthetic combining "a white European attitude", classical music, and rock.
Booker T. Jones is cited as being the bridge from rhythm and blues to rock. British organist James Taylor said the Hammond "became popular [in the UK] when people such as Booker T. & the M.G.'s and artists on the Stax Records label came over to London and played gigs." Matthew Fisher first encountered the Hammond in 1966, having heard the Small Faces' Ian McLagan playing one. When Fisher asked if he could play it, McLagan told him, "They're yelling out for Hammond players; why don't you go out and buy one for yourself?" Fisher went on to play the organ lines on Procol Harum's "A Whiter Shade of Pale", which topped the UK charts in the summer of 1967. Steve Winwood started his musical career with the Spencer Davis Group playing guitar and piano, but he switched to Hammond when he hired one to record "Gimme Some Lovin'".
Gregg Allman became interested in the Hammond after Mike Finnigan had introduced him to Jimmy Smith's music, and started to write material with it. His brother Duane specifically requested he play the instrument when forming the Allman Brothers Band, and he was presented with a brand new B-3 and Leslie 122RV upon joining. Allman recalls the instrument was cumbersome to transport, particularly on flights of stairs, which often required the whole band's assistance. Author Frank Moriarty considers Allman's Hammond playing a vital ingredient of the band's sound.
Deep Purple's Jon Lord became inspired to play the Hammond after hearing Jimmy Smith's "Walk on the Wild Side". He modified his Hammond so it could be played through a Marshall stack to get a growling, overdriven sound that became his trademark. This organ was later acquired by Joey DeFrancesco. Van der Graaf Generator's Hugh Banton modified his Hammond E-100 extensively with customised electronics, including the ability to put effects such as distortion on one manual but not the other, and rewiring the motor. The modifications created, in Banton's own words, "unimaginable sonic chaos."
The Hammond was a key instrument in progressive rock music. Author Edward Macan thinks this is because of its versatility, allowing both chords and lead lines to be played, and a choice between quiet and clean, and what Emerson described as a "tacky, aggressive, almost distorted, angry sound." Emerson first found commercial success with the Nice, with whom he used and abused an L-100, putting knives in the instrument, setting fire to it, playing it upside down, or riding it across stage in the manner of a horse. He continued to play the instrument in this manner alongside other keyboards in Emerson, Lake and Palmer. Other prominent Hammond organists in progressive rock include the Zombies' and Argent's Rod Argent, Yes's Tony Kaye and Rick Wakeman, Focus's Thijs van Leer, Uriah Heep's Ken Hensley, Pink Floyd's Rick Wright, Kansas's Steve Walsh, and Genesis's Tony Banks. Banks later claimed he only used the Hammond because a piano was impractical to transport to gigs.
Ska and reggae music made frequent use of the Hammond throughout the 1960s and 1970s. Junior Marvin started to play the instrument after hearing Booker T & The MGs' "Green Onions", although he complained about its weight. Winston Wright was regarded in the music scene of Jamaica as one of the best organ players, and used the Hammond when performing live with Toots and the Maytals, as well as playing it on sessions with Lee "Scratch" Perry, Jimmy Cliff, and Gregory Isaacs. Tyrone Downie, best known as Bob Marley and the Wailers' keyboard player, made prominent use of the Hammond on "No Woman, No Cry", as recorded at the Lyceum Theatre, London, for the album "Live!"
The Hammond organ was perceived as outdated by the late 1970s, particularly in the UK, where it was often used to perform pop songs in social clubs. Punk and new wave bands tended to prefer second-hand combo organs from the 1960s, or use no keyboards at all. Other groups started taking advantage of cheaper and more portable synthesizers that were starting to come onto the market. The Stranglers' Dave Greenfield was an exception to this, and used a Hammond onstage during the band's early career. Andy Thompson, better known for being an aficionado of the Mellotron, stated, "the Hammond never really went away. There are a lot of studios that have had a B-3 or C-3 sitting away in there since the 70s." The instrument underwent a brief renaissance in the 1980s with the mod revival movement. Taylor played the Hammond through the 1980s, first with the Prisoners and later with the James Taylor Quartet. The sound of the Hammond has appeared in hip-hop music, albeit mostly via samples. A significant use is the Beastie Boys' 1992 single "So What'cha Want", which features a Hammond mixed into the foreground (the instrument was recorded live rather than being sampled).
Jazz, blues, and gospel musicians continued to use Hammond organs into the 21st century. Barbara Dennerlein has received critical acclaim for her performances on the Hammond, particularly her use of the bass pedals, and has modified the instrument to include samplers triggered by the pedals. Joey DeFrancesco embraced the instrument during the 1990s, and later collaborated with Jimmy Smith. He is positive about the future of the Hammond organ, saying "Everybody loves it. It makes you feel good ... I think it's bigger now than ever." Grammy-winning jazz keyboardist Cory Henry learned to play the Hammond organ at age two and used it on 2016's "The Revival".
Hypoglycemia
Hypoglycemia, also known as low blood sugar, is a fall in blood sugar to levels below normal. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death. A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly.
The most common cause of hypoglycemia is medications used to treat diabetes mellitus such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual or drunk alcohol. Other causes of hypoglycemia include kidney failure, certain tumors (such as insulinoma), liver disease, hypothyroidism, starvation, inborn error of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours.
The glucose level that defines hypoglycemia is variable. In people with diabetes, levels below 3.9 mmol/L (70 mg/dL) are diagnostic. In adults without diabetes, symptoms related to low blood sugar, low blood sugar at the time of symptoms and improvement when blood sugar is restored to normal confirm the diagnosis. Otherwise, a level below 2.8 mmol/L (50 mg/dL) after not eating or following exercise may be used. In newborns, a level below 2.2 mmol/L (40 mg/dL), or less than 3.3 mmol/L (60 mg/dL) if symptoms are present, indicates hypoglycemia. Other tests that may be useful in determining the cause include insulin and C peptide levels in the blood.
Among people with diabetes, prevention is by matching the foods eaten with the amount of exercise and the medications used. When people feel their blood sugar is low, testing with a glucose monitor is recommended. Some people have few initial symptoms of low blood sugar, and frequent routine testing in this group is recommended. Treatment of hypoglycemia is by eating foods high in simple sugars or taking dextrose. If a person is not able to take food by mouth, glucagon by injection or in the nose may help. The treatment of hypoglycemia unrelated to diabetes includes treating the underlying problem as well as maintaining a healthy diet. The term "hypoglycemia" is sometimes incorrectly used to refer to idiopathic postprandial syndrome, a controversial condition with similar symptoms that occur following eating but with normal blood sugar levels.
Hypoglycemic symptoms and manifestations can be divided into those produced by the counterregulatory hormones (epinephrine/adrenaline and glucagon) triggered by the falling glucose, and the neuroglycopenic effects produced by the reduced brain sugar.
Not all of these manifestations occur in every case of hypoglycemia. There is no consistent order to the appearance of the symptoms, if symptoms even occur. Specific manifestations may also vary by age, by severity of the hypoglycemia and the speed of the decline. In young children, vomiting can sometimes accompany morning hypoglycemia with ketosis. In older children and adults, moderately severe hypoglycemia can resemble mania, mental illness, drug intoxication, or drunkenness. In the elderly, hypoglycemia can produce focal stroke-like effects or a hard-to-define malaise. The symptoms of a single person may be similar from episode to episode, but are not necessarily so and may be influenced by the speed at which glucose levels are dropping, as well as previous incidents.
In newborns, hypoglycemia can produce irritability, jitters, myoclonic jerks, cyanosis, respiratory distress, apneic episodes, sweating, hypothermia, somnolence, hypotonia, refusal to feed, and seizures or "spells." Hypoglycemia can resemble asphyxia, hypocalcemia, sepsis, or heart failure.
In both young and old people with hypoglycemia, the brain may habituate to low glucose levels, with a reduction of noticeable symptoms despite neuroglycopenic impairment. In insulin-dependent diabetic people this phenomenon is termed "hypoglycemia unawareness" and is a significant clinical problem when improved glycemic control is attempted. Another aspect of this phenomenon occurs in type I glycogenosis, when chronic hypoglycemia before diagnosis may be better tolerated than acute hypoglycemia after treatment is underway.
Hypoglycemic symptoms can also occur during sleep. Examples include damp bed sheets or clothes from perspiration. Nightmares or crying out can also be a sign of hypoglycemia. Once the individual is awake, they may feel tired, irritable, or confused; these too may be signs of hypoglycemia.
In nearly all cases, hypoglycemia that is severe enough to cause seizures or unconsciousness can be reversed without obvious harm to the brain. Cases of death or permanent neurological damage occurring with a single episode have usually involved prolonged, untreated unconsciousness, interference with breathing, severe concurrent disease, or some other type of vulnerability. Nevertheless, brain damage or death has occasionally resulted from severe hypoglycemia.
Research in healthy adults shows that mental efficiency declines slightly but measurably as blood glucose falls below 3.6 mM (65 mg/dL). Hormonal defense mechanisms (adrenaline and glucagon) are normally activated as it drops below a threshold level (about 55 mg/dL (3.0 mM) for most people), producing the typical hypoglycemic symptoms of shakiness and dysphoria. Obvious impairment may not occur until the glucose falls below 40 mg/dL (2.2 mM), and many healthy people may occasionally have glucose levels below 65 mg/dL in the morning without apparent effects. Since the brain effects of hypoglycemia, termed neuroglycopenia, determine whether a given low glucose is a "problem" for that person, most doctors use the term "hypoglycemia" only when a moderately low glucose level is accompanied by symptoms or brain effects.
Determining the presence of both parts of this definition is not always straightforward, as hypoglycemic symptoms and effects are vague and can be produced by other conditions; people with recurrently low glucose levels can lose their threshold symptoms so that severe neuroglycopenic impairment can occur without much warning, and many measurement methods (especially glucose meters) are imprecise at low levels.
It may take longer to recover from severe hypoglycemia with unconsciousness or seizure even after restoration of normal blood glucose. When a person has not been unconscious, failure of carbohydrate to reverse the symptoms in 10–15 minutes increases the likelihood that hypoglycemia was not the cause of the symptoms. When severe hypoglycemia has persisted in a hospitalized person, the amount of glucose required to maintain satisfactory blood glucose levels becomes an important clue to the underlying cause. Glucose requirements above 10 mg/kg/minute in infants, or 6 mg/kg/minute in children and adults are strong evidence for hyperinsulinism. In this context this is referred to as the "glucose infusion rate" (GIR). Finally, the blood glucose response to glucagon given when the glucose is low can also help distinguish among various types of hypoglycemia. A rise of blood glucose by more than 30 mg/dL (1.7 mmol/L) suggests insulin excess as the probable cause of the hypoglycemia.
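The glucose infusion rate and the thresholds quoted above can be expressed as a small helper. The conversion from a dextrose drip to mg/kg/minute is the standard one (an X% dextrose solution carries 10·X mg of glucose per mL) and is an addition of this sketch, not something stated in the text:

```python
def glucose_infusion_rate(dextrose_pct, rate_ml_per_h, weight_kg):
    """Glucose delivered by an IV dextrose drip, in mg/kg/minute.

    An X% dextrose solution contains 10*X mg of glucose per mL
    (standard concentration arithmetic, assumed here).
    """
    mg_per_min = dextrose_pct * 10 * rate_ml_per_h / 60.0
    return mg_per_min / weight_kg

def suggests_hyperinsulinism(gir_mg_kg_min, infant=False):
    """Thresholds quoted in the text: >10 mg/kg/min in infants,
    >6 mg/kg/min in children and adults."""
    return gir_mg_kg_min > (10.0 if infant else 6.0)

# Hypothetical example: 10% dextrose at 30 mL/h in a 3 kg infant
gir = glucose_infusion_rate(10, 30, 3)  # about 16.7 mg/kg/min
```

A rate that high to merely hold the glucose steady would, per the thresholds above, point toward hyperinsulinism.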
Significant hypoglycemia appears to increase the risk of cardiovascular disease.
The most common cause of hypoglycemia is medications used to treat diabetes mellitus such as insulin, sulfonylureas, and biguanides. Risk is greater in diabetics who have eaten less than usual, exercised more than usual, or drunk alcohol. Other causes of hypoglycemia include kidney failure, certain tumors, liver disease, hypothyroidism, starvation, inborn errors of metabolism, severe infections, reactive hypoglycemia, and a number of drugs including alcohol. Low blood sugar may occur in babies who are otherwise healthy who have not eaten for a few hours. Inborn errors of metabolism may include the lack of an enzyme to make glycogen (glycogen storage type 0).
Serious illness may result in low blood sugar. Severe disease of nearly all major organ systems can cause hypoglycemia as a secondary problem. Hospitalized persons, especially in intensive care units or those prevented from eating, can develop hypoglycemia from a variety of circumstances related to the care of their primary disease. Hypoglycemia in these circumstances is often multifactorial or caused by the healthcare. Once identified, these types of hypoglycemia are readily reversed and prevented, and the underlying disease becomes the primary problem.
Not enough cortisol, such as in Addison's disease, not enough glucagon, or not enough epinephrine can result in low blood sugar. This is a more common cause in children.
Like most animal tissues, brain metabolism depends primarily on glucose for fuel in most circumstances. A limited amount of glucose can be derived from glycogen stored in astrocytes, but it is consumed within minutes. For most practical purposes, the brain is dependent on a continual supply of glucose diffusing from the blood into the interstitial tissue within the central nervous system and into the neurons themselves.
Therefore, if the amount of glucose supplied by the blood falls, the brain is one of the first organs affected. In most people, subtle reduction of mental efficiency can be observed when the glucose falls below 65 mg/dL (3.6 mM). Impairment of action and judgment usually becomes obvious below 40 mg/dL (2.2 mM). Seizures may occur as the glucose falls further. As blood glucose levels fall below 10 mg/dL (0.55 mM), most neurons become electrically silent and nonfunctional, resulting in coma. These brain effects are collectively referred to as neuroglycopenia.
The importance of an adequate supply of glucose to the brain is apparent from the number of nervous, hormonal and metabolic responses to a falling glucose level. Most of these are defensive or adaptive, tending to raise the blood sugar by glycogenolysis and gluconeogenesis or provide alternative fuels. If the blood sugar level falls too low, the liver converts stored glycogen into glucose and releases it into the bloodstream, which can stave off the most severe effects for a short time.
Brief or mild hypoglycemia produces no lasting effects on the brain, though it can temporarily alter brain responses to additional hypoglycemia. Prolonged, severe hypoglycemia can produce a wide range of lasting damage. This can include impairment of cognitive function, motor control, or even consciousness. The likelihood of permanent brain damage from any given instance of severe hypoglycemia is difficult to estimate and depends on a multitude of factors such as age, recent blood and brain glucose experience, concurrent problems such as hypoxia, and availability of alternative fuels. Prior hypoglycemia also blunts the counterregulatory response to future hypoglycemia. While the mechanism leading to blunted counterregulation is unknown, several mechanisms have been proposed.
It has been frequently found that those type 1 diabetics found "dead in bed" in the morning after suspected severe hypoglycemia had some underlying coronary pathology that led to a fatal heart attack. In 2010, a case report was published demonstrating the first known case of an individual found "dead in bed" whilst wearing a continuous glucose monitor (CGM), which provided a history of glucose levels before the fatal event; the person had suffered a severe hypoglycemic incident, and while the authors described only a "minimal counter-regulatory response" they stated no "anatomic abnormalities" were observed during autopsy.
The vast majority of symptomatic hypoglycemic episodes result in no detectable permanent harm.
The glucose level that defines hypoglycemia is variable. In diabetics a level below 3.9 mmol/L (70 mg/dL) is diagnostic. In adults without diabetes, symptoms related to low blood sugar, low blood sugar at the time of symptoms, and improvement when blood sugar is restored to normal confirm the diagnosis. This is known as the Whipple's triad. Otherwise a level below 2.8 mmol/L (50 mg/dL) after not eating or following exercise may be used. In newborns a level below 2.2 mmol/L (40 mg/dL) or less than 3.3 mmol/L (60 mg/dL) if symptoms are present indicates hypoglycemia. Other tests that may be useful in determining the cause include insulin and C peptide levels in the blood. Hyperglycemia, a high blood sugar, is the opposite condition.
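The cutoffs quoted in this paragraph can be collected into a single illustrative function. This is a sketch for clarity only, not a clinical tool; in particular, diagnosis in non-diabetic adults rests on Whipple's triad, not on a number alone:

```python
def is_hypoglycemic(glucose_mmol_l, diabetic=False, newborn=False,
                    symptomatic=False):
    """Apply the diagnostic cutoffs quoted in the text (illustrative only)."""
    if newborn:
        # Below 2.2 mmol/L (40 mg/dL), or below 3.3 mmol/L (60 mg/dL)
        # when symptoms are present.
        return glucose_mmol_l < (3.3 if symptomatic else 2.2)
    if diabetic:
        # Below 3.9 mmol/L (70 mg/dL) is diagnostic in diabetics.
        return glucose_mmol_l < 3.9
    # Non-diabetic adult, measured after fasting or exercise:
    return glucose_mmol_l < 2.8
```

For example, a reading of 3.5 mmol/L counts as hypoglycemia in a person on insulin but not, by itself, in a healthy fasting adult.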
Throughout a 24‑hour period, blood plasma glucose levels are generally maintained between 4 and 8 mmol/L (72 and 144 mg/dL). Although 3.3 or 3.9 mmol/L (60 or 70 mg/dL) is commonly cited as the lower limit of normal glucose, symptoms of hypoglycemia usually do not occur until 2.8 to 3.0 mmol/L (50 to 54 mg/dL).
In cases of recurrent hypoglycemia with severe symptoms, the best method of excluding dangerous conditions is often a "diagnostic fast". This is usually conducted in the hospital, and the duration depends on the age of the person and response to the fast. A healthy adult can usually maintain a glucose level above 50 mg/dL (2.8 mM) for 72 hours, a child for 36 hours, and an infant for 24 hours. The purpose of the fast is to determine whether the person can maintain his or her blood glucose as long as normal, and can respond to fasting with the appropriate metabolic changes. At the end of the fast the insulin should be nearly undetectable and ketosis should be fully established. The person's blood glucose levels are monitored and a critical specimen is obtained if the glucose falls. Despite its unpleasantness and expense, a diagnostic fast may be the only effective way to confirm or refute a number of serious forms of hypoglycemia, especially those involving excessive insulin.
The precise level of glucose considered low enough to define hypoglycemia is dependent on (1) the measurement method, (2) the age of the person, (3) presence or absence of effects, and (4) the purpose of the definition. While there is no disagreement as to the normal range of blood sugar, debate continues as to what degree of hypoglycemia warrants medical evaluation or treatment, or can cause harm.
Deciding whether a blood glucose in the borderline range of 45–75 mg/dL (2.5–4.2 mM) represents clinically problematic hypoglycemia is not always simple. This leads people to use different "cutoff levels" of glucose in different contexts and for different purposes. Because of all the variations, the Endocrine Society recommends that a diagnosis of hypoglycemia as a problem for an individual be based on the combination of a low glucose level and evidence of adverse effects.
Glucose concentrations are expressed as milligrams per deciliter (mg/dL or mg/100 mL) in Lebanon, the United States, Japan, Portugal, Spain, France, Belgium, Egypt, Turkey, Saudi Arabia, Colombia, India and Israel, while millimoles per liter (mmol/L or mM) are the units used in most of the rest of the world. Glucose concentrations expressed as mg/dL can be converted to mmol/L by dividing by 18.0, a factor derived from the molar mass of glucose (180 g/mol) combined with the conversion between decilitres and litres. For example, a glucose concentration of 90 mg/dL is 5.0 mmol/L or 5.0 mM.
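The conversion described above is simple enough to state as a pair of one-line functions:

```python
def mgdl_to_mmol(mg_dl):
    """mg/dL to mmol/L: the factor 18.0 folds together glucose's molar
    mass (180 g/mol) and the mg-to-mmol, dL-to-L unit changes."""
    return mg_dl / 18.0

def mmol_to_mgdl(mmol_l):
    """mmol/L back to mg/dL."""
    return mmol_l * 18.0

print(mgdl_to_mmol(90))  # 5.0, matching the example in the text
```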
The circumstances of hypoglycemia provide most of the clues to diagnosis. Circumstances include the age of the person, time of day, time since last meal, previous episodes, nutritional status, physical and mental development, drugs or toxins (especially insulin or other diabetes drugs), diseases of other organ systems, family history, and response to treatment. When hypoglycemia occurs repeatedly, a record or "diary" of the spells over several months, noting the circumstances of each spell (time of day, relation to last meal, nature of last meal, response to carbohydrate, and so forth) may be useful in recognizing the nature and cause of the hypoglycemia.
Blood glucose levels discussed in this article are venous plasma or serum levels measured by standard, automated glucose oxidase methods used in medical laboratories. For clinical purposes, plasma and serum levels are similar enough to be interchangeable. Arterial plasma or serum levels are slightly higher than venous levels, and capillary levels are typically in between. This difference between arterial and venous levels is small in the fasting state but is amplified and can be greater than 10% in the postprandial state. On the other hand, whole blood glucose levels (e.g., by fingerprick meters) are about 10–15% lower than venous plasma levels. Furthermore, available fingerstick glucose meters are only warranted to be accurate to within 15% of a simultaneous laboratory value under optimal conditions, and home use in the investigation of hypoglycemia is fraught with misleading low numbers. In other words, a meter glucose reading of 39 mg/dL could be properly obtained from a person whose laboratory serum glucose was 53 mg/dL; even wider variations can occur with "real world" home use.
Two other factors significantly affect glucose measurement: hematocrit and delay after blood drawing. The disparity between venous and whole blood concentrations is greater when the hematocrit is high, as in newborn infants, or adults with polycythemia. High neonatal hematocrits are particularly likely to confound glucose measurement by meter. Second, unless the specimen is drawn into a fluoride tube or processed immediately to separate the serum or plasma from the cells, the measurable glucose will be gradually lowered by "in vitro" metabolism of the glucose at a rate of approximately 7 mg/dL/h, or even more in the presence of leukocytosis. The delay that occurs when blood is drawn at a satellite site and transported to a central laboratory hours later for routine processing is a common cause of mildly low glucose levels in general chemistry panels.
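The effect of delayed processing can be estimated from the decay rate quoted above. The linear model below is a simplification for illustration; the true rate varies with temperature, cell counts, and other factors:

```python
def measured_glucose(true_mg_dl, delay_hours, decay_rate=7.0):
    """Approximate the reading from an unseparated, non-fluoride tube
    after a processing delay, using the ~7 mg/dL per hour in-vitro
    glycolysis rate quoted in the text (a linear simplification)."""
    return max(0.0, true_mg_dl - decay_rate * delay_hours)

# A sample drawn at a true 80 mg/dL but processed 3 hours later
# would read around 59 mg/dL, a spuriously "low" value:
print(measured_glucose(80, 3))  # 59.0
```

This is why a mildly low glucose on a routine chemistry panel drawn at a satellite site often reflects transport delay rather than true hypoglycemia.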
Children's blood sugar levels are often slightly lower than adults'. Overnight fasting glucose levels are below 70 mg/dL (3.9 mM) in 5% of healthy adults, but up to 5% of children can be below 60 mg/dL (3.3 mM) in the morning fasting state. As the duration of fasting is extended, a higher percentage of infants and children will have mildly low plasma glucose levels, typically without symptoms. The normal range of newborn blood sugars continues to be debated. It has been proposed that newborn brains are able to use alternate fuels when glucose levels are low more readily than adults. Experts continue to debate the significance and risk of such levels, though the trend has been to recommend maintenance of glucose levels above 60–70 mg/dL the first day after birth.
Diabetic hypoglycemia represents a special case with respect to the relationship of measured glucose and hypoglycemic symptoms for several reasons. First, although home glucose meter readings are often misleading, the probability that a low reading, whether accompanied by symptoms or not, represents real hypoglycemia is much higher in a person who takes insulin than in someone who does not.
The following is a brief list of hormones and metabolites which may be measured in a critical sample. Not all tests are checked on every person. A "basic version" would include insulin, cortisol, and electrolytes, with C-peptide and drug screen for adults and growth hormone in children. The value of additional specific tests depends on the most likely diagnoses for an individual person, based on the circumstances described above. Many of these levels change within minutes, especially if glucose is given, and there is no value in measuring them after the hypoglycemia is reversed. Others, especially those lower in the list, remain abnormal even after hypoglycemia is reversed, and can be usefully measured even if a critical specimen is missed.
Part of the value of the critical sample may simply be the proof that the symptoms are indeed due to hypoglycemia. More often, measurement of certain hormones and metabolites at the time of hypoglycemia indicates which organs and body systems are responding appropriately and which are functioning abnormally. For example, when the blood glucose is low, hormones which raise the glucose should be rising and insulin secretion should be completely suppressed.
It can also be mistaken for alcohol intoxication.
The most effective means of preventing further episodes of hypoglycemia depends on the cause.
The risk of further episodes of diabetic hypoglycemia can often (but not always) be reduced by lowering the dose of insulin or other medications, or by more meticulous attention to blood sugar balance during unusual hours, higher levels of exercise, or decreasing alcohol intake.
Many of the inborn errors of metabolism require avoidance or shortening of fasting intervals, or extra carbohydrates. For the more severe disorders, such as type 1 glycogen storage disease, this may be supplied in the form of cornstarch every few hours or by continuous gastric infusion.
Several treatments are used for hyperinsulinemic hypoglycemia, depending on the exact form and severity. Some forms of congenital hyperinsulinism respond to diazoxide or octreotide. Surgical removal of the overactive part of the pancreas is curative with minimal risk when hyperinsulinism is focal or due to a benign insulin-producing tumor of the pancreas. When congenital hyperinsulinism is diffuse and refractory to medications, near-total pancreatectomy may be the treatment of last resort, but in this condition is less consistently effective and fraught with more complications.
Hypoglycemia due to hormone deficiencies such as hypopituitarism or adrenal insufficiency usually ceases when the appropriate hormone is replaced.
Hypoglycemia due to dumping syndrome and other post-surgical conditions is best dealt with by altering diet. Including fat and protein with carbohydrates may slow digestion and reduce early insulin secretion. Some forms of this respond to treatment with an alpha-glucosidase inhibitor, which slows starch digestion.
Reactive hypoglycemia with demonstrably low blood glucose levels is most often a predictable nuisance which can be avoided by consuming fat and protein with carbohydrates, by adding morning or afternoon snacks, and reducing alcohol intake.
Idiopathic postprandial syndrome without demonstrably low glucose levels at the time of symptoms can be more of a management challenge. Many people find improvement by changing eating patterns (smaller meals, avoiding excessive sugar, mixed meals rather than carbohydrates by themselves), reducing intake of stimulants such as caffeine, or by making lifestyle changes to reduce stress. See the following section of this article.
Treatment of some forms of hypoglycemia, such as in diabetes, involves immediately raising the blood sugar to normal by eating carbohydrates such as sugars, determining the cause, and taking measures to prevent future episodes. However, this treatment is not optimal in other forms such as reactive hypoglycemia, where rapid carbohydrate ingestion may lead to a further hypoglycemic episode.
Blood glucose can be raised to normal within minutes by taking (or receiving) 10–20 grams of carbohydrate. It can be taken as food or drink if the person is conscious and able to swallow. This amount of carbohydrate is contained in about 3–4 ounces (100–120 ml) of orange, apple, or grape juice, although fruit juices contain a higher proportion of fructose, which is more slowly metabolized than pure dextrose. Alternatively, about 4–5 ounces (120–150 ml) of regular (non-diet) soda may also work, as will about one slice of bread, about 4 crackers, or about 1 serving of most starchy foods. Starch is quickly digested to glucose (unless the person is taking acarbose), but adding fat or protein retards digestion. Symptoms should begin to improve within 5 minutes, though full recovery may take 10–20 minutes. Overfeeding does not speed recovery, and if the person has diabetes it will simply produce hyperglycemia afterwards. A mnemonic used by the American Diabetes Association and others is the "rule of 15": consuming 15 grams of carbohydrate followed by a 15-minute wait, repeated if glucose remains below the target threshold (which varies by individual, commonly 70 mg/dL).
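The "rule of 15" is essentially a small check-treat-wait loop, which can be sketched as follows. The function, its parameters, and the three-cycle cap are hypothetical choices for illustration; this is not clinical guidance, and the threshold varies by individual:

```python
def rule_of_15(read_glucose, give_carbs, threshold=70, max_cycles=3):
    """Sketch of the ADA 'rule of 15': 15 g carbohydrate, wait ~15 min,
    recheck; repeat while glucose stays below the threshold.

    read_glucose: callable returning the current glucose in mg/dL
    give_carbs:   callable taking grams of fast-acting carbohydrate
    Returns True once glucose recovers, False if help should be escalated.
    """
    for _ in range(max_cycles):
        if read_glucose() >= threshold:
            return True          # recovered
        give_carbs(15)           # 15 g fast-acting carbohydrate
        # caller waits ~15 minutes here before the next recheck
    return False                 # still low: seek further medical care
```

For example, with successive readings of 55, 65, and 80 mg/dL the loop gives carbohydrate twice (30 g total) and then reports recovery on the third check.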
If a person has such severe effects of hypoglycemia that they cannot (due to combativeness) or should not (due to seizures or unconsciousness) be given anything by mouth, medical personnel such as paramedics, or in-hospital personnel, can establish IV access and give intravenous dextrose at concentrations varying with age (infants are given 2 ml/kg of dextrose 10%, children dextrose 25%, and adults dextrose 50%). Care must be taken in giving these solutions because they can cause skin necrosis if the IV infiltrates, sclerosis of veins, and many other fluid and electrolyte disturbances if administered incorrectly. If IV access cannot be established, the person can be given 1 to 2 milligrams of glucagon by intramuscular injection. More treatment information can be found in the article diabetic hypoglycemia. If a person has less severe effects and is conscious with the ability to swallow, medical personnel may administer gelatinous oral glucose. The soft drink Lucozade has been used for hypoglycemia in the United Kingdom; however, its formulation has recently replaced much of its glucose with artificial sweeteners, which do not treat hypoglycemia.
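The dextrose concentrations above are percentages weight/volume (a "D10" solution contains 10 g of dextrose per 100 mL), so the grams delivered by a weight-based dose are simple arithmetic. A hypothetical helper, for illustration only and emphatically not dosing advice:

```python
def dextrose_grams(volume_ml, percent):
    """Grams of dextrose contained in a volume of a DX% solution,
    where X% means X grams per 100 mL (weight/volume).
    Illustrative arithmetic only; not dosing guidance.
    """
    return volume_ml * percent / 100.0

# e.g., a 4 kg infant given 2 mL/kg of D10 receives:
dextrose_grams(4 * 2, 10)  # 0.8 g of dextrose
```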
One situation where starch may be less effective than glucose or sucrose is when a person is taking acarbose. Since acarbose and other alpha-glucosidase inhibitors prevent starch and other sugars from being broken down into monosaccharides that can be absorbed by the body, people taking these medications should consume monosaccharide-containing foods such as glucose tablets, honey, or juice to reverse hypoglycemia.
Hypoglycemia was first discovered by James Collip when he was working with Frederick Banting on purifying insulin in 1922. Collip was tasked with developing an assay to measure the activity of insulin. He first injected insulin into a rabbit, and then measured the reduction in blood glucose levels. Measuring blood glucose was a time-consuming step. Collip observed that if he injected rabbits with too large a dose of insulin, the rabbits began convulsing, went into a coma, and then died. This observation simplified his assay: he defined one unit of insulin as the amount necessary to induce this convulsive hypoglycemic reaction in a rabbit. Collip later found he could save money, and rabbits, by injecting them with glucose once they were convulsing.
The word "hypoglycemia" is also spelled "hypoglycaemia" or "hypoglycæmia". The term derives from Greek "ὑπογλυκαιμία": "hypo-" (under), "glykys" (sweet), and "haima" (blood), i.e., low blood sugar.
Henry Ford
Henry Ford (July 30, 1863 – April 7, 1947) was an American industrialist and business magnate, founder of the Ford Motor Company and chief developer of the assembly line technique of mass production. By creating the first automobile that middle-class Americans could afford, he converted the automobile from an expensive curiosity into an accessible conveyance that would profoundly impact the landscape of the 20th century.
His introduction of the Model T automobile revolutionized transportation and American industry. As the owner of the Ford Motor Company, he became one of the richest and best-known people in the world. He is credited with "Fordism": mass production of inexpensive goods coupled with high wages for workers. Ford had a global vision, with consumerism as the key to peace. His intense commitment to systematically lowering costs resulted in many technical and business innovations, including a franchise system that put dealerships throughout most of North America and in major cities on six continents. Ford left most of his vast wealth to the Ford Foundation and arranged for his family to control the company permanently.
Ford was also widely known for his pacifism during the first years of World War I, and for promoting antisemitic content, including "The Protocols of the Elders of Zion", through his newspaper "The Dearborn Independent" and the book "The International Jew", having an alleged influence on the development of Nazism.
Henry Ford was born July 30, 1863, on a farm in Greenfield Township, Michigan. His father, William Ford (1826–1905), was born in County Cork, Ireland, to a family which emigrated from Somerset, England in the 16th century. His mother, Mary Ford (née Litogot; 1839–1876), was born in Michigan as the youngest child of Belgian immigrants; her parents died when she was a child and she was adopted by neighbors, the O'Herns. Henry Ford's siblings were Margaret Ford (1867–1938); Jane Ford (c. 1868–1945); William Ford (1871–1917) and Robert Ford (1873–1934).
His father gave him a pocket watch in his early teens. At 15, Ford dismantled and reassembled the timepieces of friends and neighbors dozens of times, gaining the reputation of a watch repairman. At twenty, Ford walked four miles to their Episcopal church every Sunday.
Ford was devastated when his mother died in 1876. His father expected him to eventually take over the family farm, but he despised farm work. He later wrote, "I never had any particular love for the farm—it was the mother on the farm I loved."
In 1879, Ford left home to work as an apprentice machinist in Detroit, first with James F. Flower & Bros., and later with the Detroit Dry Dock Co. In 1882, he returned to Dearborn to work on the family farm, where he became adept at operating the Westinghouse portable steam engine. He was later hired by Westinghouse to service their steam engines. During this period Ford also studied bookkeeping at Goldsmith, Bryant & Stratton Business College in Detroit.
Ford stated that two major events occurred in 1875, when he was 12. He received a watch, and he witnessed the operation of a Nichols and Shepard road engine, "...the first vehicle other than horse-drawn that I had ever seen." In his farm workshop, Ford built a "steam wagon or tractor" and a steam car, but thought "steam was not suitable for light vehicles," as "the boiler was dangerous." Ford also stated he "did not see the use of experimenting with electricity" owing to the expense of trolley wires, and because "no storage battery was in sight of a weight that was practical." Then in 1885, Ford had the opportunity to repair an Otto engine, and built a four-cycle model in 1887, with a one-inch bore and a three-inch stroke. In 1890, Ford started work on a two-cylinder engine. Ford stated, "In 1892, I completed my first motor car, powered by a two cylinder four horsepower motor, with a two-and-half-inch bore and a six-inch stroke, which was connected to a countershaft by a belt, and then to the rear wheel by a chain." The belt was shifted by a clutch lever to control speeds of 10 or 20 miles per hour, augmented by a throttle. Other features included 28-inch wire bicycle wheels with rubber tires, a foot brake, a 3-gallon gasoline tank, and, later, a water jacket around the cylinders for cooling. Ford stated that "in the spring of 1893 the machine was running to my partial satisfaction and giving an opportunity further to test out the design and material on the road." Between 1895 and 1896, Ford drove that machine about 1000 miles. He then started a second car in 1896, eventually building three cars in his home workshop.
Ford married Clara Jane Bryant (1866–1950) on April 11, 1888, and supported himself by farming and running a sawmill. They had one child: Edsel Ford (1893–1943).
In 1891, Ford became an engineer with the Edison Illuminating Company of Detroit. After his promotion to Chief Engineer in 1893, he had enough time and money to devote attention to his personal experiments on gasoline engines. These experiments culminated in 1896 with the completion of a self-propelled vehicle which he named the Ford Quadricycle. He test-drove it on June 4. After various test drives, Ford brainstormed ways to improve the Quadricycle.
Also in 1896, Ford attended a meeting of Edison executives, where he was introduced to Thomas Edison. Edison approved of Ford's automobile experimentation. Encouraged by Edison, Ford designed and built a second vehicle, completing it in 1898. Backed by the capital of Detroit lumber baron William H. Murphy, Ford resigned from the Edison Company and founded the Detroit Automobile Company on August 5, 1899. However, the automobiles produced were of a lower quality and higher price than Ford wanted. Ultimately, the company was not successful and was dissolved in January 1901.
With the help of C. Harold Wills, Ford designed, built, and successfully raced a 26-horsepower automobile in October 1901. With this success, Murphy and other stockholders in the Detroit Automobile Company formed the Henry Ford Company on November 30, 1901, with Ford as chief engineer. In 1902, Murphy brought in Henry M. Leland as a consultant; Ford, in response, left the company bearing his name. With Ford gone, Murphy renamed the company the Cadillac Automobile Company.
Teaming up with former racing cyclist Tom Cooper, Ford also produced the 80+ horsepower racer "999" which Barney Oldfield was to drive to victory in a race in October 1902. Ford received the backing of an old acquaintance, Alexander Y. Malcomson, a Detroit-area coal dealer. They formed a partnership, "Ford & Malcomson, Ltd." to manufacture automobiles. Ford went to work designing an inexpensive automobile, and the duo leased a factory and contracted with a machine shop owned by John and Horace E. Dodge to supply over $160,000 in parts. Sales were slow, and a crisis arose when the Dodge brothers demanded payment for their first shipment.
In response, Malcomson brought in another group of investors and convinced the Dodge Brothers to accept a portion of the new company. Ford & Malcomson was reincorporated as the Ford Motor Company on June 16, 1903, with $28,000 capital. The original investors included Ford and Malcomson, the Dodge brothers, Malcomson's uncle John S. Gray, Malcomson's secretary James Couzens, and two of Malcomson's lawyers, John W. Anderson and Horace Rackham. Ford then demonstrated a newly designed car on the ice of Lake St. Clair, driving the mile in 39.4 seconds and setting a new land speed record. Convinced by this success, the race driver Barney Oldfield, who named this new Ford model "999" in honor of the fastest locomotive of the day, took the car around the country, making the Ford brand known throughout the United States. Ford was also one of the early backers of the Indianapolis 500.
The Model T was introduced on October 1, 1908. It had the steering wheel on the left, which every other company soon copied. The entire engine and transmission were enclosed; the four cylinders were cast in a solid block; the suspension used two semi-elliptic springs. The car was very simple to drive, and easy and cheap to repair. It was so cheap, at $825 in 1908 (and the price fell every year), that by the 1920s a majority of American drivers had learned to drive on the Model T.
Ford created a huge publicity machine in Detroit to ensure every newspaper carried stories and ads about the new product. Ford's network of local dealers made the car ubiquitous in almost every city in North America. As independent dealers, the franchises grew rich and publicized not just the Ford but the concept of automobiling; local motor clubs sprang up to help new drivers and to encourage exploring the countryside. Ford was always eager to sell to farmers, who looked on the vehicle as a commercial device to help their business. Sales skyrocketed—several years posted 100% gains on the previous year. Always on the hunt for more efficiency and lower costs, in 1913 Ford introduced the moving assembly belts into his plants, which enabled an enormous increase in production. Although Ford is often credited with the idea, contemporary sources indicate that the concept and its development came from employees Clarence Avery, Peter E. Martin, Charles E. Sorensen, and C. Harold Wills. (See Ford Piquette Avenue Plant)
Sales passed 250,000 in 1914. By 1916, as the price dropped to $360 for the basic touring car, sales reached 472,000. (Using the consumer price index, this price was equivalent to $7,828 in 2015 dollars.)
By 1918, half of all cars in the United States were Model Ts. All new cars were black; as Ford wrote in his autobiography, "Any customer can have a car painted any color that he wants so long as it is black". Until the development of the assembly line, which mandated black because of its quicker drying time, Model Ts were available in other colors, including red. The design was fervently promoted and defended by Ford, and production continued as late as 1927; the final total production was 15,007,034, achieved in 19 years from the introduction of the first Model T in 1908, a record that stood for the next 45 years.
President Woodrow Wilson asked Ford to run as a Democrat for the United States Senate from Michigan in 1918. Although the nation was at war, Ford ran as a peace candidate and a strong supporter of the proposed League of Nations. Ford was defeated in a close election by the Republican candidate, Truman Newberry, a former United States Secretary of the Navy.
Henry Ford turned the presidency of Ford Motor Company over to his son Edsel Ford in December 1918. Henry retained final decision authority and sometimes reversed the decisions of his son. Ford started another company, Henry Ford and Son, and made a show of taking himself and his best employees to the new company; the goal was to scare the remaining holdout stockholders of the Ford Motor Company to sell their stakes to him before they lost most of their value. (He was determined to have full control over strategic decisions.) The ruse worked, and Ford and Edsel purchased all remaining stock from the other investors, thus giving the family sole ownership of the company.
In 1921, Ford also purchased Lincoln Motor Co., founded by Cadillac founder Henry Leland and his son Wilfred during World War I. The company went into receivership and the Lelands agreed to a Ford buyout, although they were soon expelled from it. Despite this acquisition of a premium car make, Henry displayed relatively little enthusiasm for luxury automobiles in contrast to Edsel, who actively sought to expand Ford into the upscale market. The original Lincoln Model L the Lelands had introduced in 1920 was also kept in production for a decade untouched, until it became too outdated and was replaced by the modernized Model K in 1931.
By the mid-1920s, General Motors was rapidly rising as the leading American automobile manufacturer. GM president Alfred Sloan established the company's "price ladder" whereby GM would offer an automobile for "every purse and purpose" in contrast to Ford's lack of interest in anything outside the low end market. Although Henry Ford was against replacing the Model T, now 16 years old, Chevrolet was mounting a bold new challenge as the make had been established under Sloan's price ladder as GM's entry-level division. Ford also resisted the increasingly popular idea of payment plans for cars. With Model T sales starting to slide, Ford was forced to relent and approve work on a successor model, shutting down production for 18 months. During this time, Ford constructed a massive new assembly plant at River Rouge for the new Model A, which launched in 1927.
In addition to its price ladder, GM also quickly established itself at the forefront of automotive styling under Harley Earl's Arts & Color Department, another area of automobile design that Henry Ford did not entirely appreciate or understand and Ford would not have a true equivalent of the GM styling department for many years.
By 1926, flagging sales of the Model T finally convinced Ford to make a new model. He pursued the project with a great deal of interest in the design of the engine, chassis, and other mechanical necessities, while leaving the body design to his son. Although Ford fancied himself an engineering genius, he had little formal training in mechanical engineering and could not even read a blueprint. A talented team of engineers performed most of the actual work of designing the Model A (and later the flathead V8) with Ford supervising them closely and giving them overall direction. Edsel also managed to prevail over his father's initial objections in the inclusion of a sliding-shift transmission.
The result was the successful Ford Model A, introduced in December 1927 and produced through 1931, with a total output of more than 4 million. Subsequently, the Ford company adopted an annual model change system similar to that recently pioneered by its competitor General Motors (and still in use by automakers today). Not until the 1930s did Ford overcome his objection to finance companies, and the Ford-owned Universal Credit Corporation became a major car-financing operation. Henry Ford still resisted many technological innovations such as hydraulic brakes and all-metal roofs, which Ford vehicles did not adopt until 1935–36. For 1932, however, Ford dropped a bombshell with the flathead Ford V8, the first low-priced eight-cylinder engine. The flathead V8, variants of which were used in Ford vehicles for 20 years, was the result of a secret project launched in 1930; Henry had originally considered a radical X-8 engine before agreeing to a conventional design. It gave Ford a reputation as a performance make well suited for hot-rodding.
Ford did not believe in accountants; he amassed one of the world's largest fortunes without ever having his company audited under his administration. Without an accounting department, Ford had no way of knowing exactly how much money was being taken in and spent each month and the company's bills and invoices were reportedly guessed at by weighing them on a scale. Not until 1956 would Ford be a publicly traded company.
Also at Edsel's insistence, Ford launched Mercury in 1939 as a mid-range make to challenge Dodge and Buick, although Henry also displayed relatively little enthusiasm for it.
Ford was a pioneer of "welfare capitalism", designed to improve the lot of his workers and especially to reduce the heavy turnover that had many departments hiring 300 men per year to fill 100 slots. Efficiency meant hiring and keeping the best workers.
Ford astonished the world in 1914 by offering a $5 per day wage, which more than doubled the rate of most of his workers. A Cleveland, Ohio, newspaper editorialized that the announcement "shot like a blinding rocket through the dark clouds of the present industrial depression." The move proved extremely profitable; instead of constant turnover of employees, the best mechanics in Detroit flocked to Ford, bringing their human capital and expertise, raising productivity, and lowering training costs. Ford announced his $5-per-day program on January 5, 1914, raising the minimum daily pay from $2.34 to $5 for qualifying male workers.
Detroit was already a high-wage city, but competitors were forced to raise wages or lose their best workers. Ford's policy proved, however, that paying people more would enable Ford workers to afford the cars they were producing and be good for the local economy. He viewed the increased wages as profit-sharing linked with rewarding those who were most productive and of good character. It may have been Couzens who convinced Ford to adopt the $5-day wage.
Real profit-sharing was offered to employees who had worked at the company for six months or more, and, importantly, conducted their lives in a manner of which Ford's "Social Department" approved. They frowned on heavy drinking, gambling, and (what today are called) deadbeat dads. The Social Department used 50 investigators, plus support staff, to maintain employee standards; a large percentage of workers were able to qualify for this "profit-sharing."
Ford's incursion into his employees' private lives was highly controversial, and he soon backed off from the most intrusive aspects. By the time he wrote his 1922 memoir, he spoke of the Social Department and of the private conditions for profit-sharing in the past tense, admitting that "paternalism has no place in industry. Welfare work that consists in prying into employees' private concerns is out of date. Men need counsel and men need help, often special help; and all this ought to be rendered for decency's sake. But the broad workable plan of investment and participation will do more to solidify industry and strengthen organization than will any social work on the outside. Without changing the principle we have changed the method of payment."
In addition to raising the wages of his workers, Ford also introduced a new, reduced workweek in 1926. The decision was made in 1922, when Ford and Crowther described it as six 8-hour days, giving a 48-hour week, but in 1926 it was announced as five 8-hour days, giving a 40-hour week. (Apparently the program started with Saturday being a workday and sometime later it was changed to a day off.) On May 1, 1926, the Ford Motor Company's factory workers switched to a five-day 40-hour workweek, with the company's office workers making the transition the following August.
Ford had made the decision to boost productivity, as workers were expected to put more effort into their work in exchange for more leisure time, and because he believed decent leisure time was good for business, since workers would actually have more time to purchase and consume more goods. However, altruistic concerns also played a role, with Ford explaining "It is high time to rid ourselves of the notion that leisure for workmen is either 'lost time' or a class privilege."
Ford was adamantly against labor unions. He explained his views on unions in Chapter 18 of "My Life and Work". He thought they were too heavily influenced by some leaders who, despite their ostensible good motives, would end up doing more harm than good for workers. Most wanted to restrict productivity as a means to foster employment, but Ford saw this as self-defeating because, in his view, productivity was necessary for economic prosperity to exist.
He believed that productivity gains that obviated certain jobs would nevertheless stimulate the larger economy and thus grow new jobs elsewhere, whether within the same corporation or in others. Ford also believed that union leaders had a perverse incentive to foment perpetual socio-economic crisis as a way to maintain their own power. Meanwhile, he believed that smart managers had an incentive to do right by their workers, because doing so would maximize their own profits. Ford did acknowledge, however, that many managers were basically too bad at managing to understand this fact. But Ford believed that eventually, if good managers such as he could fend off the attacks of misguided people from both left and right (i.e., both socialists and bad-manager reactionaries), the good managers would create a socio-economic system wherein neither bad management nor bad unions could find enough support to continue existing.
To forestall union activity, Ford promoted Harry Bennett, a former Navy boxer, to head the Service Department. Bennett employed various intimidation tactics to squash union organizing. The most famous incident, on May 26, 1937, involved Bennett's security men beating with clubs members of the United Automobile Workers, including Walter Reuther. While Bennett's men were beating the UAW representatives, the supervising police chief on the scene was Carl Brooks, an alumnus of Bennett's Service Department, and [Brooks] "did not give orders to intervene." The following day photographs of the injured UAW members appeared in newspapers, later becoming known as The Battle of the Overpass.
In the late 1930s and early 1940s, Edsel—who was president of the company—thought Ford had to come to some sort of collective bargaining agreement with the unions because the violence, work disruptions, and bitter stalemates could not go on forever. But Ford, who still had the final veto in the company on a "de facto" basis even if not an official one, refused to cooperate. For several years, he kept Bennett in charge of talking to the unions that were trying to organize the Ford Motor Company. Sorensen's memoir makes clear that Ford's purpose in putting Bennett in charge was to make sure no agreements were ever reached.
The Ford Motor Company was the last Detroit automaker to recognize the UAW, despite pressure from the rest of the U.S. automotive industry and even the U.S. government. A sit-down strike by the UAW union in April 1941 closed the River Rouge Plant. Sorensen recounted that a distraught Henry Ford was very close to following through with a threat to break up the company rather than cooperate, but his wife Clara told him she would leave him if he destroyed the family business. In her view, it would not be worth the chaos it would create. Ford complied with his wife's ultimatum, and even agreed with her in retrospect. Overnight, the Ford Motor Company went from the most stubborn holdout among automakers to the one with the most favorable UAW contract terms. The contract was signed in June 1941. About a year later, Ford told Walter Reuther, "It was one of the most sensible things Harry Bennett ever did when he got the UAW into this plant." Reuther inquired, "What do you mean?" Ford replied, "Well, you've been fighting General Motors and the Wall Street crowd. Now you're in here and we've given you a union shop and more than you got out of them. That puts you on our side, doesn't it? We can fight General Motors and Wall Street together, eh?"
Ford, like other automobile companies, entered the aviation business during World War I, building Liberty engines. After the war, it returned to auto manufacturing until 1925, when Ford acquired the Stout Metal Airplane Company.
Ford's most successful aircraft was the Ford 4AT Trimotor, often called the "Tin Goose" because of its corrugated metal construction. It used a new alloy called Alclad that combined the corrosion resistance of aluminum with the strength of duralumin. The plane was similar to Fokker's V.VII-3m, and some say that Ford's engineers surreptitiously measured the Fokker plane and then copied it. The Trimotor first flew on June 11, 1926, and was the first successful U.S. passenger airliner, accommodating about 12 passengers in a rather uncomfortable fashion. Several variants were also used by the U.S. Army. Ford has been honored by the Smithsonian Institution for changing the aviation industry. A total of 199 Trimotors were built before production ended in 1933, when the Ford Airplane Division shut down because of poor sales during the Great Depression.
Ford opposed war, which he viewed as a terrible waste, and supported causes that opposed military intervention. Ford became highly critical of those who he felt financed war, and he tried to stop them. In 1915, the pacifist Rosika Schwimmer gained favor with Ford, who agreed to fund a Peace Ship to Europe, where World War I was raging. He and about 170 other prominent peace leaders traveled there. Ford's Episcopalian pastor, Reverend Samuel S. Marquis, accompanied him on the mission. Marquis headed Ford's Sociology Department from 1913 to 1921. Ford talked to President Wilson about the mission but had no government support. His group went to neutral Sweden and the Netherlands to meet with peace activists. A target of much ridicule, Ford left the ship as soon as it reached Sweden. In 1915, Ford blamed "German-Jewish bankers" for instigating the war.
Ford plants in the United Kingdom produced Fordson tractors to increase the British food supply, as well as trucks and aircraft engines. When the U.S. entered the war in 1917, the company became a major supplier of weapons, especially the Liberty engine for airplanes, and anti-submarine boats.
In 1918, with the war on and the League of Nations a growing issue in global politics, President Woodrow Wilson, a Democrat, encouraged Ford to run for a Michigan seat in the U.S. Senate. Wilson believed that Ford could tip the scales in Congress in favor of Wilson's proposed League. "You are the only man in Michigan who can be elected and help bring about the peace you so desire," the president wrote Ford. Ford wrote back: "If they want to elect me let them do so, but I won't make a penny's investment." Ford did run, however, and came within 4,500 votes of winning, out of more than 400,000 cast statewide. Ford remained a staunch Wilsonian and supporter of the League. When Wilson made a major speaking tour in the summer of 1919 to promote the League, Ford helped fund the attendant publicity.
Ford had opposed the United States entry into World War II and continued to believe that international business could generate the prosperity that would head off wars. Ford "insisted that war was the product of greedy financiers who sought profit in human destruction"; in 1939 he went so far as to claim that the torpedoing of U.S. merchant ships by German submarines was the result of conspiratorial activities undertaken by financier war-makers. "Financiers" was Ford's code word for Jews; he had also accused Jews of fomenting the First World War. In the run-up to World War II, and when the war erupted in 1939, he stated that he did not want to trade with belligerents. Like many other businessmen of the Great Depression era, he never liked or entirely trusted the Franklin Roosevelt Administration, and thought Roosevelt was inching the U.S. closer to war. Ford continued to do business with Nazi Germany, including the manufacture of war materiel. However, he also agreed to build warplane engines for the British government. In early 1940, he boasted that Ford Motor Company would soon be able to produce 1,000 U.S. warplanes a day, even though the company did not have an aircraft production facility at that time.
Beginning in 1940, with the requisitioning of between 100 and 200 French POWs to work as slave laborers, "Ford-Werke" contravened Article 31 of the 1929 Geneva Convention. At that time, the U.S. had not yet entered the war and still had full diplomatic relations with Nazi Germany, and "Ford-Werke" was under the control of the Ford Motor Company. The number of slave laborers grew as the war expanded, although Wallace makes it clear that companies in Germany were not required by the Nazi authorities to use slave laborers.
When Rolls-Royce sought a U.S. manufacturer as an additional source for the Merlin engine (as fitted to Spitfire and Hurricane fighters), Ford first agreed to do so and then reneged. He "lined up behind the war effort" when the U.S. entered in December 1941. His support of the American war effort, however, was problematic.
Before the U.S. entered the war, responding to President Roosevelt's call in December 1940 for the "Great Arsenal of Democracy", Ford directed the Ford Motor Company to construct a vast new purpose-built aircraft factory at Willow Run near Detroit, Michigan. Ford broke ground on Willow Run in the spring of 1941, B-24 component production began in May 1942, and the first complete B-24 came off the line in October 1942. It was the largest assembly line in the world at the time. At its peak in 1944, the Willow Run plant produced 650 B-24s per month, and by 1945 Ford was completing each B-24 in eighteen hours, with one rolling off the assembly line every 58 minutes. Ford produced 9,000 B-24s at Willow Run, half of the 18,000 total B-24s produced during the war.
When Edsel Ford died of cancer in 1943, aged only 49, Henry Ford nominally resumed control of the company, but a series of strokes in the late 1930s had left him increasingly debilitated, and his mental ability was fading. Ford was increasingly sidelined, and others made decisions in his name. The company was in fact controlled by a handful of senior executives led by Charles Sorensen, an important engineer and production executive at Ford; and Harry Bennett, the chief of Ford's Service Unit, Ford's paramilitary force that spied on, and enforced discipline upon, Ford employees. Ford grew jealous of the publicity Sorensen received and forced Sorensen out in 1944. Ford's incompetence led to discussions in Washington about how to restore the company, whether by wartime government fiat, or by instigating some sort of coup among executives and directors. Nothing happened until 1945 when, with bankruptcy a serious risk, Ford's wife Clara and Edsel's widow Eleanor confronted him and demanded he cede control of the company to his grandson Henry Ford II. They threatened to sell off their stock, which amounted to three quarters of the company's total shares, if he refused. Ford was reportedly infuriated, but had no choice but to give in. The young man took over and, as his first act of business, fired Harry Bennett.
In the early 1920s, Ford sponsored a weekly newspaper that published strongly antisemitic views. At the same time, Ford's company had a reputation as one of the few major corporations actively hiring Black workers. He also hired women and handicapped men at a time when doing so was uncommon. Part of his racist and antisemitic legacy includes the funding of square dancing in American schools because he hated jazz and associated its creation with Jewish people.
In 1918, Ford's closest aide and private secretary, Ernest G. Liebold, purchased an obscure weekly newspaper for Ford, "The Dearborn Independent". The "Independent" ran for eight years, from 1920 until 1927, with Liebold as editor. Every Ford franchise nationwide had to carry the paper and distribute it to its customers.
During this period, Ford emerged as "a respected spokesman for right-wing extremism and religious prejudice", reaching around 700,000 readers through his newspaper. The 2010 documentary film "" (written by Pulitzer Prize winner Ira Berkow) states that Ford wrote on May 22, 1920: "If fans wish to know the trouble with American baseball they have it in three words—too much Jew."
In Germany, Ford's antisemitic articles from "The Dearborn Independent" were issued in four volumes, cumulatively titled "The International Jew, the World's Foremost Problem" published by Theodor Fritsch, founder of several antisemitic parties and a member of the Reichstag. In a letter written in 1924, Heinrich Himmler described Ford as "one of our most valuable, important, and witty fighters". Ford is the only American mentioned favorably in "Mein Kampf", although he is only mentioned twice: Adolf Hitler wrote, "only a single great man, Ford, [who], to [the Jews'] fury, still maintains full independence ... [from] the controlling masters of the producers in a nation of one hundred and twenty millions." Speaking in 1931 to a "Detroit News" reporter, Hitler said he regarded Ford as his "inspiration", explaining his reason for keeping Ford's life-size portrait next to his desk. Steven Watts wrote that Hitler "revered" Ford, proclaiming that "I shall do my best to put his theories into practice in Germany", and modeling the Volkswagen, the people's car, on the Model T. Max Wallace has stated "History records that ... Adolf Hitler was an ardent Anti-Semite before he ever read Ford's "The International Jew"." Under Ford, the newspaper also reprinted the antisemitic fabricated text "The Protocols of the Elders of Zion".
On February 1, 1924, Ford received Kurt Ludecke, a representative of Hitler, at home. Ludecke was introduced to Ford by Siegfried Wagner (son of the composer Richard Wagner) and his wife Winifred, both Nazi sympathizers and antisemites. Ludecke asked Ford for a contribution to the Nazi cause, but was apparently refused.
Ford's articles were denounced by the Anti-Defamation League (ADL). While these articles explicitly condemned pogroms and violence against Jews, they blamed the Jews themselves for provoking them. According to some trial testimony, none of this work was written by Ford, but he allowed his name to be used as author. Friends and business associates have said they warned Ford about the contents of the "Independent" and that he probably never read the articles (he claimed he only read the headlines). On the other hand, court testimony in a libel suit, brought by one of the targets of the newspaper, alleged that Ford did know about the contents of the "Independent" in advance of publication.
A libel lawsuit was brought by San Francisco lawyer and Jewish farm cooperative organizer Aaron Sapiro in response to the antisemitic remarks, and led Ford to close the "Independent" in December 1927. News reports at the time quoted him as saying he was shocked by the content and unaware of its nature. During the trial, the editor of Ford's "Own Page", William Cameron, testified that Ford had nothing to do with the editorials even though they were under his byline. Cameron testified at the libel trial that he never discussed the content of the pages or sent them to Ford for his approval. Investigative journalist Max Wallace noted that "whatever credibility this absurd claim may have had was soon undermined when James M. Miller, a former "Dearborn Independent" employee, swore under oath that Ford had told him he intended to expose Sapiro."
Michael Barkun observed:
According to Spencer Blakeslee:
Wallace also found that Ford's apology was likely motivated, at least in part, by his business slumping as his antisemitism repelled potential buyers of Ford cars. Up until the apology, a considerable number of dealers, who had been required to make sure that buyers of Ford cars received the "Independent", bought up and destroyed copies of the newspaper rather than alienate customers.
Ford's 1927 apology was well received. "Four-Fifths of the hundreds of letters addressed to Ford in July 1927 were from Jews, and almost without exception they praised the industrialist." In January 1937, a Ford statement to the "Detroit Jewish Chronicle" disavowed "any connection whatsoever with the publication in Germany of a book known as the "International Jew"."
According to Pool and Pool (1978), Ford's retraction and apology (which were written by others) were not even truly signed by him (rather, his signature was forged by Harry Bennett), and Ford never privately recanted his antisemitic views, stating in 1940: "I hope to republish "The International Jew" again some time."
In July 1938, before the outbreak of war, the German consul at Cleveland gave Ford, on his 75th birthday, the award of the Grand Cross of the German Eagle, the highest medal Nazi Germany could bestow on a foreigner. James D. Mooney, vice president of overseas operations for General Motors, received a similar medal, the Merit Cross of the German Eagle, First Class.
On January 7, 1942, Ford wrote a letter to Sigmund Livingston, the founder and national chairman of the Anti-Defamation League. The purpose of the letter was to dispel the misconception that he subscribed to or supported, directly or indirectly, "any agitation which would promote antagonism toward my Jewish fellow citizens." He concluded the letter with "My sincere hope that now in this country and throughout the world when the war is finished, hatred of the Jews and hatred against any other racial or religious groups shall cease for all time."
Distribution of "The International Jew" was halted in 1942 through legal action by Ford, despite complications from a lack of copyright. It is still banned in Germany. Extremist groups often recycle the material; it still appears on antisemitic and neo-Nazi websites.
Testifying at Nuremberg, convicted Hitler Youth leader Baldur von Schirach who, in his role as military governor of Vienna, deported 65,000 Jews to camps in Poland, stated:
Robert Lacey wrote in "Ford: The Men and the Machines" that a close Willow Run associate of Ford reported that when he was shown newsreel footage of the Nazi concentration camps, he "was confronted with the atrocities which finally and unanswerably laid bare the bestiality of the prejudice to which he contributed, he collapsed with a stroke – his last and most serious."
Human geography
Human geography or anthropogeography is the branch of geography that deals with humans and their communities, cultures, economies, and interactions with the environment by studying their relations with and across locations. It analyzes patterns of human social interaction, their interactions with the environment, and their spatial interdependencies by application of qualitative and quantitative research methods.
Geography was not recognized as a formal academic discipline until the 18th century, although many scholars had undertaken geographical scholarship for much longer, particularly through cartography.
The Royal Geographical Society was founded in England in 1830, although the United Kingdom did not get its first full Chair of geography until 1917. The first major geographical intellect to emerge in the United Kingdom was Halford John Mackinder, appointed reader at Oxford University in 1887.
The National Geographic Society was founded in the United States in 1888 and began publication of the "National Geographic" magazine which became, and continues to be, a great popularizer of geographic information. The society has long supported geographic research and education on geographical topics.
The Association of American Geographers was founded in 1904 and was renamed the American Association of Geographers in 2016 to better reflect the increasingly international character of its membership.
One of the first examples of geographic methods being used for purposes other than to describe and theorize the physical properties of the earth is John Snow's map of the 1854 Broad Street cholera outbreak. Though Snow was primarily a physician and a pioneer of epidemiology rather than a geographer, his map is probably one of the earliest examples of health geography.
The now fairly distinct differences between the subfields of physical and human geography developed at a later date. The connection between the physical and human properties of geography was most apparent in the theory of environmental determinism, made popular in the 19th century by Carl Ritter and others, which had close links to the evolutionary biology of the time. Environmental determinism is the theory that people's physical, mental and moral habits are directly due to the influence of their natural environment. By the mid-19th century, however, environmental determinism was under attack for lacking the methodological rigor associated with modern science, and later as a means to justify racism and imperialism.
A similar concern with both human and physical aspects is apparent in the regional geography that dominated the later 19th and first half of the 20th centuries. The goal of regional geography, through a process known as regionalisation, was to delineate space into regions and then understand and describe the unique characteristics of each region through both its human and physical aspects. With links to possibilism and cultural ecology, some of the same notions of the causal effect of the environment on society and culture carried over from environmental determinism.
By the 1960s, however, the quantitative revolution led to strong criticism of regional geography. Due to a perceived lack of scientific rigor in the overly descriptive nature of the discipline, and a continued separation of geography from its two subfields of physical and human geography and from geology, geographers in the mid-20th century began to apply statistical and mathematical models in order to solve spatial problems. Much of the development during the quantitative revolution is now apparent in the use of geographic information systems; the use of statistics, spatial modeling, and positivist approaches remains important to many branches of human geography. Well-known geographers from this period are Fred K. Schaefer, Waldo Tobler, William Garrison, Peter Haggett, Richard J. Chorley, William Bunge, and Torsten Hägerstrand.
From the 1970s, a number of critiques of the positivism now associated with geography emerged. Known under the term 'critical geography,' these critiques signaled another turning point in the discipline. Behavioral geography emerged for some time as a means to understand how people perceived spaces and places and made locational decisions. The more influential 'radical geography' emerged in the 1970s and 1980s. It draws heavily on Marxist theory and techniques, and is associated with geographers such as David Harvey and Richard Peet. Radical geographers seek to say meaningful things about problems recognized through quantitative methods, provide explanations rather than descriptions, put forward alternatives and solutions, and be politically engaged, rather than using the detachment associated with positivists. (The detachment and objectivity of the quantitative revolution was itself critiqued by radical geographers as being a tool of capital.) Radical geography and the links to Marxism and related theories remain an important part of contemporary human geography (see: "Antipode"). Critical geography also saw the introduction of 'humanistic geography', associated with the work of Yi-Fu Tuan, which pushed for a much more qualitative approach in methodology.
The changes under critical geography have led to contemporary approaches in the discipline such as feminist geography, new cultural geography, "demonic" geographies, and the engagement with postmodern and post-structural theories and philosophies.
The primary fields of study in human geography focus around the core fields of:
Cultural geography is the study of cultural products and norms - their variation across spaces and places, as well as their relations. It focuses on describing and analyzing the ways language, religion, economy, government, and other cultural phenomena vary or remain constant from one place to another and on explaining how humans function spatially.
Development geography is the study of the Earth's geography with reference to the standard of living and the quality of life of its human inhabitants. The subject matter investigated is strongly influenced by the researcher's methodological approach.
Economic geography examines the location, distribution and spatial organization of economic activities across the Earth, and the relationships between human economic systems, states, other factors, and the biophysical environment.
Medical or health geography is the application of geographical information, perspectives, and methods to the study of health, disease, and health care. Health geography deals with the spatial relations and patterns between people and the environment. This is a sub-discipline of human geography, researching how and why diseases are spread.
Historical geography is the study of the human, physical, fictional, theoretical, and "real" geographies of the past. Historical geography studies a wide variety of issues and topics. A common theme is the study of the geographies of the past and how a place or region changes through time. Many historical geographers study geographical patterns through time, including how people have interacted with their environment, and created the cultural landscape.
Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures.
Population geography is the study of ways in which spatial variations in the distribution, composition, migration, and growth of populations are related to their environment or location.
Settlement geography, including urban geography, is the study of urban and rural areas with specific regard to the spatial, relational and theoretical aspects of settlement. That is, it is the study of areas that have a concentration of buildings and infrastructure, where the majority of economic activities are in the secondary and tertiary sectors. Urban settlements typically have a high population density.
Urban geography is the study of cities, towns, and other areas of relatively dense settlement. Two main interests are site (how a settlement is positioned relative to the physical environment) and situation (how a settlement is positioned relative to other settlements). Another area of interest is the internal organization of urban areas with regard to different demographic groups and the layout of infrastructure. This subdiscipline also draws on ideas from other branches of Human Geography to see their involvement in the processes and patterns evident in an urban area.
Within each of the subfields, various philosophical approaches can be used in research; therefore, an urban geographer could be a Feminist or Marxist geographer, etc.
Such approaches are:
As with all social sciences, human geographers publish research and other written work in a variety of academic journals. Whilst human geography is interdisciplinary, there are a number of journals that focus on human geography.
These include:
Haiti
Haiti, officially the Republic of Haiti and formerly known as Hayti, is a country located on the island of Hispaniola in the Greater Antilles archipelago of the Caribbean Sea, to the east of Cuba and Jamaica and south of The Bahamas and the Turks and Caicos Islands. It occupies the western three-eighths of the island, which it shares with the Dominican Republic. To its south-west lies the small Navassa Island, which is claimed by Haiti but is disputed as a United States territory under federal administration. Haiti is the most populous country in the Caribbean Community (CARICOM) and the second-most populous country in the Caribbean after Cuba.
The island was originally inhabited by the indigenous Taíno people, who migrated from South America. The first Europeans arrived on 5 December 1492 during the first voyage of Christopher Columbus, who initially believed he had found India or China. Columbus subsequently founded the first European settlement in the Americas, La Navidad, on what is now the northeastern coast of Haiti. The island was claimed by Spain and named "La Española," forming part of the Spanish Empire until the early 17th century. However, competing claims and settlements by the French led to the western portion of the island being ceded to France in 1697, which was subsequently named "Saint-Domingue". French colonists established lucrative sugarcane plantations, worked by vast numbers of slaves brought from Africa, which made the colony one of the richest in the world.
In the midst of the French Revolution (1789–99), slaves and free people of color launched the Haitian Revolution (1791–1804), led by a former slave and the first black general of the French Army, Toussaint Louverture. After 12 years of conflict, Napoleon Bonaparte's forces were defeated by Louverture's successor, Jean-Jacques Dessalines (later Emperor Jacques I), who declared Haiti's sovereignty on 1 January 1804—the first independent nation of Latin America and the Caribbean, the second republic in the Americas, the first country to abolish slavery, and the only state in history established by a successful slave revolt. Apart from Alexandre Pétion, the first President of the Republic, all of Haiti's first leaders were former slaves. After a brief period in which the country was split in two, President Jean-Pierre Boyer united the country and then attempted to bring the whole of Hispaniola under Haitian control, precipitating a long series of wars that ended in the 1870s when Haiti formally recognized the independence of the Dominican Republic. Haiti's first century of independence was characterised by political instability, ostracism by the international community and the payment of a crippling debt to France. Political volatility and foreign economic influence in the country prompted the United States to occupy the country from 1915 to 1934. Following a series of short-lived presidencies, François 'Papa Doc' Duvalier took power in 1956, ushering in a long period of autocratic rule continued by his son Jean-Claude 'Baby Doc' Duvalier until 1986; the period was characterised by state-sanctioned violence against the opposition and civilians, corruption and economic stagnation. Since 1986 Haiti has been attempting to establish a more democratic political system.
Haiti is a founding member of the United Nations, Organization of American States (OAS), Association of Caribbean States, and the International Francophonie Organisation. In addition to CARICOM, it is a member of the International Monetary Fund, World Trade Organization, and the Community of Latin American and Caribbean States. Historically poor and politically unstable, Haiti has the lowest Human Development Index in the Americas. Since the turn of the 21st century, the country has endured a "coup d'état," which prompted a U.N. intervention, as well as a deadly earthquake that killed over 250,000.
The name Haiti (or "Hayti") comes from the indigenous Taíno language which was the native name given to the entire island of Hispaniola to mean, "land of high mountains." The "h" is silent in French and the "ï" in "Haïti" has a diacritical mark used to show that the second vowel is pronounced separately, as in the word "naïve". In English, this rule for the pronunciation is often disregarded, thus the spelling "Haiti" is used. There are different anglicizations for its pronunciation such as "HIGH-ti", "high-EE-ti" and "haa-EE-ti", which are still in use, but "HAY-ti" is the most widespread and best-established. The name was restored by Haitian revolutionary Jean-Jacques Dessalines as the official name of independent Saint-Domingue, as a tribute to the Amerindian predecessors.
In French, Haiti's nickname is the "Pearl of the Antilles" ("La Perle des Antilles") because of both its natural beauty, and the amount of wealth it accumulated for the Kingdom of France; during the 18th century the colony was the world's leading producer of sugar and coffee.
The island of Hispaniola, of which Haiti occupies the western three-eighths, has been inhabited since about 5000 BC by groups of Native Americans thought to have arrived from Central or South America. Genetic studies show that some of these groups were related to the Yanomami of the Amazon Basin. Amongst these early settlers were the Ciboney peoples, followed by the Taíno, speakers of an Arawakan language, elements of which have been preserved in Haitian Creole. The Taíno name for the entire island was "Haiti", or alternatively "Quisqueya".
In Taíno society the largest unit of political organisation was led by a "cacique," or chief, as the Europeans understood them. The island of Hispaniola was divided among five 'caciquedoms': the Magua in the north east, the Marien in the north west, the Jaragua in the south west, the Maguana in the central regions of Cibao, and the Higüey in the south east.
Taíno cultural artifacts include cave paintings in several locations in the country. These have become national symbols of Haiti and tourist attractions. Modern-day Léogâne, started as a French colonial town in the southwest, is beside the former capital of the caciquedom of "Xaragua."
Navigator Christopher Columbus landed in Haiti on 6 December 1492, in an area that he named "Môle-Saint-Nicolas," and claimed the island for the Crown of Castile. Nineteen days later, his ship the "Santa María" ran aground near the present site of Cap-Haïtien. Columbus left 39 men on the island, who founded the settlement of La Navidad on 25 December 1492. Relations with the native peoples, initially good, broke down and the settlers were later killed by the Taíno.
The sailors carried endemic Eurasian infectious diseases to which the native peoples lacked immunity, causing them to die in great numbers in epidemics. The first recorded smallpox epidemic in the Americas erupted on Hispaniola in 1507. Their numbers were further reduced by the harshness of the "" system, in which the Spanish forced natives to work in gold mines and plantations.
The Spanish passed the Laws of Burgos, 1512–13, which forbade the maltreatment of natives, endorsed their conversion to Catholicism, and gave legal framework to "." The natives were brought to these sites to work in specific plantations or industries.
As the Spanish re-focused their colonization efforts on the greater riches of mainland Central and South America, Hispaniola was reduced largely to a trading and refuelling post. As a result, piracy became widespread, encouraged by European powers hostile to Spain such as France (based on Île de la Tortue) and England. The Spanish largely abandoned the western third of the island, focusing their colonization effort on the eastern two-thirds. The western part of the island was thus gradually settled by French buccaneers; among them was Bertrand d'Ogeron, who succeeded in growing tobacco and recruited many French colonial families from Martinique and Guadeloupe. In 1697 France and Spain settled their hostilities on the island by way of the Treaty of Ryswick, which divided Hispaniola between them.
France received the western third and subsequently named it Saint-Domingue, the French equivalent of "Santo Domingo", the Spanish colony on Hispaniola. The French set about creating sugar and coffee plantations, worked by vast numbers of slaves imported from Africa, and Saint-Domingue grew to become their richest colonial possession.
The French settlers were outnumbered by slaves by almost 10 to 1. According to the 1788 Census, Haiti's population consisted of nearly 25,000 Europeans, 22,000 free coloreds and 700,000 African slaves. In contrast, by 1763 the white population of French Canada, a far larger territory, had numbered only 65,000. In the north of the island, slaves were able to retain many ties to African cultures, religion and language; these ties were continually being renewed by newly imported Africans. Some West African slaves held on to their traditional Vodou beliefs by secretly syncretizing it with Catholicism.
The French enacted the "Code Noir" ("Black Code"), prepared by Jean-Baptiste Colbert and ratified by Louis XIV, which established rules on slave treatment and permissible freedoms. Saint-Domingue has been described as one of the most brutally efficient slave colonies; one-third of newly imported Africans died within a few years. Many slaves died from diseases such as smallpox and typhoid fever. They had low birth rates, and there is evidence that some women aborted fetuses rather than give birth to children within the bonds of slavery. The colony's environment also suffered, as forests were cleared to make way for plantations and the land was overworked so as to extract maximum profit for French plantation owners.
As in its Louisiana colony, the French colonial government allowed some rights to free people of color ("gens de couleur"), the mixed-race descendants of European male colonists and African female slaves (and later, mixed-race women). Over time, many were released from slavery and they established a separate social class. White French Creole fathers frequently sent their mixed-race sons to France for their education. Some men of color were admitted into the military. More of the free people of color lived in the south of the island, near Port-au-Prince, and many intermarried within their community. They frequently worked as artisans and tradesmen, and began to own some property, including slaves of their own. The free people of color petitioned the colonial government to expand their rights.
The brutality of slave life led many slaves to escape to mountainous regions, where they set up their own autonomous communities and became known as Maroons. One Maroon leader, François Mackandal, led a rebellion in the 1750s; however, he was later captured and executed by the French.
Inspired by the French Revolution of 1789 and principles of the rights of man, the French settlers and free people of color pressed for greater political freedom and more civil rights. Tensions between these two groups led to conflict, as a militia of free-coloreds was set up in 1790 by Vincent Ogé, resulting in his capture, torture and execution. Sensing an opportunity, in August 1791 the first slave armies were established in northern Haiti under the leadership of Toussaint Louverture inspired by the Vodou "houngan" (priest) Boukman, and backed by the Spanish in Santo Domingo – soon a full-blown slave rebellion had broken out across the entire colony.
In 1792, the French government sent three commissioners with troops to re-establish control; to build an alliance with the "gens de couleur" and slaves, commissioners Léger-Félicité Sonthonax and Étienne Polverel abolished slavery in the colony. Six months later, the National Convention, led by Maximilien de Robespierre and the Jacobins, endorsed abolition and extended it to all the French colonies.
Political leaders in the United States, which was a new republic itself, reacted with ambivalence, at times providing aid to enable planters to put down the revolt. Later in the revolution, the US provided support to native Haitian military forces, with the goal of reducing French influence in North America and the Caribbean.
With slavery abolished, Toussaint Louverture pledged allegiance to France, and he fought off the British and Spanish forces who had taken advantage of the situation and invaded Saint-Domingue. The Spanish were later forced to cede their part of the island to France under the terms of the Peace of Basel in 1795, uniting the island under one government. However an insurgency against French rule broke out in the east, and in the west there was fighting between Louverture's forces and the free people of color led by André Rigaud in the War of the Knives (1799–1800). Many surviving free people of color left the island as refugees.
After Louverture created a separatist constitution and proclaimed himself governor-general for life, Napoléon Bonaparte in 1802 sent an expedition of 20,000 soldiers and as many sailors under the command of his brother-in-law, Charles Leclerc, to reassert French control. The French achieved some victories, but within a few months most of their army had died from yellow fever. Ultimately more than 50,000 French troops died in an attempt to retake the colony, including 18 generals. The French managed to capture Louverture, transporting him to France for trial. He was imprisoned at Fort de Joux, where he died in 1803 of exposure and possibly tuberculosis.
The slaves, along with free "gens de couleur" and allies, continued their fight for independence, led by generals Jean-Jacques Dessalines, Alexandre Pétion and Henry Christophe. Under the overall command of Dessalines, the Haitian armies avoided open battle and instead conducted a successful guerrilla campaign against the Napoleonic forces, aided by diseases such as yellow fever that reduced the numbers of French soldiers. The rebels finally managed to decisively defeat the French troops at the Battle of Vertières on 18 November 1803, making Haiti the first nation ever to successfully gain independence through a slave revolt. Later that year France withdrew its remaining 7,000 troops from the island and Napoleon gave up his idea of re-establishing a North American empire, selling Louisiana (New France) to the United States in the Louisiana Purchase. It has been estimated that between 24,000 and 100,000 Europeans, and between 100,000 and 350,000 Haitian ex-slaves, died in the revolution. In the process, Dessalines became arguably the most successful military commander in the struggle against Napoleonic France.
The independence of Saint-Domingue was proclaimed under the native name 'Haiti' by Dessalines on 1 January 1804 in Gonaïves, and he was proclaimed "Emperor for Life" as Emperor Jacques I by his troops. Dessalines at first offered protection to the white planters and others. However, once in power, he ordered the massacre of nearly all white men, women, and children; between January and April 1804, 3,000 to 5,000 whites were killed, including those who had been friendly and sympathetic to the black population. Only three categories of white people were singled out as exceptions and spared: Polish soldiers, the majority of whom had deserted from the French army and fought alongside the Haitian rebels; the small group of German colonists invited to the north-west region; and a group of medical doctors and professionals. Reportedly, people with connections to officers in the Haitian army were also spared, as well as the women who agreed to marry non-white men.
Fearful of the potential impact the slave rebellion could have in the slave states, U.S. President Thomas Jefferson refused to recognize the new republic. The Southern politicians who were a powerful voting bloc in the American Congress prevented U.S. recognition for decades, until they withdrew in 1861 to form the Confederacy.
The revolution led to a wave of emigration. In 1809, 9,000 refugees from Saint-Domingue, both white planters and people of color, settled "en masse" in New Orleans, doubling the city's population, having been expelled from their initial refuge in Cuba by Spanish authorities. In addition, the newly arrived slaves added to the city's African population.
The plantation system was reestablished in Haiti, albeit for wages; however, many Haitians were marginalized and resented the heavy-handed manner in which it was enforced in the new nation's politics. The rebel movement splintered, and Dessalines was assassinated by rivals on 17 October 1806.
After Dessalines' death Haiti became split into two, with the Kingdom of Haiti in the north directed by Henri Christophe, later declaring himself Henri I, and a republic in the south centred on Port-au-Prince, directed by Alexandre Pétion, an "homme de couleur". Christophe established a semi-feudal corvée system, with a rigid education and economic code. Pétion's republic was less absolutist, and he initiated a series of land reforms which benefited the peasant class. President Pétion also gave military and financial assistance to the revolutionary leader Simón Bolívar, which were critical in enabling him to liberate the Viceroyalty of New Granada. Meanwhile the French, who had managed to maintain a precarious control of eastern Hispaniola, were defeated by insurgents led by Juan Sánchez Ramírez, with the area returning to Spanish rule in 1809 following the Battle of Palo Hincado.
Beginning in 1821, President Jean-Pierre Boyer, also an "homme de couleur" and successor to Pétion, reunified the island following the suicide of Henry Christophe. After Santo Domingo declared its independence from Spain on 30 November 1821, Boyer invaded, seeking to unite the entire island by force and ending slavery in Santo Domingo.
Struggling to revive the agricultural economy to produce commodity crops, Boyer passed the Code Rural, which denied peasant laborers the right to leave the land, enter the towns, or start farms or shops of their own, causing much resentment as most peasants wished to have their own farms rather than work on plantations.
The American Colonization Society (ACS) encouraged free blacks in the United States to emigrate to Haiti. Starting in September 1824, more than 6,000 African Americans migrated to Haiti, with transportation paid by the ACS. Many found the conditions too harsh and returned to the United States.
In July 1825, King Charles X of France, during a period of restoration of the French monarchy, sent a fleet to reconquer the island. Under pressure, President Boyer agreed to a treaty by which France formally recognized the independence of the nation in exchange for a payment of 150 million francs. By an order of 17 April 1826, the King of France renounced his rights of sovereignty and formally recognized the independence of Haiti. The enforced payments to France hampered Haiti's economic growth for years, exacerbated by the fact that many Western nations continued to refuse formal diplomatic recognition to Haiti; Britain recognized Haitian independence in 1833, and the United States not until 1862. Haiti borrowed heavily from Western banks at extremely high interest rates to repay the debt. Although the amount of the reparations was reduced to 90 million in 1838, by 1900 80% of the country's gross domestic product was being spent on debt repayment and the country did not finish repaying it until 1947.
After losing the support of Haiti's elite, Boyer was ousted in 1843, with Charles Rivière-Hérard replacing him as president. Nationalist Dominican forces in eastern Hispaniola led by Juan Pablo Duarte seized control of Santo Domingo on 27 February 1844. The Haitian forces, unprepared for a significant uprising, capitulated to the rebels, effectively ending Haitian rule of eastern Hispaniola. In March Rivière-Hérard attempted to reimpose his authority, but the Dominicans put up stiff opposition and inflicted heavy losses. Rivière-Hérard was removed from office by the mulatto hierarchy and replaced with the aged general Philippe Guerrier, who assumed the presidency on 3 May 1844.
Guerrier died in April 1845, and was succeeded by General Jean-Louis Pierrot. Pierrot's most pressing duty as the new president was to check the incursions of the Dominicans, who were harassing the Haitian troops. Dominican gunboats were also making depredations on Haiti's coasts. President Pierrot decided to open a campaign against the Dominicans, whom he considered merely insurgents; however, the Haitian offensive of 1845 was stopped on the frontier.
On 1 January 1846 Pierrot announced a fresh campaign to re-impose Haitian suzerainty over eastern Hispaniola, but his officers and men greeted this fresh summons with contempt. Thus, a month later – in February 1846 – when Pierrot ordered his troops to march against the Dominicans, the Haitian army mutinied, and its soldiers proclaimed his overthrow as president of the republic. With the war against the Dominicans having become very unpopular in Haiti, it was beyond the power of the new president, General Jean-Baptiste Riché, to stage another invasion.
On 27 February 1847, President Riché died after only a year in power and was replaced by an obscure officer, General Faustin Soulouque. During the first two years of Soulouque's administration the conspiracies and opposition he faced in retaining power were so manifold that the Dominicans were given a further breathing space in which to consolidate their independence. But, when in 1848 France finally recognized the Dominican Republic as a free and independent state and provisionally signed a treaty of peace, friendship, commerce and navigation, Haiti immediately protested, claiming the treaty was an attack upon their own security. Soulouque decided to invade the new Republic before the French Government could ratify the treaty.
On 21 March 1849, Haitian soldiers attacked the Dominican garrison at Las Matas. The demoralized defenders offered almost no resistance before abandoning their weapons. Soulouque pressed on, capturing San Juan. This left only the town of Azua as the remaining Dominican stronghold between the Haitian army and the capital. On 6 April, Azua fell to the 18,000-strong Haitian army, with a 5,000-man Dominican counterattack failing to oust them. The way to Santo Domingo was now clear. But the news of discontent existing at Port-au-Prince, which reached Soulouque, arrested his further progress and caused him to return with the army to his capital.
Emboldened by the sudden retreat of the Haitian army, the Dominicans counter-attacked. Their flotilla went as far as Dame-Marie, which they plundered and set on fire. Soulouque, now self-proclaimed as Emperor Faustin I, decided to start a new campaign against them. In 1855, he again invaded the territory of the Dominican Republic. But owing to insufficient preparation, the army was soon in want of victuals and ammunition. In spite of the bravery of the soldiers, the Emperor had once more to give up the idea of a unified island under Haitian control. After this campaign, Britain and France interfered and obtained an armistice on behalf of the Dominicans, who declared independence as the Dominican Republic.
The sufferings endured by the soldiers during the campaign of 1855, and the losses and sacrifices inflicted on the country without yielding any compensation or any practical results provoked great discontent. In 1858 a revolution began, led by General Fabre Geffrard, Duke of Tabara. In December of that year, Geffrard defeated the Imperial Army and seized control of most of the country. As a result, the Emperor abdicated his throne on 15 January 1859. Refused aid by the French Legation, Faustin was taken into exile aboard a British warship on 22 January 1859, and General Geffrard succeeded him as President.
The period following Soulouque's overthrow down to the turn of the century was a turbulent one for Haiti, with repeated bouts of political instability. President Geffrard was overthrown in a coup in 1867, as was his successor, Sylvain Salnave, in 1869. Under the Presidency of Michel Domingue (1874–76) relations with the Dominican Republic were dramatically improved by the signing of a treaty, in which both parties acknowledged the independence of the other, bringing an end to Haitian dreams of bringing the entirety of Hispaniola under their control. Some modernisation of the economy and infrastructure also occurred in this period, especially under the Presidencies of Lysius Salomon (1879–88) and Florvil Hyppolite (1889–96).
Haiti's relations with outside powers were often strained. In 1889 the United States attempted to force Haiti to permit the building of a naval base at Môle Saint-Nicolas, which was firmly resisted by President Hyppolite. In 1892 the German government supported suppression of the reform movement of Anténor Firmin, and in 1897, the Germans used gunboat diplomacy to intimidate and then humiliate the Haitian government of President Tirésias Simon Sam (1896–1902) during the Lüders Affair.
In the first decades of the 20th century, Haiti experienced great political instability and was heavily in debt to France, Germany and the United States. A series of short-lived presidencies came and went: President Pierre Nord Alexis was forced from power in 1908, as was his successor François C. Antoine Simon in 1911; President Cincinnatus Leconte (1911–12) was killed in a (possibly deliberate) explosion at the National Palace; Michel Oreste (1913–14) was ousted in a coup, as was his successor Oreste Zamor in 1914.
Germany increased its influence in Haiti in this period, with a small community of German settlers wielding disproportionate influence in Haiti's economy. The German influence prompted anxieties in the United States, which had also invested heavily in the country, and whose government defended its right to oppose foreign interference in the Americas under the Monroe Doctrine. In December 1914, the Americans removed $500,000 from the Haitian National Bank; rather than seizing it to help pay the debt, they removed it for safe-keeping in New York, thus giving the United States control of the bank and preventing other powers from gaining it. This gave the economy a stable financial base on which to build, enabling the debt to be repaid.
In 1915, Haiti's new President Vilbrun Guillaume Sam sought to strengthen his tenuous rule by a mass execution of 167 political prisoners. Outrage at the killings led to riots, and Sam was captured and killed by a lynch mob. Fearing possible foreign intervention, or the emergence of a new government led by the anti-American Haitian politician Rosalvo Bobo, President Woodrow Wilson sent U.S. Marines into Haiti in July 1915. U.S. naval forces under Rear Admiral Caperton arrived in Port-au-Prince in an attempt to restore order and protect U.S. interests. Within days, the Marines had taken control of the capital city and its banks and customs house. The Marines declared martial law and severely censored the press. Within weeks, a new pro-U.S. Haitian president, Philippe Sudré Dartiguenave, was installed and a new constitution written that was favorable to the interests of the United States. The constitution (written by future US President Franklin D. Roosevelt) included a clause that allowed, for the first time, foreign ownership of land in Haiti, which was bitterly opposed by the Haitian legislature and citizenry.
The occupation greatly improved some of Haiti's infrastructure and centralized power in Port-au-Prince. Infrastructure improvements were particularly impressive: 1700 km of roads were made usable, 189 bridges were built, many irrigation canals were rehabilitated, hospitals, schools, and public buildings were constructed, and drinking water was brought to the main cities. Port-au-Prince became the first Caribbean city to have a phone service with automatic dialling. Agricultural education was organized, with a central school of agriculture and 69 farms in the country. However, many infrastructure projects were built using the corvée system, which allowed the government and occupying forces to take people from their homes and farms, at gunpoint if necessary, to build roads, bridges, etc., a process that was deeply resented by ordinary Haitians. Sisal was also introduced to Haiti, and sugarcane and cotton became significant exports, boosting prosperity. Haitian traditionalists, based in rural areas, were highly resistant to U.S.-backed changes, while the urban elites, typically mixed-race, welcomed the growing economy, but wanted more political control. Together they helped secure an end to the occupation in 1934, under the Presidency of Sténio Vincent (1930–41). The debts were still outstanding, though less due to increased prosperity, and the U.S. financial advisor-general receiver handled the budget until 1941.
The U.S. Marines were instilled with a special brand of paternalism towards Haitians "expressed in the metaphor of a father's relationship with his children." Armed opposition to the US presence was led by the cacos under the command of Charlemagne Péralte; his capture and execution in 1919 earned him the status of a national martyr. During Senate hearings in 1921, the commandant of the Marine Corps reported that, in the 20 months of active unrest, 2,250 Haitians had been killed. However, in a report to the Secretary of the Navy, he reported the death toll as being 3,250. Haitian historians have claimed the true number was much higher. One went so far as to say, "the total number of battle victims and casualties of repression and consequences of the war might have reached, by the end of the pacification period, four or five times that – somewhere in the neighborhood of 15,000 persons." This is not supported by most historians outside Haiti.
Recognition of the distinctive traditionalism of the Haitian people had an influence on American writers, including Eugene O'Neill, James Weldon Johnson, Langston Hughes, Zora Neale Hurston and Orson Welles.
After US forces left in 1934, Dominican dictator Rafael Trujillo used anti-Haitian sentiment as a nationalist tool. In an event that became known as the Parsley Massacre, he ordered his army to kill Haitians living on the Dominican side of the border. Few bullets were used – instead, 20,000–30,000 Haitians were bludgeoned and bayonetted, then herded into the sea, where sharks finished what Trujillo had begun. Congressman Hamilton Fish, ranking member of the House Foreign Affairs Committee, called the Parsley Massacre "the most outrageous atrocity that has ever been perpetrated on the American continent."
President Vincent became increasingly dictatorial, and resigned under US pressure in 1941, being replaced by Élie Lescot (1941–46). In 1941, during the Second World War, Lescot declared war on Japan (8 December), Germany (12 December), Italy (12 December), Bulgaria (24 December), Hungary (24 December) and Romania (24 December). Out of these six Axis countries, only Romania reciprocated, declaring war on Haiti on the same day (24 December 1941). On 27 September 1945, Haiti became a founding member of the United Nations (the successor to the League of Nations, of which Haiti was also a founding member).
In 1946 Lescot was overthrown by the military, with Dumarsais Estimé later becoming the new president (1946–50). He sought to improve the economy and education, and to boost the role of black Haitians, however as he sought to consolidate his rule he too was overthrown in a coup led by Paul Magloire, who replaced him as president (1950–56). Firmly anti-Communist, he was supported by the United States; with greater political stability tourists started to visit Haiti. The waterfront area of Port-au-Prince was redeveloped to allow cruise ship passengers to walk from the docks to cultural attractions. Celebrities such as Truman Capote and Noël Coward visited Haiti; the era is captured in Graham Greene's 1966 novel "The Comedians".
In 1956–57 Haiti underwent severe political turmoil; Magloire was forced to resign and leave the country in 1956 and he was followed by four short-lived presidencies. In the September 1957 election Dr. François Duvalier was elected President of Haiti. Known as 'Papa Doc' and initially popular, Duvalier remained President until his death in 1971. He advanced black interests in the public sector, where over time, people of color had predominated as the educated urban elite. Not trusting the army, despite his frequent purges of officers deemed disloyal, Duvalier created a private militia known as "Tontons Macoutes" ("Bogeymen"), which maintained order by terrorizing the populace and political opponents. In 1964 Duvalier proclaimed himself 'President for Life'; an uprising against his rule that year in Jérémie was violently suppressed, with the ringleaders publicly executed and hundreds of mixed-raced citizens in the town killed. The bulk of the educated and professional class began leaving the country, and corruption became widespread. Duvalier sought to create a personality cult, identifying himself with Baron Samedi, one of the "loa", or spirits, of Haitian Vodou. Despite the well-publicized abuses under his rule, Duvalier's firm anti-Communism earned him the support of the Americans, who supplied the country with aid.
In 1971 Duvalier died, and he was succeeded by his son Jean-Claude Duvalier, nicknamed 'Baby Doc', who ruled until 1986. He largely continued his father's policies, though curbed some of the worst excesses in order to court international respectability. Tourism, which had nosedived in Papa Doc's time, again became a growing industry. However as the economy continued to decline Baby Doc's grip on power began to weaken. Haiti's pig population was slaughtered following an outbreak of swine fever in the late 1970s, causing hardship to rural communities who used them as an investment. The opposition became more vocal, bolstered by a visit to the country by Pope John Paul II in 1983, who publicly lambasted the president. Demonstrations occurred in Gonaïves in 1985 which then spread across the country; under pressure from the United States, Duvalier left the country for France in February 1986.
In total, roughly 40,000 to 60,000 Haitians are estimated to have been killed during the reign of the Duvaliers. Driven out by the regime's intimidation tactics and executions, many of Haiti's intellectuals fled, leaving the country with a massive brain drain from which it has yet to recover.
Following Duvalier's departure, army leader General Henri Namphy headed a new National Governing Council. Elections scheduled for November 1987 were aborted after dozens of inhabitants were shot in the capital by soldiers and "Tontons Macoutes". Fraudulent elections followed in 1988, in which only 4% of the citizenry voted. The newly elected President, Leslie Manigat, was then overthrown some months later in the June 1988 Haitian coup d'état. Another coup followed in September 1988, after the St. Jean Bosco massacre in which 13–50 people (estimates vary) attending a mass led by prominent government critic and Catholic priest Jean-Bertrand Aristide were killed. General Prosper Avril subsequently led a military regime until March 1990.
In December 1990 Jean-Bertrand Aristide was elected President in the Haitian general election. However his ambitious reformist agenda worried the elites, and in September of the following year he was overthrown by the military, led by Raoul Cédras, in the 1991 Haitian coup d'état. Amidst the continuing turmoil many Haitians attempted to flee the country.
In September 1994, the United States negotiated the departure of Haiti's military leaders and the peaceful entry of 20,000 US troops under Operation Uphold Democracy. This enabled the restoration of the democratically elected Jean-Bertrand Aristide as president, who returned to Haiti in October to complete his term. As part of the deal Aristide had to implement free market reforms in an attempt to improve the Haitian economy, with mixed results, some sources stating that these reforms had a negative impact on native Haitian industry. In November 1994, Hurricane Gordon brushed Haiti, dumping heavy rain and creating flash flooding that triggered mudslides. Gordon killed an estimated 1,122 people, although some estimates go as high as 2,200.
Elections were held in 1995 which were won by René Préval, gaining 88% of the popular vote, albeit on a low turnout. Aristide subsequently formed his own party, Fanmi Lavalas, and political deadlock ensued; the November 2000 election returned Aristide to the presidency with 92% of the vote. The election had been boycotted by the opposition, then organized into the Convergence Démocratique, over a dispute in the May legislative elections. In subsequent years, there was increasing violence between rival political factions and human rights abuses. Aristide spent years negotiating with the Convergence Démocratique on new elections, but the Convergence's inability to develop a sufficient electoral base made elections unattractive.
In 2004 an anti-Aristide revolt began in northern Haiti. The rebellion eventually reached the capital, and Aristide was forced into exile. The precise nature of the events is disputed; some, including Aristide and his bodyguard, Franz Gabriel, stated that he was the victim of a "new coup d'état or modern kidnapping" by U.S. forces. Mrs. Aristide stated that the kidnappers wore U.S. Special Forces uniforms, but changed into civilian clothes upon boarding the aircraft that was used to remove Aristide from Haiti. These charges were denied by the US government. As political violence and crime continued to grow, a United Nations Stabilisation Mission (MINUSTAH) was brought in to maintain order. However, MINUSTAH proved controversial, as their at times heavy-handed approach to maintaining law and order and several instances of abuses, including the alleged sexual abuse of civilians, provoked resentment and distrust amongst ordinary Haitians. Boniface Alexandre assumed interim authority until 2006, when René Préval was re-elected President following elections.
Amidst the continuing political chaos, a series of natural disasters hit Haiti. In 2004 Tropical Storm Jeanne skimmed the north coast, leaving 3,006 people dead in flooding and mudslides, mostly in the city of Gonaïves. In 2008 Haiti was again struck by tropical storms; Tropical Storm Fay, Hurricane Gustav, Hurricane Hanna and Hurricane Ike all produced heavy winds and rain, resulting in 331 deaths and about 800,000 people in need of humanitarian aid. The state of affairs produced by these storms was intensified by already high food and fuel prices that had caused a food crisis and political unrest in April 2008.
On 12 January 2010, at 4:53pm local time, Haiti was struck by a magnitude-7.0 earthquake. This was the country's most severe earthquake in over 200 years. The earthquake was reported to have left between 220,000 and 300,000 people dead and up to 1.6 million homeless.
The situation was exacerbated by a subsequent massive cholera outbreak that was triggered when cholera-infected waste from a United Nations peacekeeping station contaminated the country's main river, the Artibonite. In 2017, it was reported that roughly 10,000 Haitians had died and nearly a million had been made ill. After years of denial the United Nations apologized in 2016, but it has refused to acknowledge fault, thus avoiding financial responsibility.
General elections had been planned for January 2010 but were postponed due to the earthquake. Elections were held on 28 November 2010 for the senate, the parliament and the first round of the presidential elections. The run-off between Michel Martelly and Mirlande Manigat took place on 20 March 2011, and preliminary results, released on 4 April, named Michel Martelly the winner. In 2011 both former dictator Jean-Claude Duvalier and Jean-Bertrand Aristide returned to Haiti; attempts to try Duvalier for crimes committed under his rule were shelved following his death in 2014. In 2013, Haiti called for European nations to pay reparations for slavery and establish an official commission for the settlement of past wrongdoings. Meanwhile, after continuing political wrangling with the opposition and allegations of electoral fraud, Martelly agreed to step down in 2016 without having a successor in place. An interim president, Jocelerme Privert, then took office. After numerous postponements, partly owing to the effects of another devastating hurricane, elections were eventually held in November 2016. The victor, Jovenel Moïse of the Haitian Tèt Kale Party, was subsequently sworn in as president in 2017. The 2018–2019 Haitian protests were demonstrations in cities throughout Haiti, beginning on 7 July 2018 in response to increased fuel prices. Over time these protests evolved into demands for the resignation of president Moïse.
Haiti forms the western three-eighths of Hispaniola, the second largest island in the Greater Antilles. At 27,750 sq km, Haiti is the third largest country in the Caribbean behind Cuba and the Dominican Republic, the latter sharing a border with Haiti. The country has a roughly horseshoe shape and, because of this, a disproportionately long coastline, the second longest in the Greater Antilles behind Cuba's.
Haiti is the most mountainous nation in the Caribbean; its terrain consists of mountains interspersed with small coastal plains and river valleys. The climate is tropical, with some variation depending on altitude. The highest point is Pic la Selle.
The northern region consists of the "Massif du Nord" (Northern Massif) and the "Plaine du Nord" (Northern Plain). The "Massif du Nord" is an extension of the "Cordillera Central" in the Dominican Republic. It begins at Haiti's eastern border, north of the Guayamouc River, and extends to the northwest through the northern peninsula. The lowlands of the "Plaine du Nord" lie along the northern border with the Dominican Republic, between the "Massif du Nord" and the North Atlantic Ocean.
The central region consists of two plains and two sets of mountain ranges. The "Plateau Central" (Central Plateau) extends along both sides of the Guayamouc River, south of the "Massif du Nord". It runs from the southeast to the northwest. To the southwest of the "Plateau Central" are the "Montagnes Noires", whose most northwestern part merges with the "Massif du Nord". Haiti's most important valley in terms of crops is the Plaine de l'Artibonite, which lies between the Montagnes Noires and the Chaîne des Matheux. This region supports the country's (also Hispaniola's) longest river, the Riviere l'Artibonite, which begins in the western region of the Dominican Republic and continues for most of its length through central Haiti, where it then empties into the Golfe de la Gonâve. Also in this valley lies Haiti's second largest lake, Lac de Péligre, formed as a result of the construction of the Péligre Dam in the mid-1950s.
The southern region consists of the "Plaine du Cul-de-Sac" (the southeast) and the mountainous southern peninsula (also known as the Tiburon Peninsula). The Plaine du Cul-de-Sac is a natural depression that harbors the country's saline lakes, such as Trou Caïman and Haiti's largest lake, Étang Saumatre. The Chaîne de la Selle mountain range – an extension of the southern mountain chain of the Dominican Republic (the Sierra de Baoruco) – extends from the Massif de la Selle in the east to the Massif de la Hotte in the west.
Haiti also includes several offshore islands. The island of Tortuga (Île de la Tortue) is located off the coast of northern Haiti. The arrondissement of La Gonâve is located on the island of the same name in the Golfe de la Gonâve; Gonâve, Haiti's largest island, is moderately populated by rural villagers. Île à Vache (Cow Island) is located off the southwest coast; also part of Haiti are the Cayemites, located in the Golfe de la Gonâve north of Pestel. Navassa Island (La Navasse), located west of Jérémie off the southwest peninsula, is subject to an ongoing territorial dispute with the United States, which currently administers the island via the United States Fish and Wildlife Service.
Haiti's climate is tropical with some variation depending on altitude. Port-au-Prince ranges in January from an average minimum of to an average maximum of ; in July, from . The rainfall pattern is varied, with rain heavier in some of the lowlands and the northern and eastern slopes of the mountains. Haiti's dry season occurs from November to January.
Port-au-Prince receives an average annual rainfall of . There are two rainy seasons, April–June and October–November. Haiti is subject to periodic droughts and floods, made more severe by deforestation, and is also prone to hurricanes and earthquakes.
Haiti lies over the Enriquillo-Plantain Garden fault system, which is associated with blind thrust faults. After the 2010 earthquake, there was no evidence of surface rupture, and geologists based their findings on seismological, geological and ground deformation data.
The northern boundary of the fault is where the Caribbean tectonic plate shifts eastwards by about 20 mm (0.79 in) per year in relation to the North American plate. The strike-slip fault system in the region has two branches in Haiti, the Septentrional-Oriente fault in the north and the Enriquillo-Plantain Garden fault in the south.
A 2007 earthquake hazard study noted that the Enriquillo-Plantain Garden fault zone could be at the end of its seismic cycle and concluded that a worst-case forecast would involve a 7.2 Mw earthquake, similar in size to the 1692 Jamaica earthquake. A study team presented a hazard assessment of the Enriquillo-Plantain Garden fault system to the 18th Caribbean Geologic Conference in March 2008, noting the large accumulated strain. The team recommended "high priority" historical geologic rupture studies, as the fault was fully locked and had recorded few earthquakes in the preceding 40 years. An article published in Haiti's "Le Matin" newspaper in September 2008 cited comments by geologist Patrick Charles to the effect that there was a high risk of major seismic activity in Port-au-Prince; the magnitude-7.0 2010 Haiti earthquake duly occurred on this fault zone on 12 January 2010.
Haiti also has mineral resources such as gold, which can be found at the Mont Organisé gold mine.
Deforestation and the soil erosion it releases from the upper catchments have caused periodic and severe flooding in Haiti, as experienced, for example, on 17 September 2004. Earlier that year, in May, floods had killed over 3,000 people on Haiti's southern border with the Dominican Republic.
Haiti's forests covered 60% of the country as recently as 50 years ago, but tree cover has since been halved to a current estimate of 30%, according to more recent environmental analysis. This estimate stands in stark contrast to the erroneous figure of 2% that has often been cited in discourse concerning the country's environmental condition.
Scientists at Columbia University's Center for International Earth Science Information Network (CIESIN) and the United Nations Environment Programme are working on the Haiti Regenerative Initiative, an initiative aiming to reduce poverty and natural disaster vulnerability in Haiti through ecosystem restoration and sustainable resource management.
Despite its small size, Haiti's mountainous terrain and the resulting multiple climatic zones have produced a wide variety of plant life. Notable tree species include the breadfruit tree, mango tree, acacia, mahogany, coconut palm, royal palm and West Indian cedar. The forests were formerly much more extensive but have been subject to severe deforestation.
Most mammal species are not native, having been brought to the island since colonial times. However, there are various native bat species, as well as the endemic Hispaniolan hutia and Hispaniolan solenodon. Various whale and dolphin species can also be found off Haiti's coast.
There are over 260 species of bird, 31 of them endemic to Hispaniola. Notable endemic species include the Hispaniolan trogon, Hispaniolan parakeet, grey-crowned tanager and the Hispaniolan amazon. There are also several raptor species, as well as pelicans, ibis, hummingbirds and ducks.
Reptiles are common, with species such as the rhinoceros iguana, Haitian boa, American crocodile and gecko.
The government of Haiti is a semi-presidential republic, a multiparty system wherein the president of Haiti is the head of state, directly elected in popular elections held every five years. The prime minister of Haiti acts as head of government and is appointed by the president, chosen from the majority party in the National Assembly. Executive power is exercised by the president and prime minister, who together constitute the government.
Legislative power is vested in both the government and the two chambers of the National Assembly of Haiti, the Senate (Sénat) and the Chamber of Deputies (Chambre des Députés). The government is organized unitarily, thus the central government "delegates" powers to the departments without a constitutional need for consent. The current structure of Haiti's political system was set forth in the Constitution of Haiti on 29 March 1987.
Haitian politics have been contentious: since independence, Haiti has suffered 32 coups. Haiti is the only country in the Western Hemisphere to undergo a successful slave revolution; however, a long history of oppression by dictators such as François Duvalier and his son Jean-Claude Duvalier has markedly affected the nation. Since the end of the Duvalier era Haiti has been transitioning to a democratic system.
Administratively, Haiti is divided into ten departments. The departments are listed below, with the departmental capital cities in parentheses.
The departments are further divided into 42 arrondissements, 145 communes and 571 communal sections. These serve as, respectively, second- and third-level administrative divisions.
Haiti is a member of a wide range of international and regional organizations, such as the United Nations, Caricom, Community of Latin American and Caribbean States, International Monetary Fund, Organisation of American States, Organisation internationale de la Francophonie, OPANAL and the World Trade Organization.
In February 2012, Haiti signaled it would seek to upgrade its observer status to full associate member status of the African Union (AU). The AU was reported to be planning to upgrade Haiti's status from observer to associate at its June 2013 summit but the application had still not been ratified by May 2016.
Haiti's Ministry of Defense is the main body of the armed forces. The former Haitian Armed Forces were demobilized in 1995; however, efforts to reconstitute them are currently underway. The current defense force for Haiti is the Haitian National Police, which has a highly trained SWAT team and works alongside the Haitian Coast Guard. In 2010, the Haitian National Police force numbered 7,000.
The legal system is based on a modified version of the Napoleonic Code.
Haiti has consistently ranked among the most corrupt countries in the world on the Corruption Perceptions Index. According to a 2006 report by the Corruption Perceptions Index, there is a strong correlation between corruption and poverty in Haiti. The nation ranked first of all countries surveyed for levels of perceived domestic corruption. It is estimated that President "Baby Doc" Duvalier, his wife Michelle, and their agents stole US $504 million from the country's treasury between 1971 and 1986. Similarly, after the Haitian Army disbanded in 1995, the Haitian National Police (HNP) became the sole authority over Haitian citizens. Many Haitians, as well as observers of Haitian society, believe that this monopoly on power could have given way to a corrupt police force.
Similarly, some media outlets alleged that millions were stolen by former president Jean-Bertrand Aristide. In March 2004, at the time of Aristide's kidnapping, a BBC article wrote that the Bush administration State Department stated that Aristide had been involved in drug trafficking. The BBC also described pyramid schemes, in which Haitians lost hundreds of millions in 2002, as the "only real economic initiative" of the Aristide years.
Conversely, according to the 2013 United Nations Office on Drugs and Crime (UNODC) report, the murder rate in Haiti (10.2 per 100,000) is far "below" the regional average (26 per 100,000); less than that of Jamaica (39.3 per 100,000) and nearly half that of the Dominican Republic (22.1 per 100,000), making it among the safer countries in the region. In large part, this is due to the country's fulfilment of a pledge to increase its national police force by 50%, a four-year initiative that was started in 2012. In addition to the yearly recruits, the Haitian National Police (HNP) has been using innovative technologies to crack down on crime. A notable bust in recent years led to the dismantlement of the largest kidnapping ring in the country with the use of an advanced software program developed by a West Point-trained Haitian official, which proved so effective that it has led its foreign advisers to make inquiries.
In 2010, the New York City Police Department (NYPD) sent a team of veteran officers to Haiti to assist in the rebuilding of its police force with special training in investigative techniques, strategies to improve the anti-kidnapping personnel and community outreach to build stronger relationships with the public especially among the youth. It has also helped the HNP set up a police unit in the center of Delmas, a neighborhood of Port-au-Prince.
In 2012 and 2013, 150 HNP officers received specialized training funded by the US government, which also contributed infrastructure and communications support, upgrading radio capacity and constructing new police stations from the most violence-prone neighborhoods of Cité Soleil and Grande Ravine in Port-au-Prince to the new northern industrial park at Caracol.
Port-au-Prince penitentiary is home to half of Haiti's prisoners. The prison has a capacity of 1,200 detainees, but it has been obliged to hold 4,359, an occupancy level reported at 454%. This overcrowding has severe consequences for the inmates.
A cell originally designed for only 18 inmates may hold up to 60, creating cramped and uncomfortable living conditions; inmates are forced to fashion makeshift hammocks from the walls and ceilings. The men are locked in their cells for 22 to 23 hours a day, so the risk of disease is very high. Because severe natural disasters such as the 2010 earthquake absorb the government's attention and resources, the prison is unable to receive sufficient funds; the resulting deadly cases of malnutrition, combined with the cramped living conditions, increase the risk of infectious diseases such as tuberculosis, which led to 21 deaths in January 2017 alone at the Port-au-Prince penitentiary.
Haitian law states that, once arrested, a person must go before a judge within 48 hours; however, this is very rare. In an interview with Unreported World, the prison governor stated that only around 529 detainees had been sentenced, while 3,830 detainees were being held in prolonged pretrial detention; therefore, around 80% of inmates had not been convicted.
Unless families are able to provide the necessary funds for an inmate to appear before a judge, there is very little chance of a trial within, on average, 10 years. Brian Concannon, the director of the non-profit Institute for Justice and Democracy in Haiti, claims that without a substantial bribe to persuade judges, prosecutors and lawyers to take up a case, there is no prospect of getting a trial for years.
Families may send food to the penitentiary, but most inmates depend on the meals served twice a day. The majority of these meals consist of ration supplies of rice, oats or cornmeal, which has led to deadly cases of malnutrition-related ailments such as beriberi and anaemia. Prisoners who are too weak are crammed into the penitentiary infirmary.
Confined to their cells for 22 to 23 hours a day, inmates are not provided with latrines and are forced to defecate into plastic bags and leave them outside their cells. These conditions were deemed inhumane by the Inter-American Court of Human Rights in 2008.
Haiti has a predominantly free market economy, with a GDP of $19.97 billion and per capita GDP of $1,800 (2017 estimates). The country uses the Haitian gourde as its currency. Despite its tourism industry, Haiti is one of the poorest countries in the Americas, with poverty, corruption, political instability, poor infrastructure, lack of health care and lack of education cited as the main causes. Unemployment is high and many Haitians seek to emigrate. Trade declined dramatically after the 2010 earthquake and subsequent outbreak of cholera, with the country's purchasing power parity GDP falling by 8% (from US$12.15 billion to US$11.18 billion). Haiti ranked 145 of 182 countries in the 2010 United Nations Human Development Index, with 57.3% of the population being deprived in at least three of the HDI's poverty measures.
Following the disputed 2000 election and accusations about President Aristide's rule, US aid to the Haitian government was cut off between 2001 and 2004. After Aristide's departure in 2004, aid was restored and the Brazilian army led a United Nations Stabilization Mission in Haiti peacekeeping operation. After almost four years of recession, the economy grew by 1.5% in 2005. In September 2009, Haiti met the conditions set out by the IMF and World Bank's Heavily Indebted Poor Countries program to qualify for cancellation of its external debt.
More than 90 percent of the government's budget comes from an agreement with Petrocaribe, a Venezuela-led oil alliance.
Haiti received more than US$4 billion in aid from 1990 to 2003, including US$1.5 billion from the United States. The largest donor is the US, followed by Canada and the European Union. In January 2010, following the earthquake, US President Barack Obama promised US$1.15 billion in assistance. European Union nations pledged more than €400 million (US$616 million). Neighboring Dominican Republic has also provided extensive humanitarian aid to Haiti, including the funding and construction of a public university, human capital, free healthcare services in the border region, and logistical support after the 2010 earthquake.
According to the UN Office of the Special Envoy for Haiti, of the humanitarian funding committed or disbursed by bilateral and multilateral donors in 2010 and 2011, only 1% was pledged to the Haitian government.
The United Nations states that a total of US$13.34 billion has been earmarked for post-earthquake reconstruction through 2020, though two years after the 2010 quake, less than half of that amount had actually been released, according to UN documents. The US government has allocated US$4 billion; US$3 billion has already been spent, and the rest is dedicated to longer-term projects.
Former US President Bill Clinton's foundation contributed US$250,000 to a recycling initiative for a sister-program of "Ranmase Lajan" or "Picking Up Money" by use of reverse vending machines.
According to the 2015 CIA World Factbook, Haiti's main import partners are: Dominican Republic 35%, US 26.8%, Netherlands Antilles 8.7%, China 7% (est. 2013). Haiti's main export partner is the US 83.5% (est. 2013). Haiti had a trade deficit of US$3 billion in 2011, or 41% of GDP.
In 1925, the city of Jacmel was the first area in the Caribbean to have electricity and was subsequently dubbed the "City of Light".
Today, Haiti relies heavily on an oil alliance with Petrocaribe for much of its energy requirements. In recent years, hydroelectric, solar and wind energy have been explored as possible sustainable energy sources.
As of 2017, Haiti produces the least energy of any country in the Americas. Less than a quarter of the country has electric coverage. Most regions of Haiti that do have energy are powered by generators, which are often expensive and produce a lot of pollution. The areas that do get electricity experience power cuts on a daily basis, and some areas are limited to 12 hours of electricity a day. Electricity is provided by a small number of independent companies: Sogener, E-power, and Haytrac. There is no national electricity grid within the country. The most common source of energy is wood, along with charcoal; about 4 million metric tons of wood products are consumed yearly. Like charcoal and wood, petroleum is also an important source of energy for Haiti. Since Haiti cannot produce its own fuel, all fuel is imported; around 691,000 tons of oil are imported into the country each year.
On 31 October 2018, Evenson Calixte, the General Director of the energy regulator ANARSE, announced the 24-hour electricity project. To meet this objective, 236 megawatts would need to be installed in Port-au-Prince alone, with an additional 75 megawatts needed in all other regions of the country. At present only 27.5% of the population has access to electricity; moreover, the national energy agency l'Électricité d'Haïti (Ed'H) is only able to meet 62% of overall electricity demand, according to Fritz Caillot, the Minister of Public Works, Transportation and Communication (Travaux publics, transport et communication, TPTC).
"The World Factbook" reports a shortage of skilled labor, widespread unemployment and underemployment, saying "more than two-thirds of the labor force do not have formal jobs." It is also often stated that three-quarters of the population lives on US$2 or less per day.
"The CIA World Factbook" also states that "remittances are the primary source of foreign exchange, equalling one-fifth (20%) of GDP and representing more than five times the earnings from exports in 2012". The World Bank estimates that over 80% of college graduates from Haiti were living abroad in 2004.
Occasionally, families who are unable to care for a child may send the child to live with a wealthier family as a "restavek", or house servant. In return, the family is supposed to ensure that the child is educated and provided with food and shelter; however, the system is open to abuse and has proved controversial, with some likening it to child slavery.
In rural areas, people often live in wooden huts with corrugated iron roofs. Outhouses are located in back of the huts. In Port-au-Prince colorful shantytowns surround the central city and go up the mountainsides.
The middle and upper classes live in suburbs, or in apartments in the central parts of the bigger cities, where there is urban planning. Many of the houses they live in are like miniature fortresses, located behind walls embedded with metal spikes, barbed wire, broken glass, and sometimes all three. The gates to these houses are barred at night, the houses are locked, and guard dogs patrol the yards. These houses are often self-sufficient as well: they have backup generators, because the electrical grid in Haiti is unreliable, and some even have rooftop reservoirs for water, as the water supply is also unreliable.
Haiti is the world's leading producer of vetiver, a root plant used to make luxury perfumes, essential oils and fragrances, providing for half the world's supply. Roughly 40–50% of Haitians work in the agricultural sector. Haiti relies upon imports for half its food needs and 80% of its rice.
Haiti exports crops such as mangoes, cacao, coffee, papayas, mahogany nuts, spinach, and watercress. Agricultural products comprise 6% of all exports. In addition, local agricultural products include maize, beans, cassava, sweet potato, peanuts, pistachios, bananas, millet, pigeon peas, sugarcane, rice, sorghum, and wood.
The Haitian gourde (HTG) is the national currency. The "Haitian dollar" equates to 5 gourdes ("goud"), a fixed rate that exists in concept only but is commonly used for informal prices. The vast majority of businesses and individuals in Haiti will also accept US dollars, though at outdoor markets gourdes may be preferred. Locals may refer to the US dollar as "dollar américain" ("dola ameriken") or "dollar US" (pronounced "oo-es").
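The "Haitian dollar" convention above is a simple fixed ratio; as an illustrative sketch (the function name is hypothetical, chosen only for this example), converting an informally quoted "Haitian dollar" price into gourdes works like this:

```python
# The "Haitian dollar" is a counting convention, not a separate currency:
# 1 Haitian dollar = 5 gourdes (HTG), a fixed conceptual ratio.
HTG_PER_HAITIAN_DOLLAR = 5

def haitian_dollars_to_gourdes(haitian_dollars):
    """Convert an informally quoted "Haitian dollar" price to gourdes."""
    return haitian_dollars * HTG_PER_HAITIAN_DOLLAR

# A market price quoted as "20 dollars" therefore means 100 gourdes.
print(haitian_dollars_to_gourdes(20))  # -> 100
```

The point of the sketch is that no exchange takes place: prices quoted in "Haitian dollars" are simply gourde amounts divided by five.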
The tourism market in Haiti is undeveloped and the government is heavily promoting this sector. Haiti has many of the features that attract tourists to other Caribbean destinations, such as white sand beaches, mountainous scenery and a year-round warm climate; however, the country's poor image overseas, at times exaggerated, has hampered the development of this sector. In 2014, the country received 1,250,000 tourists (mostly from cruise ships), and the industry generated US$200 million.
Several hotels were opened in 2014, including an upscale Best Western Premier, a five-star Royal Oasis hotel by Occidental Hotel and Resorts in Pétion-Ville, a four-star Marriott Hotel in the Turgeau area of Port-au-Prince and other new hotel developments in Port-au-Prince, Les Cayes, Cap-Haïtien and Jacmel.
The Haitian Carnival has been one of the most popular carnivals in the Caribbean. In 2010, the government decided to stage the event in a different city outside Port-au-Prince every year in an attempt to decentralize the country. The National Carnival, usually held in one of the country's largest cities (i.e., Port-au-Prince, Cap-Haïtien or Les Cayes), follows the also very popular Jacmel Carnival, which takes place a week earlier in February or March.
On 21 October 2012, Haitian President Michel Martelly, US Secretary of State Hillary Clinton, Bill Clinton, Richard Branson, Ben Stiller and Sean Penn inaugurated the Caracol industrial park, the largest in the Caribbean. Costing US$300 million, the project, which includes a 10-megawatt power plant, a water-treatment plant and worker housing, is intended to transform the northern part of the country by creating 65,000 jobs.
The park is part of a "master plan" for Haiti's North and North-East departments, including the expansion of the Cap-Haïtien International Airport to accommodate large international flights, the construction of an international Seaport in Fort-Liberté and the opening of the $50 million Roi Henri Christophe Campus of a new university in Limonade (near Cap-Haïtien) on 12 January 2012.
South Korean clothing manufacturer Sae-A Trading Co. Ltd, one of the park's main tenants, has created 5,000 permanent jobs out of the 20,000 projected and has built 8,600 houses in the surrounding area for its workers. The industrial park ultimately has the potential to create as many as 65,000 jobs once fully developed.
Haiti has two main highways that run from one end of the country to the other. The northern highway, Route Nationale No. 1 (National Highway One), originates in Port-au-Prince, winding through the coastal towns of Montrouis and Gonaïves before reaching its terminus at the northern port of Cap-Haïtien. The southern highway, Route Nationale No. 2, links Port-au-Prince with Les Cayes via Léogâne and Petit-Goâve. The state of Haiti's roads is generally poor, with many potholed and becoming impassable in rough weather.
According to the Washington Post, "Officials from the U.S. Army Corps of Engineers said Saturday [23 January 2010] that they assessed the damage from the [12 January] quake in Port-au-Prince, Haiti, and found that many of the roads aren't any worse than they were before because they've always been in poor condition."
The port at Port-au-Prince, Port international de Port-au-Prince, has more registered shipping than any of the other dozen ports in the country. The port's facilities include cranes, large berths, and warehouses, but these facilities are not in good condition. The port is underused, possibly due to its substantially higher port fees. The port of Saint-Marc is currently the preferred port of entry for consumer goods coming into Haiti. Reasons for this may include its location away from volatile and congested Port-au-Prince, as well as its central location relative to numerous Haitian cities.
In the past, Haiti used rail transport; however, the rail infrastructure was poorly maintained while in use, and the cost of rehabilitation is beyond the means of the Haitian economy. In 2018 the Regional Development Council of the Dominican Republic proposed a "trans-Hispaniola" railway between the two countries.
Toussaint Louverture International Airport, located north/north-east of Port-au-Prince proper in the commune of Tabarre, is the primary transportation hub for entry into and exit from the country. It has Haiti's main jetway and, along with Cap-Haïtien International Airport located near the northern city of Cap-Haïtien, handles the vast majority of the country's international flights. Cities such as Jacmel, Jérémie, Les Cayes, and Port-de-Paix have smaller, less accessible airports that are serviced by regional airlines and private aircraft. These include Caribintair (defunct), Sunrise Airways and Tortug' Air (defunct).
In 2013, plans for the development of an international airport on Île-à-Vache were introduced by the Prime Minister.
Tap tap buses are colorfully painted buses or pick-up trucks that serve as share taxis. The "tap tap" name comes from the sound of passengers tapping on the metal bus body to indicate they want off. These vehicles for hire are often privately owned and extensively decorated. They follow fixed routes, do not leave until filled with passengers, and riders can usually disembark at any point. The decorations are a typically Haitian form of art.
In August 2013, the first coach bus prototype was made in Haiti.
In Haiti, communications include radio, television, fixed and mobile telephones, and the Internet. Haiti ranked last among North American countries in the World Economic Forum's Network Readiness Index (NRI), an indicator for determining the development level of a country's information and communication technologies. Haiti ranked number 143 out of 148 overall in the 2014 NRI ranking, down from 141 in 2013.
Haiti faces key challenges in the water supply and sanitation sector: access to public services is very low, their quality is inadequate, and public institutions remain very weak despite foreign aid and the government's declared intent to strengthen the sector's institutions. Foreign and Haitian NGOs play an important role in the sector, especially in rural and urban slum areas.
Haiti's population is about 10,788,000 (July 2018 est.) with half of the population younger than age 20. In 1950, the first formal census gave a total population of 3.1 million. Haiti averages approximately 350 people per square kilometer (~900 per sq mi.), with its population concentrated most heavily in urban areas, coastal plains, and valleys.
Most Haitians are descendants of former black African slaves, including mixed-race Mulattoes. The remainder are of European or Arab descent, the descendants of settlers (colonial remnants and contemporary immigration during World War I and World War II). Haitians of East Asian or East Indian origin number more than 400.
Millions of Haitians live abroad in the United States, Dominican Republic, Cuba, Canada (primarily Montreal), Bahamas, France, French Antilles, the Turks and Caicos, Jamaica, Puerto Rico, Venezuela, Brazil, Suriname and French Guiana. There are an estimated 881,500 Haitians in the United States, 800,000 in the Dominican Republic, 300,000 in Cuba, 100,000 in Canada, 80,000 in France, and up to 80,000 in the Bahamas. There are also smaller Haitian communities in many other countries, including Chile, Switzerland, Japan and Australia.
In 2017, the life expectancy at birth was 64 years.
According to 2010 autosomal genealogical DNA testing, the gene pool of Haiti is about 95.5% Sub-Saharan African and 4.3% European, with the remainder showing some traces of East Asian ancestry.
A 2012 genetic study on Haitian and Jamaican Y-chromosomal ancestry has revealed that both populations "exhibit a predominantly Sub-Saharan paternal component, with haplogroups A1b-V152, A3-M32, B2-M182, E1a-M33, E1b1a-M2, E2b-M98, and R1b2-V88" comprising (77.2%) of the Haitian and (66.7%) of Jamaican paternal gene pools. Y-chromosomes indicative of European ancestry "(i.e., haplogroups G2a*-P15, I-M258, R1b1b-M269, and T-M184) were detected at commensurate levels in Haiti (20.3%) and Jamaica (18.9%)".
While Y-haplogroups indicative of Chinese (O-M175, 3.8%) and Indian (H-M69, 0.6%, and L-M20, 0.6%) ancestry were found at significant levels in Jamaica, Levantine Y-haplogroups were found in Haiti.
According to a 2008 study examining the frequency of Duffy antigen receptor for chemokines (DARC) single nucleotide polymorphisms (SNPs), 75% of Haitian women sampled exhibited the CC genotype (absent among women of European ancestry), a level comparable to US African-Americans (73%) and higher than Jamaican females (63%).
Under colonial rule, Haitian mulattoes were generally privileged above the black majority, though they possessed fewer rights than the white population. Following the country's independence, they became the nation's social elite, and numerous leaders throughout Haiti's history have been mulattoes. During colonial times, slaves and affranchis were given limited opportunities for education, income, and occupations; even after independence, this social structure has persisted as a legacy, as the disparity between the upper and lower classes has not been reformed significantly since the colonial days. Comprising 5% of the nation's population, mulattoes have retained their preeminence, evident in the political, economic, social and cultural hierarchy in Haiti. As a result, the elite class today consists of a small group of influential people who are generally light-skinned and continue to establish themselves in high, prestigious positions.
The 2017 CIA Factbook reported that around 54.7% of Haitians profess to being Catholics while Protestants made up about 28.5% of the population (Baptist 15.4%, Pentecostal 7.9%, Seventh-day Adventist 3%, Methodist 1.5%, other 0.7%). Other sources put the Protestant population higher than this, suggesting that it might have formed one-third of the population in 2001. Like other countries in Latin America, Haiti has witnessed a general Protestant expansion, which is largely Evangelical and Pentecostal in nature. Haitian Cardinal Chibly Langlois is president of the National Bishops Conference of the Catholic Church.
Vodou, a religion with West African roots similar to those of related religions in Cuba and Brazil, originated during colonial times, when slaves were obliged to disguise their loa, or spirits, as Roman Catholic saints, part of a process called syncretism; it is still practiced by some Haitians today. Due to the religious syncretism between Catholicism and Vodou, it is difficult to estimate the number of Vodouists in Haiti. The religion has historically been persecuted and misrepresented in popular media; however, in 2003 the government recognized the faith as an official religion of the nation.
Minority religions in Haiti include Islam, Bahá'í Faith, Judaism, and Buddhism.
The two official languages of Haiti are French and Haitian Creole. French is the principal written and administratively authorized language (as well as the main language of the press) and is spoken by 42% of Haitians. It is spoken by all educated Haitians, is the medium of instruction in most schools, and is used in the business sector. It is also used in ceremonial events such as weddings, graduations and church Masses. Haiti is one of two independent nations in the Americas (along with Canada) to designate French as an official language; the other French-speaking areas are all overseas "départements", or "collectivités," of France.
Haitian Creole, which has recently undergone standardization, is spoken by virtually the entire population of Haiti. It is one of the French-based creole languages: its vocabulary is 90% derived from French, but its grammar resembles that of some West African languages. It also has influences from Taíno, Spanish, and Portuguese. Haitian Creole is related to the other French creoles, most closely to the Antillean Creole and Louisiana Creole variants.
There is a large Haitian diaspora community, based predominantly in the United States, Canada, France, and the wealthier Caribbean islands.
Emigrants from Haiti have constituted a segment of American and Canadian society since before the independence of Haiti from France in 1804. Many influential early American settlers and black freemen, including Jean Baptiste Point du Sable and W. E. B. Du Bois, were of Haitian origin.
Jean Baptiste Point du Sable, an immigrant from Saint-Domingue (now the Republic of Haiti), founded the first nonindigenous settlement in what is now Chicago, Illinois, the third largest city in the United States. The state of Illinois and city of Chicago declared du Sable the founder of Chicago on 26 October 1968.
Haiti has a rich and unique cultural identity, consisting of a blend of traditional French and African customs, mixed with sizeable contributions from the Spanish and indigenous Taíno cultures. Haiti's culture is greatly reflected in its paintings, music, and literature. Galleries and museums in the United States and France have exhibited the works of the better-known artists to have come out of Haiti.
Haitian art is distinctive, particularly through its paintings and sculptures. Brilliant colors, naïve perspectives, and sly humor characterize Haitian art. Frequent subjects in Haitian art include big, delectable foods, lush landscapes, market activities, jungle animals, rituals, dances, and gods. As a result of a deep history and strong African ties, symbols take on great meaning within Haitian society. For example, a rooster often represents Aristide and the red and blue colors of the Haitian flag often represent his Lavalas party. Many artists cluster in 'schools' of painting, such as the Cap-Haïtien school, which features depictions of daily life in the city, the Jacmel School, which reflects the steep mountains and bays of that coastal town, or the Saint-Soleil School, which is characterised by abstracted human forms and is heavily influenced by Vodou symbolism.
In the 1920s the "indigéniste" movement gained international acclaim, with its expressionist paintings inspired by Haiti's culture and African roots. Notable painters of this movement include Hector Hyppolite, Philomé Obin and Préfète Duffaut. Some notable artists of more recent times include Edouard Duval-Carrié, Frantz Zéphirin, Leroy Exil, Prosper Pierre Louis and Louisiane Saint Fleurant. Sculpture is also practised in Haiti; noted artists in this form include Georges Liautaud and Serge Jolimeau.
Haitian music combines a wide range of influences drawn from the many peoples who have settled the island of Hispaniola. It reflects French, African, and Spanish elements, along with minor native Taíno influences. Styles of music unique to the nation of Haiti include music derived from Vodou ceremonial traditions, Rara parading music, Twoubadou ballads, mini-jazz rock bands, the Rasin movement, Hip hop kreyòl, méringue, and compas. Youth attend parties at nightclubs called "discos" (pronounced "deece-ko") and attend "Bal", the French word for ball, as in a formal dance.
"Compas (konpa)" (also known as "compas direct" in French, or "konpa dirèk" in creole) is a complex, ever-changing music that arose from African rhythms and European ballroom dancing, mixed with Haiti's bourgeois culture. It is a refined music, with méringue as its basic rhythm. Haiti had no recorded music until 1937 when Jazz Guignard was recorded non-commercially.
Haiti has always been a literary nation that has produced poetry, novels, and plays of international recognition. The French colonial experience established the French language as the venue of culture and prestige, and since then it has dominated the literary circles and the literary production. However, since the 18th century there has been a sustained effort to write in Haitian Creole. The recognition of Creole as an official language has led to an expansion of novels, poems, and plays in Creole. In 1975, Franketienne was the first to break with the French tradition in fiction with the publication of "Dezafi," the first novel written entirely in Haitian Creole; the work offers a poetic picture of Haitian life. Other well known Haitian authors include Jean Price-Mars, Jacques Roumain, Marie Vieux-Chauvet, Pierre Clitandre, René Depestre, Edwidge Danticat, Lyonel Trouillot and Dany Laferrière.
Haiti has a small though growing cinema industry. Well-known directors working primarily in documentary film-making include Raoul Peck and Arnold Antonin. Directors producing fictional films include Patricia Benoît, Wilkenson Bruna and Richard Senecal.
Haiti is famous for its Creole cuisine (which is related to Cajun cuisine) and its soup joumou.
Monuments include the Sans-Souci Palace and the Citadelle Laferrière, inscribed as a World Heritage site in 1982. Situated in the Northern Massif du Nord, in one of Haiti's National Parks, the structures date from the early 19th century. The buildings were among the first built after Haiti's independence from France.
The Citadelle Laferrière, the largest fortress in the Americas, is located in northern Haiti. It was built between 1805 and 1820 and is today referred to by some Haitians as the eighth wonder of the world.
The Institute for the Protection of National Heritage has preserved 33 historical monuments and the historic center of Cap-Haïtien.
Jacmel, a colonial city that was tentatively accepted as a World Heritage site, was extensively damaged by the 2010 Haiti earthquake.
The anchor of Christopher Columbus's largest ship, the "Santa María" now rests in the Musée du Panthéon National Haïtien (MUPANAH), in Port-au-Prince, Haiti.
Haiti is known for its folklore traditions. Much of this is rooted in Haitian Vodou tradition. Belief in zombies is also common. Other folkloric creatures include the lougarou.
The most festive time of the year in Haiti is during Carnival (referred to as "Kanaval" in Haitian Creole or Mardi Gras) in February. There is music, parade floats, and dancing and singing in the streets. Carnival week is traditionally a time of all-night parties.
Rara is a festival celebrated before Easter. The festival has generated a style of Carnival music.
Football is the most popular sport in Haiti, with hundreds of small football clubs competing at the local level. Basketball is growing in popularity. Stade Sylvio Cator, a multi-purpose stadium in Port-au-Prince with a capacity of 10,000, is currently used mostly for association football matches. In 1974, the Haiti national football team became only the second Caribbean team to reach the World Cup (after Cuba's entry in 1938). They lost in the opening qualifying stages against three of the pre-tournament favorites: Italy, Poland, and Argentina. The national team won the 2007 Caribbean Nations Cup.
Haiti has participated in the Olympic Games since the year 1900 and won a number of medals. Haitian footballer Joe Gaetjens played for the United States national team in the 1950 FIFA World Cup, scoring the winning goal in the 1–0 upset of England.
The educational system of Haiti is based on the French system. Higher education, under the responsibility of the Ministry of Education, is provided by universities and other public and private institutions.
More than 80% of primary schools are privately managed by nongovernmental organizations, churches, communities, and for-profit operators, with minimal government oversight. According to the 2013 Millennium Development Goals (MDG) Report, Haiti has steadily boosted net enrollment rate in primary education from 47% in 1993 to 88% in 2011, achieving equal participation of boys and girls in education. Charity organizations, including Food for the Poor and Haitian Health Foundation, are building schools for children and providing necessary school supplies.
According to the CIA World Factbook, Haiti's literacy rate is 60.7% (2015 est.).
The January 2010 earthquake was a major setback for education reform in Haiti, as it diverted limited resources to survival.
Many reformers have advocated the creation of a free, public and universal education system for all primary school-age students in Haiti. The Inter-American Development Bank estimates that the government will need at least US$3 billion to create an adequately funded system.
Upon successful graduation of secondary school, students may continue into higher education. The higher education schools in Haiti include the University of Haiti. There are also medical schools and law schools offered at both the University of Haiti and abroad. Presently, Brown University is cooperating with L'Hôpital Saint-Damien in Haiti to coordinate a pediatric health care curriculum.
In the past, childhood vaccination rates were low: only 60% of children in Haiti under the age of 10 were vaccinated, compared with rates of 93–95% in other countries. Recently there have been mass vaccination campaigns claiming to vaccinate as many as 91% of a target population against specific diseases (in this case, measles and rubella). Most people have no transportation to, or other access to, Haitian hospitals.
The World Health Organization cites diarrheal diseases, HIV/AIDS, meningitis, and respiratory infections as common causes of death in Haiti. Ninety percent of Haiti's children suffer from waterborne diseases and intestinal parasites. HIV infection is found in 1.71% of Haiti's population (est. 2015). The incidence of tuberculosis (TB) in Haiti is more than ten times as high as in the rest of Latin America. Approximately 30,000 Haitians fall ill with malaria each year.
Most people living in Haiti are at high risk for major infectious diseases. Food or water-borne diseases include bacterial and protozoal diarrhea, typhoid fever and hepatitis A and E; common vector-borne diseases are dengue fever and malaria; water-contact diseases include leptospirosis. Roughly 75% of Haitian households lack running water. Unsafe water, along with inadequate housing and unsanitary living conditions, contributes to the high incidence of infectious diseases. There is a chronic shortage of health care personnel and hospitals lack resources, a situation that became readily apparent after the January 2010 earthquake. The infant mortality rate in Haiti in 2013 was 55 deaths per 1,000 live births, compared to a rate of 6 per 1,000 in other countries.
After the 2010 earthquake, Partners In Health founded the Hôpital Universitaire de Mirebalais, the largest solar-powered hospital in the world.
History of Haiti
The recorded history of Haiti began on 5 December 1492, when the European navigator Christopher Columbus happened upon a large island in the region of the western Atlantic Ocean that later came to be known as the Caribbean. It was inhabited by the Taíno and Arawakan people, who variously called their island "Ayiti", "Bohio", and "Kiskeya" "(Quisqueya)". Columbus promptly claimed the island for the Spanish Crown, naming it "La Isla Española" ("the Spanish Island"), later Latinized to "Hispaniola". French influence began in 1625, and French control of what was called Saint-Domingue in modern-day Haiti began in 1660. From 1697 on, the western part of the island was French, and the eastern part was Spanish. Haiti became one of the wealthiest of France's colonies, producing vast quantities of sugar and coffee and depending on a brutal slave system for the necessary labor. Inspired by the message of the French Revolution, Haitian slaves rose up in revolt in 1791, and after decades of struggle, the independent republic of Haiti was officially proclaimed in 1804.
Successive waves of Arawak migrants, moving northward from the Orinoco delta in South America, settled the islands of the Caribbean. Around A.D. 600, the Taíno, an Arawak culture, arrived on the island, displacing the previous inhabitants. They were organized into "cacicazgos" (chiefdoms), each led by a "cacique" (chief).
Christopher Columbus established the settlement, La Navidad, near the modern town of Cap-Haïtien. It was built from the timbers of his wrecked ship, Santa María, during his first voyage in December 1492. When he returned in 1493 on his second voyage he found the settlement had been destroyed and all 39 settlers killed. Columbus continued east and founded a new settlement at La Isabela on the territory of the present-day Dominican Republic in 1493. The capital of the colony was moved to Santo Domingo in 1496, on the south west coast of the island also in the territory of the present-day Dominican Republic. The Spanish returned to western Hispaniola in 1502, establishing a settlement at Yaguana near modern-day Léogâne. A second settlement was established on the north coast in 1504 called Puerto Real near modern Fort-Liberté – which in 1578 was relocated to a nearby site and renamed Bayaja.
Following the arrival of Europeans, La Hispaniola's indigenous population suffered greatly to near extinction, in possibly the worst case of depopulation in the Americas. A commonly accepted hypothesis attributes the high mortality of this colony in part to European diseases to which the natives had no immunity. A small number of Taínos were able to survive and set up villages elsewhere. Spanish interest in Hispaniola began to wane in the 1520s, as more lucrative gold and silver deposits were found in Mexico and South America. Thereafter, the population of Spanish Hispaniola grew at a slow pace.
The settlement of Yaguana was burnt to the ground three times in its just over a century long existence as a Spanish settlement, first by French pirates in 1543, again on May 27, 1592, by a 110-strong landing party from a four-ship English naval squadron led by Christopher Newport in his flagship Golden Dragon, who destroyed all 150 houses in the settlement, and finally by the Spanish themselves in 1605, for reasons set out below.
In 1595, the Spanish, frustrated by the twenty-year rebellion of their Dutch subjects, closed their home ports to rebel shipping from the Netherlands, cutting them off from the critical salt supplies necessary for their herring industry. The Dutch responded by sourcing new salt supplies from Spanish America where colonists were more than happy to trade. So large numbers of Dutch traders/pirates joined their English and French brethren trading on the remote coasts of Hispaniola. In 1605, Spain was infuriated that Spanish settlements on the northern and western coasts of the island persisted in carrying out large scale and illegal trade with the Dutch, who were at that time fighting a war of independence against Spain in Europe and the English, a very recent enemy state, and so decided to forcibly resettle their inhabitants closer to the city of Santo Domingo. This action, known as the "Devastaciones de Osorio", proved disastrous; more than half of the resettled colonists died of starvation or disease, over 100,000 cattle were abandoned, and many slaves escaped. Five of the existing thirteen settlements on the island were brutally razed by Spanish troops including the two settlements on the territory of present-day Haiti, La Yaguana, and Bayaja. Many of the inhabitants fought, escaped to the jungle, or fled to the safety of passing Dutch ships.
This Spanish action was counterproductive as English, Dutch, and French pirates were now free to establish bases on the island's abandoned northern and western coasts, where wild cattle were now plentiful and free. In 1697, after decades of fighting over the territory, the Spanish ceded the western
part of the island to the French, who henceforth called it Saint-Domingue. Saint-Domingue developed into a highly lucrative colony for France. Its economy was based on a labor-intensive sugar industry which rested on vast numbers of African slaves. Meanwhile, the situation on the Spanish part of the island deteriorated. The entire Spanish empire sank into a deep economic crisis, and Santo Domingo was in addition struck by earthquakes, hurricanes and a shrinking population.
In 1711, the city of Cap-Français was formally established by Louis XIV and took over as capital of the colony from Port-de-Paix. In 1726, the city of Les Cayes was founded on the southern coast and became the largest settlement in the south. In 1749, the city of Port-au-Prince was established on the west coast, and in 1770 it took over as the capital of the colony from Cap-Français. That same year, however, the 1770 Port-au-Prince earthquake and tsunami destroyed the city, killing 200 people immediately and 30,000 later from famine and disease brought on by the disaster. This was the second major earthquake to hit Saint-Domingue, following the 1751 Port-au-Prince earthquake, which had left only a single stone-built building standing in the town.
Prior to the Seven Years' War (1756–63), the economy of Saint-Domingue gradually expanded, with sugar and, later, coffee becoming important export crops. After the war, which disrupted maritime commerce, the colony underwent rapid expansion. In 1767, it exported 72 million pounds of raw sugar and 51 million pounds of refined sugar, one million pounds of indigo, and two million pounds of cotton. Saint-Domingue became known as the "Pearl of the Antilles" – the richest colony in the 18th century French empire. By the 1780s, Saint-Domingue produced about 40 percent of all the sugar and 60 percent of all the coffee consumed in Europe. This single colony, roughly the size of Hawaii or Belgium, produced more sugar and coffee than all of Britain's West Indian colonies combined.
In the second half of the 1780s, Saint-Domingue accounted for a third of the entire Atlantic slave trade. The population of the African slaves imported for these plantations is estimated to have been 790,000. Between 1764 and 1771, the average importation of slaves ranged between 10,000 and 15,000 a year; by 1786 it was about 28,000, and from 1787 onward the colony received more than 40,000 slaves a year. However, the inability to maintain slave numbers without constant resupply from Africa meant the slave population by 1789 totaled 500,000, ruled over by a white population that numbered only 32,000. At all times, a majority of slaves in the colony were African-born, as the brutal conditions of slavery prevented the population from growing through natural increase. African culture thus remained strong among slaves to the end of French rule, in particular the folk-religion of Vodou, which commingled Catholic liturgy and ritual with the beliefs and practices of Guinea, Congo, and Dahomey. Slave traders scoured the Atlantic coast of Africa, and the slaves who arrived came from hundreds of different tribes, their languages often mutually incomprehensible.
To regularize slavery, in 1685 Louis XIV enacted the "Code Noir", which accorded certain human rights to slaves and responsibilities to the master, who was obliged to feed, clothe, and provide for the general well-being of their slaves. The "code noir" also sanctioned corporal punishment, allowing masters to employ brutal methods to instill in their slaves the necessary docility, while ignoring provisions intended to regulate the administration of punishments. A passage from Henri Christophe's personal secretary, who lived more than half his life as a slave, describes the crimes perpetrated against the slaves of Saint-Domingue by their French masters:
Thousands of slaves found freedom by fleeing from their masters, forming communities of maroons and raiding isolated plantations. The most famous was Mackandal, a one-armed slave, originally from Guinea, who escaped in 1751. A Vodou Houngan (priest), he united many of the different maroon bands. He spent the next six years staging successful raids and evading capture by the French, reputedly killing over 6,000 people, while preaching a fanatic vision of the destruction of white civilization in St. Domingue. In 1758, after a failed plot to poison the drinking water of the plantation owners, he was captured and burned alive at the public square in Cap-Français.
Saint-Domingue also had the largest and wealthiest free population of color in the Caribbean, the "gens de couleur" (French, "people of color"). The mixed-race community in Saint-Domingue numbered 25,000 in 1789. First-generation gens de couleur were typically the offspring of a male, French slaveowner and an African slave chosen as a concubine. In the French colonies, the semi-official institution of "plaçage" defined this practice. By this system, the children were free people and could inherit property, thus originating a class of "mulattos" with property and some with wealthy fathers. This class occupied a middle status between African slaves and French colonists. Africans who attained freedom also enjoyed status as gens de couleur.
As numbers of "gens de couleur" grew, the French rulers enacted discriminatory laws. Statutes forbade "gens de couleur" from taking up certain professions, marrying whites, wearing European clothing, carrying swords or firearms in public, or attending social functions where whites were present. However, these regulations did not restrict their purchase of land, and many accumulated substantial holdings and became slave-owners. By 1789, they owned one-third of the plantation property and one-quarter of the slaves of Saint-Domingue. Central to the rise of the "gens de couleur" planter class was the growing importance of coffee, which thrived on the marginal hillside plots to which they were often relegated. The largest concentration of "gens de couleur" was in the southern peninsula, the last region of the colony to be settled, owing to its distance from Atlantic shipping lanes and its formidable terrain, with the highest mountain range in the Caribbean.
The outbreak of revolution in France in the summer of 1789 had a powerful effect on the colony. While the French settlers debated how new revolutionary laws would apply to Saint-Domingue, outright civil war broke out in 1790 when the free men of color claimed they too were French citizens under the terms of the Declaration of the Rights of Man and of the Citizen. Ten days before the fall of the Bastille, in July 1789, the French National Assembly had voted to seat six delegates from Saint-Domingue. In Paris, a group of wealthy mulattoes, led by Julien Raimond and Vincent Ogé, unsuccessfully petitioned the white planter delegates to support mulatto claims for full civil and political rights. Through the efforts of a group called "Société d'Amis des Noirs", of which Raimond and Ogé were prominent leaders, in March 1790 the National Assembly granted full civic rights to the "gens de couleur."
Vincent Ogé traveled to St. Domingue to secure the promulgation and implementation of this decree, landing near Cap-Français (now Cap-Haïtien) in October 1790 and petitioning the royal governor, the Comte de Peynier. After his demands were refused, he attempted to incite the "gens de couleur" to revolt. Ogé and Jean-Baptiste Chavannes, a veteran of the Siege of Savannah during the American Revolution, attempted to attack Cap-Français. However, the mulatto rebels refused to arm or free their slaves, or to challenge the status of slavery, and their attack was defeated by a force of white militia and black volunteers (including Henri Christophe). Afterwards, they fled across the frontier to Hinche, at the time in the Spanish part of the island. However, they were captured and returned to the French authorities, and both Ogé and Chavannes were executed in February 1791.
On 14 August 1791, slaves in the northern region of the colony staged a revolt that began the Haitian Revolution. Tradition marks the beginning of the revolution at a vodou ceremony at Bois Caïman (Alligator Woods) near Cap-Français. The call to arms was issued by a Houngan (Vodou priest) named Dutty Boukman. Within hours, the northern plantations were in flames. The rebellion spread through the entire colony. Boukman was captured and executed, but the rebellion continued to spread rapidly.
In 1792, Léger-Félicité Sonthonax was sent to the colony by the French Legislative Assembly as part of the Revolutionary Commission. His main goal was to maintain French control of Saint-Domingue, stabilize the colony, and enforce the social equality recently granted to free people of color by the National Convention of France.
On 29 August 1793, Sonthonax took the radical step of proclaiming the freedom of the slaves in the north province (with severe limits on their freedom). In September and October, emancipation was extended throughout the colony. On 4 February 1794, the French National Convention, the first elected Assembly of the First Republic (1792–1804), under the leadership of Maximilien de Robespierre, abolished slavery by law in France and all its colonies. Both the constitution of 1793, which was never applied, and the constitution of 1795, which was put into effect, contained an explicit ban on slavery.
The slaves did not immediately flock to Sonthonax's banner, however. White colonists continued to fight Sonthonax, with assistance from the British. They were joined by many of the free men of color who opposed the abolition of slavery. It was not until word of France's ratification of emancipation arrived back in the colony that Toussaint Louverture and his corps of well disciplined, battle-hardened former slaves came over to the French Republican side in early May 1794. A change in the political winds in France caused Sonthonax to be recalled in 1796, but not before taking the step of arming the former slaves.
When the radical revolutionaries in Paris declared war against Spain in January 1793, the Spanish Crown sent its forces in Santo Domingo into battle on the side of the slaves. By the end of 1793, Spain controlled most of the north, except British-held Môle-Saint-Nicolas and French-held Le Cap François and Port-de-Paix. In 1795, Spain ceded Santo Domingo to France and Spanish attacks on Saint-Domingue ceased.
In the south the British suffered a series of defeats at the hands of the mulatto General André Rigaud. On October 6, 1794, Rigaud took Léogane. On December 26, 1794, he attacked the British-held Tiburon, routing and decimating the British garrison along with black troops under the command of Jean Kina fighting with them. In 1798, having lost territory and thousands of men, the British were forced to withdraw.
In the meantime, Rigaud had set up a mulatto separatist movement in the south and Pétion had joined him. With the British out, Toussaint swung into action against them. As he sent General Dessalines against Grand and Petit Goâve and General Christophe against the mulatto stronghold of Jacmel, heavily armed American ships bombarded mulatto fortifications and destroyed Rigaud's transport barges. The display of American force and the fierce fighting of Toussaint's troops brought victory.
By 1801, Toussaint was in control of all of Hispaniola, after conquering French Santo Domingo and proclaiming the abolition of slavery there. He did not, however, proclaim full independence for the country, nor did he seek reprisals against the country's former white slaveholders, convinced that the French would not restore slavery and "that a population of slaves recently landed from Africa could not attain to civilization by 'going it alone.'"
Toussaint, however, asserted so much independence that in 1802, Napoleon sent a massive invasion force, under his brother-in-law Charles Leclerc, to increase French control. For a time, Leclerc met with some success; he also brought the eastern part of the island of Hispaniola under the direct control of France in accordance with the terms of the 1795 Treaties of Bâle with Spain. With a large expedition that eventually included 40,000 European troops, and receiving help from white colonists and mulatto forces commanded by Alexandre Pétion, a former lieutenant of Rigaud, the French won several victories after severe fighting. Two of Toussaint's chief lieutenants, Dessalines and Christophe, recognizing their untenable situation, held separate parleys with the invaders, and agreed to transfer their allegiance. At this point, Leclerc invited Toussaint to negotiate a settlement. It was a deception; Toussaint was seized and deported to France, where he died of pneumonia while imprisoned at Fort de Joux in the Jura Mountains in April 1803.
On 20 May 1802, Napoleon signed a law to maintain slavery where it had not yet disappeared, namely Martinique, Tobago, and Saint Lucia. A confidential copy of this decree was sent to Leclerc, who was authorized to restore slavery in Saint-Domingue when the time was opportune. At the same time, further edicts stripped the "gens de couleur" of their newly won civil rights. None of these decrees were published or executed in St. Domingue, but, by midsummer, word began to reach the colony of the French intention to restore slavery. The betrayal of Toussaint and news of French actions in Martinique undermined the collaboration of leaders such as Dessalines, Christophe, and Pétion. Convinced that the same fate lay in store for Saint-Domingue, these commanders and others once again battled Leclerc. With the French intent on reconquest and re-enslavement of the colony's black population, the war became a bloody struggle of atrocity and attrition. The rainy season brought yellow fever and malaria, which took a heavy toll on the invaders. By November, when Leclerc died of yellow fever, 24,000 French soldiers were dead and 8,000 were hospitalized, the majority from disease.
Leclerc was succeeded by Donatien-Marie-Joseph de Vimeur, vicomte de Rochambeau. Rochambeau wrote to Napoleon that, to reclaim Saint-Domingue, France must 'declare the negroes slaves, and destroy at least 30,000 negroes and negresses.' In his desperation, he turned to increasingly wanton acts of brutality; the French burned alive, hanged, drowned, and tortured black prisoners, reviving such practices as burying blacks in piles of insects and boiling them in cauldrons of molasses. One night, at Port-Républicain, he held a ball to which he invited the most prominent mulatto ladies and, at midnight, announced the death of their husbands. However, each act of brutality was repaid by the Haitian rebels. After one battle, Rochambeau buried 500 prisoners alive; Dessalines responded by hanging 500 French prisoners. Rochambeau's brutal tactics helped unite black, mulatto, and mestizo soldiers against the French.
As the tide of the war turned toward the former slaves, Napoleon abandoned his dreams of restoring France's New World empire. In 1803, war resumed between France and Britain, and with the Royal Navy firmly in control of the seas, reinforcements and supplies for Rochambeau never arrived in sufficient numbers. To concentrate on the war in Europe, Napoleon signed the Louisiana Purchase in April, selling France's North American possessions to the United States. The Haitian army, now led by Dessalines, routed Rochambeau and the French army at the Battle of Vertières on 18 November 1803.
On 1 January 1804, Dessalines declared independence, reclaiming the indigenous Taíno name of Haiti ("Land of Mountains") for the new nation. Most of the remaining French colonists fled ahead of the defeated French army, many migrating to Louisiana or Cuba. Unlike Toussaint, Dessalines showed little forbearance toward the whites who remained. In a final act of retribution, the remaining French were slaughtered by Haitian military forces: some 2,000 Frenchmen were massacred at Cap-Français, 900 in Port-au-Prince, and 400 at Jérémie. Dessalines issued a proclamation declaring, "we have repaid these cannibals, war for war, crime for crime, outrage for outrage."
One exception was a force of Polish soldiers from the Polish Legions that had fought in Napoleon's army. A majority of them refused to fight against the black inhabitants, seeing in the Haitian struggle a parallel to their own: since the first partition of 1772, Poles had been fighting for their liberty against the invading powers of Russia, Prussia, and Austria, and, like the Haitians, sought unity among themselves to win back their homeland. Many Polish soldiers came to admire their enemy, turned on the French army, and joined the Haitian former slaves in the final phase of the revolution, supporting the principle of liberty for all people. Władysław Franciszek Jabłonowski, who was half-black, was one of the Polish generals of the period, and Polish soldiers made a notable contribution to the Haitian fight against the French oppressor. They were spared the fate of other Europeans: for their loyalty and support in overthrowing the French, some Poles acquired Haitian citizenship after Haiti gained its independence, and many of them settled there, never to return to Poland. An estimated 500 of the 5,280 Poles chose this option. Of the remainder, 700 returned to France, eventually to make their way back to Poland, and some, after the capitulation, were forced to serve in British units. 160 Poles were later given permission to leave Haiti, some of them sent to France at Haitian expense. To this day, many Polish Haitians of mixed ancestry still live in Haiti; some have blond hair, light eyes, and other European features. Descendants of the Poles who stayed live in Cazale, Fond-des-Blancs, La Vallée-de-Jacmel, La Baleine, Port-Salut and Saint-Jean-du-Sud.
Despite the Haitian victory, France refused to recognize the newly independent country's sovereignty until 1825, and then only in exchange for 150 million gold francs. This indemnity, demanded as compensation for the "lost property" (slaves, land, equipment, etc.) of the former colonists, was later reduced to 90 million. Haiti agreed to pay the price to lift a crippling embargo imposed by France, Britain, and the United States, but to do so the Haitian government had to take out high-interest loans. The debt was not repaid in full until 1947.
Haiti is the world's oldest black republic and one of the oldest republics in the Western Hemisphere. Although Haiti actively assisted the independence movements of many Latin American countries – and secured a promise from the great liberator, Simón Bolívar, that he would free their slaves after winning independence from Spain – the nation of former slaves was excluded from the hemisphere's first regional meeting of independent nations, held in Panama in 1826. Furthermore, owing to entrenched opposition from Southern slave states, Haiti did not receive U.S. diplomatic recognition until 1862 (after those states had seceded from the Union) – largely through the efforts of anti-slavery senator Charles Sumner of Massachusetts.
Upon assuming power, General Dessalines authorized the Constitution of 1804, which set out a number of social freedoms.
On 22 September 1804, Dessalines, preferring Napoleon's style of rule to the more liberal but vulnerable government of the French Republican Radicals (see liberalism and radicalism in France), proclaimed himself Emperor Jacques I. Two of his own advisers, Henri Christophe and Alexandre Pétion, helped provoke his assassination in 1806: on 17 October, the conspirators ambushed him north of Port-au-Prince at Pont Larnage (now known as Pont-Rouge) as he rode out to battle rebels against his regime.
The state created under Dessalines was the opposite of what the Haitian masses, the peasantry, wanted. While both elite leaders such as Dessalines and the wider population agreed that the state should be built on the ideals of freedom and democracy, these ideals looked very different in practice to each group. The main reason for this difference in views of nationhood comes from the fact that one group had lived as slaves and the other had not. The economic and agricultural policies of Dessalines, and of the leaders who followed him, were based on the need to create a strong economic state capable of maintaining a strong military. For the elite leaders of Haiti, a strong military that could ward off the French or other colonial powers and ensure independence was what made a free state: they tied their notion of freedom to independence from other powers.
The Haitian peasantry, however, tied their notion of freedom to the land. Because of the mountainous terrain, Haitian slaves had been able to cultivate their own small tracts of land, and freedom for them meant the ability to work their own land within a subsistence economy. Because of the leaders' ambitions, a system of coerced plantation agriculture emerged instead. Furthermore, while all Haitians desired a black republic, the place of African cultural practices was a point of contention. Much of the population wanted to maintain its African heritage, a natural corollary of wanting a black republic, whereas the elites typically tried to prove the sophistication of Haitians through literature; some authors wrote that the barbarism of Africa must be expelled even while African roots were maintained.
Other authors tried to prove the civility of the elite Haitians by arguing that blacks were capable of establishing and running a government, changing and augmenting the history of the revolution to favor the mulatto and black elites rather than the bands of slaves. In pursuing freedom and independence, moreover, the elites failed to provide the civil society that the Haitian masses desired. The peasants wanted not only freedom on the land but also civil rights, such as voting and political participation, as well as access to education. The state failed to provide those basic rights.
The state was essentially run by the military, which made it very difficult for the Haitian population to participate in democratic processes. Most importantly, the state failed to provide the access to education that a nation of former slaves needed: it was nearly impossible for the former slaves to participate effectively in public life because they lacked the basic literacy that French colonial rule had intentionally denied them. Through these differing views of Haitian nationalism and freedom, the elites created a state that greatly favored them over the mass of the population, the peasantry.
After the coup d'état against Dessalines, the two main conspirators divided the country into two rival regimes. Christophe created the authoritarian State of Haiti in the north, while Pétion, a gens de couleur, helped establish the Republic of Haiti in the south. Christophe attempted to maintain a strict system of labor and agricultural production akin to the former plantations. Although, strictly speaking, he did not re-establish slavery, he imposed a semi-feudal system, fumage, in which every able-bodied man was required to work on plantations (similar to latifundios) to produce goods for the fledgling country. His method, though undoubtedly oppressive, produced the greater revenue of the two governments.
By contrast, Pétion broke up the former colonial estates and parceled out the land into small holdings. In Pétion's south, the gens de couleur minority led the government and, fearing the loss of popular support, sought to assuage class tensions through land redistribution. Because of its weak international position and its labor policies (most peasants lived within a subsistence economy), Pétion's government was perpetually on the brink of bankruptcy. Yet for most of its existence it was one of the most liberal and tolerant governments Haiti has ever had. In 1815, at a key period of Bolívar's fight for Venezuelan independence, Pétion gave the Venezuelan leader asylum and provided him with soldiers and substantial material support. His government also suffered the fewest internal military skirmishes, despite its continuous conflicts with Christophe's northern kingdom. In 1816, however, finding the burden of the Senate intolerable, Pétion suspended the legislature and made himself President for Life. Not long after, he died of yellow fever, and his assistant Jean-Pierre Boyer replaced him.
In this period, the eastern part of the island rose against the new powers. General Juan Sánchez Ramírez proclaimed independence from France, on the grounds that France had broken the Treaties of Bâle by attacking Spain, and prohibited commerce with Haiti. At the battle of Palo Hincado (7 November 1808), the remaining French forces were defeated by Spanish-creole insurrectionists, and on 9 July 1809 the colony of Santo Domingo was reborn. The government placed itself under the control of Spain, earning the period the nickname "España Boba" ("Foolish Spain").
In 1811, Christophe proclaimed himself King Henri I in the north and commissioned several extraordinary buildings; he even created a nobility class in the fashion of European monarchies. In 1820, however, weakened by illness and by decreasing support for his authoritarian regime, he killed himself with a silver bullet rather than face a coup d'état. Immediately after, Pétion's successor, Boyer, reunited Haiti through diplomatic tactics and ruled as president until his overthrow in 1843.
Almost two years after Boyer had consolidated power in the west, in 1821, Santo Domingo declared independence from Spain and asked Simón Bolívar for inclusion in Gran Colombia. Boyer, however, responding to a party in the east that preferred Haiti over Colombia, occupied the ex-Spanish colony in January 1822, encountering no military resistance. He thus achieved the unity of the island that Toussaint Louverture had accomplished only briefly in 1801. Boyer's occupation of the Spanish side also answered internal struggles among Christophe's former generals, to whom Boyer gave extensive powers and lands in the east. The occupation, however, pitted the Spanish white elite against the iron-fisted Haitian administration and prompted the emigration of many wealthy white families. The entire island remained under Haitian rule until 1844, when a nationalist group in the east called La Trinitaria led a revolt that partitioned the island into Haiti in the west and the Dominican Republic in the east, along what appears to be a riverine territorial divide dating from the pre-contact period.
From 1824 to 1826, while the island was under one government, Boyer promoted the largest single free-black immigration from the United States, in which more than 6,000 immigrants settled in different parts of the island. Their descendants live throughout the island today, with the largest number residing in Samaná, a peninsula on the Dominican side. From the government's perspective, the intention of the immigration was to help establish commercial and diplomatic relationships with the US, and to increase the number of skilled and agricultural workers in Haiti.
In exchange for diplomatic recognition from France, Boyer was forced to pay a huge indemnity for the loss of French property during the revolution. To pay for this, he had to float loans in France, putting Haiti into a state of debt. Boyer attempted to enforce production through the "Code Rural", enacted in 1826, but peasant freeholders, mostly former revolutionary soldiers, had no intention of returning to the forced labor they had fought to escape. By 1840, Haiti had ceased to export sugar entirely, although large amounts continued to be grown for local consumption as "taffia", a raw rum. However, Haiti continued to export coffee, which required little cultivation and grew semi-wild.
The 1842 Cap-Haïtien earthquake destroyed the city, and the Sans-Souci Palace, killing 10,000 people. This was the third major earthquake to hit Western Hispaniola following the 1751 and 1770 Port-au-Prince earthquakes, and the last until the devastating earthquake of 2010.
Haiti went through a long period of oppression and instability following the presidency of Jean-Pierre Boyer. Likewise, in the Dominican Republic, a succession of strongmen ruled harshly during the last half of the 19th century, crushed frequent uprisings, and repelled Haitian invasions.
In 1843, a revolt led by Charles Rivière-Hérard overthrew Boyer and established a brief parliamentary rule under the Constitution of 1843. Revolts soon broke out and the country descended into near chaos, with a series of transient presidents, until March 1847, when General Faustin Soulouque, a former slave who had fought in the rebellion of 1791, became president. He purged the military high command, established a secret police, and eliminated mulatto opponents. In August 1849, he grandiosely proclaimed himself Haiti's second emperor, Faustin I. Soulouque's expansive ambitions led him to mount several invasions of the Dominican Republic, whose people derided the new emperor as a "rey de farsa" (a farce of a king). He considered the white and mulatto rulers of the Dominican Republic his "natural" enemies and believed he could never consolidate his rule without this conquest, for his reign had been founded on domination and would stand only by it. When Soulouque led his first invasion into the Dominican Republic in 1849, President Buenaventura Báez declared war on Haiti. The invasion included two marine campaigns. Soulouque launched his last campaign in December 1855; in January of the following year, a Haitian contingent of 6,000 soldiers was decisively defeated at the border town of Ouanaminthe. More than 1,000 men were killed, and many more were wounded or declared missing on the way back to the capital. The failure of that expedition damaged Soulouque's standing at home: when he rode into Port-au-Prince with what remained of his army, he was loudly cursed by women who had lost sons, brothers, and husbands in the war. Four years later, he was deposed by General Fabre Geffrard, styled the Duke of Tabara.
Geffrard's military government held office until 1867, and he encouraged a successful policy of national reconciliation. In 1860, he reached an agreement with the Vatican, reintroducing official Roman Catholic institutions, including schools, to the nation. In 1867 an attempt was made to establish a constitutional government, but successive presidents Sylvain Salnave and Nissage Saget were overthrown in 1869 and 1874 respectively. A more workable constitution was introduced under Michel Domingue in 1874, leading to a long period of democratic peace and development for Haiti. The debt to France was finally repaid in 1879, and Michel Domingue's government peacefully transferred power to Lysius Salomon, one of Haiti's abler leaders. Monetary reform and a cultural renaissance ensued with a flowering of Haitian art.
The last two decades of the 19th century were also marked by the development of a Haitian intellectual culture. Major works of history were published in 1847 and 1865. Haitian intellectuals, led by Louis-Joseph Janvier and Anténor Firmin, engaged in a war of letters against a tide of racism and Social Darwinism that emerged during this period.
The Constitution of 1867 saw peaceful and progressive transitions in government that did much to improve the economy and stability of the Haitian nation and the condition of its people. Constitutional government restored the faith of the Haitian people in legal institutions. The development of industrial sugar and rum industries near Port-au-Prince made Haiti, for a while, a model for economic growth in Latin American countries.
This period of relative stability and prosperity ended in 1911, when revolution broke out and the country slid once again into disorder and debt.
From 1911 to 1915, there were six different presidents, each of whom was killed or forced into exile. The revolutionary armies were formed by "cacos", peasant brigands from the mountains of the north, along the porous Dominican border, who were enlisted by rival political factions with promises of money to be paid after a successful revolution and an opportunity to plunder.
The United States was particularly apprehensive about the role of the German community in Haiti (approximately 200 people in 1910), who wielded a disproportionate amount of economic power. Germans controlled about 80% of the country's international commerce; they also owned and operated utilities in Cap Haïtien and Port-au-Prince, the main wharf and a tramway in the capital, and a railroad serving the Plaine du Cul-de-Sac.
The German community proved more willing to integrate into Haitian society than any other group of white foreigners, including the French. A number married into the nation's most prominent mulatto families, bypassing the constitutional prohibition against foreign land-ownership. They also served as the principal financiers of the nation's innumerable revolutions, floating loans at high interest rates to competing political factions.
In an effort to limit German influence, in 1910–11, the US State Department backed a consortium of American investors, assembled by the National City Bank of New York, in acquiring control of the "Banque Nationale d'Haïti", the nation's only commercial bank and the government treasury.
In February 1915, Vilbrun Guillaume Sam established a dictatorship, but in July, facing a new revolt, he massacred 167 political prisoners, all of whom were from elite families, and was lynched by a mob in Port-au-Prince.
In 1915 the United States, responding to complaints to President Woodrow Wilson from American banks to which Haiti was deeply in debt, occupied the country. The occupation of Haiti lasted until 1934. The US occupation was resented by Haitians as a loss of sovereignty and there were revolts against US forces. Reforms were carried out despite this.
Under the supervision of the United States Marines, the Haitian National Assembly elected Philippe Sudré Dartiguenave president. He signed a treaty that made Haiti a "de jure" US protectorate, with American officials assuming control over the Financial Advisory, Customs Receivership, the Constabulary, the Public Works Service, and the Public Health Service for a period of ten years. The principal instrument of American authority was the newly created "Gendarmerie d'Haïti", commanded by American officers. In 1917, at the demand of US officials, the National Assembly was dissolved, and officials were designated to write a new constitution, which was largely dictated by officials in the US State Department and US Navy Department. Franklin D. Roosevelt, Assistant Secretary of the Navy in the Wilson administration, claimed to have personally written the new constitution. This document abolished the prohibition on foreign ownership of land – the most essential component of Haitian law. When the newly elected National Assembly refused to pass this document and drafted one of its own preserving this prohibition, it was forcibly dissolved by "Gendarmerie" commandant Smedley Butler. This constitution was approved by a plebiscite in 1919, in which less than 5% of the population voted. The US State Department authorized this plebiscite presuming that "the people casting ballots would be 97% illiterate, ignorant in most cases of what they were voting for."
The Marines and "Gendarmerie" initiated an extensive road-building program to enhance their military effectiveness and open the country to US investment. Lacking any source of adequate funds, they revived an 1864 Haitian law, discovered by Butler, requiring peasants to perform labor on local roads in lieu of paying a road tax. This system, known as the corvée, originated in the unpaid labor that French peasants provided to their feudal lords. In 1915, few of Haiti's roads outside the towns were usable by automobile. By 1918, an extensive network of roads had been built or repaired through the corvée system, including a road linking Port-au-Prince to Cap-Haïtien. However, Haitians forced to work in the corvée labor-gangs, frequently dragged from their homes and harassed by armed guards, received few immediate benefits and saw this system of forced labor as a return to slavery at the hands of white men.
In 1919, a new "caco" uprising began, led by Charlemagne Péralte, vowing to 'drive the invaders into the sea and free Haiti.' The Cacos attacked Port-au-Prince in October but were driven back with heavy casualties. Afterwards, a Creole-speaking American "Gendarmerie" officer and two US marines infiltrated Péralte's camp, killing him and photographing his corpse in an attempt to demoralize the rebels. Leadership of the rebellion passed to Benoît Batraville, a Caco chieftain from Artibonite, who also launched an assault on the capital. His death in 1920 marked the end of hostilities. During Senate hearings in 1921, the commandant of the Marine Corps reported that, in the twenty months of active resistance, 2,250 Haitians had been killed. However, in a report to the Secretary of the Navy he reported the death toll as being 3,250. Haitian historians have estimated the true number was much higher; one suggested, "the total number of battle victims and casualties of repression and consequences of the war might have reached, by the end of the pacification period, four or five times that – somewhere in the neighborhood of 15,000 persons."
In 1922, Dartiguenave was replaced by Louis Borno, who ruled without a legislature until 1930. That same year, General John H. Russell, Jr., was appointed High Commissioner. The Borno-Russell dictatorship oversaw the expansion of the economy, building new roads, establishing an automatic telephone exchange, modernizing the nation's port facilities, and establishing a public health service. Sisal was introduced to Haiti, and sugar and cotton became significant exports. However, efforts to develop commercial agriculture had limited success, in part because much of Haiti's labor force was employed at seasonal work in the more established sugar industries of Cuba and the Dominican Republic. An estimated 30,000–40,000 Haitian laborers, known as "braceros", went annually to the Oriente Province of Cuba between 1913 and 1931. Most Haitians continued to resent the loss of sovereignty. At the forefront of opposition among the educated elite was "L'Union Patriotique", which established ties with opponents of the occupation in the US itself, in particular the National Association for the Advancement of Colored People (NAACP).
The Great Depression decimated the prices of Haiti's exports and destroyed the tenuous gains of the previous decade. In December 1929, Marines in Les Cayes killed ten Haitians during a march to protest local economic conditions. This led Herbert Hoover to appoint two commissions, including one headed by William Cameron Forbes, a former US governor of the Philippines, which criticized the exclusion of Haitians from positions of authority in the government and constabulary, now known as the "Garde d'Haïti". In 1930, Sténio Vincent, a long-time critic of the occupation, was elected president, and the US began to withdraw its forces. The withdrawal was completed in 1934, under US President Franklin D. Roosevelt's "Good Neighbor" policy. The US retained control of Haiti's external finances until 1947. All three rulers during the occupation came from the country's small mulatto minority. At the same time, many in the growing black professional classes departed from the traditional veneration of Haiti's French cultural heritage and emphasized the nation's African roots, most notably ethnologist Jean Price-Mars and the journal "Les Griots", edited by Dr. François Duvalier.
The transition government inherited better infrastructure, public health, education, and agricultural development, as well as a democratic system confirmed by the fully democratic elections of 1930. The Garde was a new kind of military institution in Haiti: a force manned overwhelmingly by blacks, with a United States-trained black commander, Colonel Démosthènes Pétrus Calixte. Most of the Garde's officers, however, were mulattoes. The Garde was a national organization; it departed from the regionalism that had characterized most of Haiti's previous armies. In theory, its charge was apolitical: to maintain internal order while supporting a popularly elected government. The Garde initially adhered to this role.
President Vincent took advantage of the comparative national stability, which was being maintained by a professionalized military, to gain absolute power. A plebiscite permitted the transfer of all authority in economic matters from the legislature to the executive, but Vincent was not content with this expansion of his power. In 1935 he forced through the legislature a new constitution, which was also approved by plebiscite. The constitution praised Vincent, and it granted the executive sweeping powers to dissolve the legislature at will, to reorganize the judiciary, to appoint ten of twenty-one senators (and to recommend the remaining eleven to the lower house), and to rule by decree when the legislature was not in session. Although Vincent implemented some improvements in infrastructure and services, he brutally repressed his opposition, censored the press, and governed largely to benefit himself and a clique of merchants and corrupt military officers.
Under Calixte the majority of Garde personnel had adhered to the doctrine of political nonintervention that their Marine Corps trainers had stressed. Over time, however, Vincent and Dominican dictator Rafael Leónidas Trujillo Molina sought to buy adherents among the ranks. Trujillo, determined to expand his influence over all of Hispaniola, in October 1937 ordered the indiscriminate butchery by the Dominican army of an estimated 14,000 to 40,000 Haitians on the Dominican side of the Massacre River. Some observers claim that Trujillo supported an abortive coup attempt by young Garde officers in December 1937. Vincent dismissed Calixte as commander and sent him abroad, where he eventually accepted a commission in the Dominican military as a reward for his efforts while on Trujillo's payroll. The attempted coup led Vincent to purge the officer corps of all members suspected of disloyalty, marking the end of the apolitical military.
In 1941 Vincent showed every intention of standing for a third term as president, but after almost a decade of disengagement, the United States made it known that it would oppose such an extension. Vincent accommodated the Roosevelt administration and handed power over to Elie Lescot.
Lescot was a mulatto who had served in numerous government posts. He was competent and forceful, and many considered him a sterling candidate for the presidency, despite his elitist background. Like the majority of previous Haitian presidents, however, he failed to live up to his potential. His tenure paralleled that of Vincent in many ways. Lescot declared himself commander in chief of the military, and power resided in a clique that ruled with the tacit support of the Garde. He repressed his opponents, censored the press, and compelled the legislature to grant him extensive powers. He handled all budget matters without legislative sanction and filled legislative vacancies without calling elections. Lescot commonly claimed that Haiti's declared state of war against the Axis powers during World War II justified his repressive actions. Haiti, however, played no role in the war except for supplying the United States with raw materials and serving as a base for a United States Coast Guard detachment.
Aside from his authoritarian tendencies, Lescot had another flaw: his relationship with Rafael Trujillo. While serving as Haitian ambassador to the Dominican Republic, Lescot fell under the sway of Trujillo's influence and wealth. In fact, it was Trujillo's money that reportedly bought most of the legislative votes that brought Lescot to power. Their clandestine association persisted until 1943, when the two leaders parted ways for unknown reasons. Trujillo later made public all his correspondence with the Haitian leader. The move undermined Lescot's already dubious popular support.
In January 1946, events came to a head when Lescot jailed the Marxist editors of a journal called "La Ruche" (The Beehive). This action precipitated student strikes and protests by government workers, teachers, and shopkeepers in the capital and provincial cities. In addition, Lescot's mulatto-dominated rule had alienated the predominantly black Garde. His position became untenable, and he resigned on 11 January. Radio announcements declared that the Garde had assumed power, which it would administer through a three-member junta.
The Revolution of 1946 was a novel development in Haiti's history, as the Garde assumed power as an institution, not as the instrument of a particular commander. The members of the junta, known as the Military Executive Committee (Comité Exécutif Militaire), were Garde commander Colonel Franck Lavaud, Major Antoine Levelt, and Major Paul E. Magloire, commander of the Presidential Guard. All three understood Haiti's traditional way of exercising power, but they lacked a thorough understanding of what would be required to make the transition to an elected civilian government. Upon taking power, the junta pledged to hold free elections. The junta also explored other options, but public clamor, which included public demonstrations in support of potential candidates, eventually forced the officers to make good on their promise.
Haiti elected its National Assembly in May 1946. The Assembly set 16 August 1946, as the date on which it would select a president. The leading candidates for the office—all of whom were black—were Dumarsais Estimé, a former school teacher, assembly member, and cabinet minister under Vincent; Félix d'Orléans Juste Constant, leader of the Haitian Communist Party (Parti Communiste d'Haïti—PCH); and former Garde commander Démosthènes Pétrus Calixte, who stood as the candidate of a progressive coalition that included the Worker Peasant Movement (Mouvement Ouvrier Paysan—MOP). MOP chose to endorse Calixte, instead of a candidate from its own ranks, because the party's leader, Daniel Fignolé, was only thirty-three years old—too young to stand for the nation's highest office. Estimé, politically the most moderate of the three, drew support from the black population in the north, as well as from the emerging black middle class. The leaders of the military, who would not countenance the election of Juste Constant and who reacted warily to the populist Fignolé, also considered Estimé the safest candidate. After two rounds of polling, legislators gave Estimé the presidency.
Estimé's election represented a break with Haiti's political tradition. Although he was reputed to have received support from commanders of the Garde, Estimé was a civilian. Of humble origins, he was passionately anti-elitist and therefore generally antimulatto. He demonstrated, at least initially, a genuine concern for the welfare of the people. Operating under a new constitution that went into effect in November 1946, Estimé proposed, but never secured passage of, Haiti's first social-security legislation. He did, however, expand the school system, encourage the establishment of rural cooperatives, raise the salaries of civil servants, and increase the representation of middle-class and lower-class blacks in the public sector. He also attempted to gain the favor of the Garde—renamed the Haitian Army (Armée d'Haïti) in March 1947—by promoting Lavaud to brigadier general and by seeking United States military assistance.
Estimé eventually fell victim to two of the time-honored pitfalls of Haitian rule: elite intrigue and personal ambition. The elite had a number of grievances against Estimé. Not only had he largely excluded them from the often lucrative levers of government, but he also enacted the country's first income tax, fostered the growth of labor unions, and suggested that vodou be considered as a religion equivalent to Roman Catholicism—a notion that the Europeanized elite abhorred. Lacking direct influence in Haitian affairs, the elite resorted to clandestine lobbying among the officer corps. Their efforts, in combination with deteriorating domestic conditions, led to a coup in May 1950.
To be sure, Estimé had hastened his own demise in several ways. His nationalization of the Standard Fruit banana concession sharply reduced the firm's revenues. He alienated workers by requiring them to invest between 10 percent and 15 percent of their salaries in national-defense bonds. The president sealed his fate by attempting to manipulate the constitution in order to extend his term in office. Seizing on this action and the popular unrest it engendered, the army forced the president to resign on 10 May 1950. The same junta that had assumed power after the fall of Lescot reinstalled itself. An army escort conducted Estimé from the National Palace and into exile in Jamaica. The events of May 1950 made an impression upon the deposed minister of labor, François Duvalier. The lesson that Duvalier drew from Estimé's ouster was that the military could not be trusted. It was a lesson that he would act upon when he gained power.
The power balance within the junta shifted between 1946 and 1950. Lavaud was the preeminent member at the time of the first coup, but Magloire, now a colonel, dominated after Estimé's overthrow. When Haiti announced that its first direct elections (all men twenty-one or over were allowed to vote) would be held on 8 October 1950, Magloire resigned from the junta and declared himself a candidate for president. In contrast to the chaotic political climate of 1946, the campaign of 1950 proceeded under the implicit understanding that only a strong candidate backed by both the army and the elite would be able to take power. Facing only token opposition, Magloire won the election and assumed office on 6 December.
Magloire restored the elite to prominence. The business community and the government benefited from favorable economic conditions until Hurricane Hazel hit the island in 1954. Haiti made some improvements to its infrastructure, but most of these were financed largely by foreign loans. By Haitian standards, Magloire's rule was firm, but not harsh: he jailed political opponents, including Fignolé, and shut down their presses when their protests grew too strident, but he allowed labor unions to function, although they were not permitted to strike. It was in the arena of corruption, however, that Magloire overstepped traditional bounds. The president controlled the sisal, cement, and soap monopolies. He and other officials built imposing mansions. The injection of international hurricane relief funds into an already corrupt system boosted graft to levels that disillusioned all Haitians. To make matters worse, Magloire followed in the footsteps of many previous presidents by disputing the termination date of his stay in office. Politicians, labor leaders, and their followers flocked to the streets in May 1956 to protest Magloire's failure to step down. Although Magloire declared martial law, a general strike essentially shut down Port-au-Prince. Again like many before him, Magloire fled to Jamaica, leaving the army with the task of restoring order.
The period between the fall of Magloire and the election of Duvalier in September 1957 was a chaotic one, even by Haitian standards. Three provisional presidents held office during this interval; one resigned and the army deposed the other two, Franck Sylvain and Fignolé. Duvalier is said to have engaged actively in the behind-the-scenes intrigue that helped him to emerge as the presidential candidate that the military favored. The military went on to guide the campaign and the elections in a way that gave Duvalier every possible advantage. Most political actors perceived Duvalier—a medical doctor who had served as a rural administrator of a United States-funded anti-yaws campaign before entering the cabinet under Estimé—as an honest and fairly unassuming leader without a strong ideological motivation or program. When elections were finally organized, this time under terms of universal suffrage (both men and women now had the vote), Duvalier, a black, painted himself as the legitimate heir to Estimé. This approach was enhanced by the fact that Duvalier's only viable opponent, Louis Déjoie, was a mulatto and the scion of a prominent family. Duvalier scored a decisive victory at the polls. His followers took two-thirds of the legislature's lower house and all of the seats in the Senate.
A former Minister of Health who had earned a reputation as a humanitarian while serving as an administrator in a U.S.-funded anti-yaws campaign, François Duvalier (known as "Papa Doc") soon established another dictatorship. His regime is regarded as one of the most repressive and corrupt of modern times, combining violence against political opponents with exploitation of Vodou to instill fear in the majority of the population. Duvalier's paramilitary police, officially the Volunteers for National Security (Volontaires de la Sécurité Nationale – VSN) but more commonly known as the Tonton Macoutes, named for a Vodou monster, carried out political murders, beatings, and intimidation. An estimated 30,000 Haitians were killed by his government. Duvalier employed rape as a political tool to silence political opposition. Incorporating many "houngans" into the ranks of the Macoutes, his public recognition of Vodou and its practitioners and his private adherence to Vodou ritual, combined with his reputed private knowledge of magic and sorcery, enhanced his popular persona among the common people and served as a peculiar form of legitimization.
Duvalier's policies, designed to end the dominance of the mulatto elite over the nation's economic and political life, led to massive emigration of educated people, deepening Haiti's economic and social problems. However, Duvalier appealed to the black middle class of which he was a member by introducing public works into middle-class neighborhoods that previously had been unable to have paved roads, running water, or modern sewage systems. In 1964, Duvalier proclaimed himself "President for Life".
The Kennedy administration suspended aid in 1961, after allegations that Duvalier had pocketed aid money and intended to use a Marine Corps mission to strengthen the Macoutes. Duvalier also clashed with Dominican President Juan Bosch in 1963, after Bosch provided aid and asylum to Haitian exiles working to overthrow his regime. He ordered the Presidential Guard to occupy the Dominican chancery in Pétion-Ville to apprehend an officer involved in a plot to kidnap his children, leading Bosch to publicly threaten to invade Haiti. However, the Dominican army, which distrusted Bosch's leftist leanings, expressed little support for an invasion, and the dispute was settled by OAS emissaries.
In 1971, Papa Doc entered into a 99-year contract with Don Pierson, representing Dupont Caribbean Inc. of Texas, for a free-port project on the old buccaneer stronghold of Tortuga island, off the north coast of the main Haitian island of Hispaniola.
On Duvalier's death in April 1971, power passed to his 19-year-old son Jean-Claude Duvalier (known as "Baby Doc"). Under Jean-Claude Duvalier, Haiti's economic and political condition continued to decline, although some of the more fearsome elements of his father's regime were abolished. Foreign officials and observers also seemed more tolerant toward Baby Doc, in areas such as human-rights monitoring, and foreign countries were more generous to him with economic assistance. The United States restored its aid program in 1971. In 1974, Baby Doc expropriated the Freeport Tortuga project and this caused the venture to collapse. Content to leave administrative matters in the hands of his mother, Simone Ovid Duvalier, while living as a playboy, Jean-Claude enriched himself through a series of fraudulent schemes. Much of the Duvaliers' wealth, amounting to hundreds of millions of dollars over the years, came from the Régie du Tabac (Tobacco Administration), a tobacco monopoly established by Estimé, which expanded to include the proceeds from all government enterprises and served as a slush fund for which no balance sheets were ever kept. His marriage, in 1980, to a beautiful mulatto divorcée, Michèle Bennett, in a $3 million ceremony, provoked widespread opposition, as it was seen as a betrayal of his father's antipathy towards the mulatto elite. At the request of Michèle, Papa Doc's widow Simone was expelled from Haiti. Baby Doc's kleptocracy left the regime vulnerable to unanticipated crises, exacerbated by endemic poverty, most notably the epidemic of African swine fever virus—which, at the insistence of USAID officials, led to the slaughter of the creole pigs, the principal source of income for most Haitians—and the widely publicized outbreak of AIDS in the early 1980s.
Widespread discontent in Haiti began in 1983, when Pope John Paul II condemned the regime during a visit, finally provoking a rebellion, and in February 1986, after months of disorder, the army forced Duvalier to resign and go into exile.
From 1986 to early 1988, Haiti was ruled by a provisional military government under General Namphy. In March 1987, a new Constitution was ratified, providing for an elected bicameral parliament, an elected president, and a prime minister, cabinet, ministers, and supreme court appointed by the president with parliament's consent. The Constitution also provided for political decentralization through the election of mayors and administrative bodies responsible for local government. The November 1987 elections were cancelled after troops massacred 30–300 voters on election day. Jimmy Carter later wrote that "Citizens who lined up to vote were mowed down by fusillades of terrorists' bullets. Military leaders, who had either orchestrated or condoned the murders, moved in to cancel the election and retain control of the Government." The election was followed several months later by the January 1988 presidential election, which was boycotted by almost all the previous candidates and saw turnout of just 4%.
The 1988 elections led to Professor Leslie Manigat becoming president, but three months later he too was ousted by the military. Further instability ensued, with several massacres, including the St Jean Bosco massacre, in which the church of Jean-Bertrand Aristide was attacked and burned down. During this period, the Haitian National Intelligence Service (SIN), which had been set up and financed in the 1980s by the Central Intelligence Agency as part of the war on drugs, participated in drug trafficking and political violence.
In December 1990, Jean-Bertrand Aristide, a liberation theology Roman Catholic (Salesian) priest, won 67% of the vote in elections that international observers deemed largely free and fair.
Aristide's radical populist policies and the violence of his bands of supporters alarmed many of the country's elite, and, in September 1991, he was overthrown in the 1991 Haitian coup d'état, which brought General Raoul Cédras to power. The coup saw hundreds killed, and Aristide was forced into exile, his life saved by international diplomatic intervention.
An estimated 3,000–5,000 Haitians were killed during the period of military rule. The coup created a large-scale exodus of refugees to the United States. The United States Coast Guard interdicted (in many cases, rescued) a total of 41,342 Haitians during 1991 and 1992. Most were denied entry to the United States and repatriated to Haiti. Aristide has accused the United States of backing the 1991 coup. In response to the coup, the United Nations Security Council passed Resolution 841, imposing international sanctions and an arms embargo on Haiti.
On 16 February 1993, the ferry "Neptune" sank, drowning an estimated 700 passengers. This was the worst ferry disaster in Haitian history.
The military regime governed Haiti until 1994, and according to some sources included drug trafficking led by Chief of National Police Michel François. Various initiatives to end the political crisis through the peaceful restoration of the constitutionally elected government failed. In July 1994, as repression mounted in Haiti and a civilian human rights monitoring mission was expelled from the country, the United Nations Security Council adopted United Nations Security Council Resolution 940, which authorized member states to use all necessary means to facilitate the departure of Haiti's military leadership and to restore Haiti's constitutionally elected government to power.
In mid-September 1994, with U.S. troops prepared to enter Haiti by force for Operation Uphold Democracy, President Bill Clinton dispatched a negotiating team led by former President Jimmy Carter to persuade the authorities to step aside and allow for the return of constitutional rule. With intervening troops already airborne, Cédras and other top leaders agreed to step down. In October, Aristide was able to return. The general election of June 1995 saw Aristide's coalition, the Lavalas (Waterfall) Political Organization, gain a sweeping victory, and René Préval, a prominent Aristide political ally, was elected President with 88% of the vote. When Aristide's term ended in February 1996, it marked Haiti's first ever transition between two democratically elected presidents.
In late 1996, Aristide broke with Préval and formed a new political party, the Lavalas Family (Fanmi Lavalas, FL), which won elections in April 1997 for one-third of the Senate and local assemblies, but these results were not accepted by the government. The split between Aristide and Préval produced a dangerous political deadlock, and the government was unable to organize the local and parliamentary elections due in late 1998. In January 1999, Préval dismissed legislators whose terms had expired – the entire Chamber of Deputies and all but nine members of the Senate, and Préval then ruled by decree.
In May 2000, the legislative elections for the Chamber of Deputies and two-thirds of the Senate took place. The election drew a voter turnout of more than 60%, and the FL won a virtual sweep. However, the elections were marred by controversy in the Senate race over the calculation of whether Senate candidates had achieved the majority required to avoid a run-off election (in Haiti, seats where no candidate wins an absolute majority of votes cast must go to a second-round run-off election). The validity of the Electoral Council's post-ballot calculations of whether a majority had been attained was disputed. The Organization of American States complained about the calculation and declined to observe the July run-off elections. The opposition parties, regrouped in the Democratic Convergence (Convergence Démocratique, CD), demanded that the elections be annulled and that Préval stand down and be replaced by a provisional government. In the meantime, the opposition announced it would boycott the November presidential and senatorial elections. Haiti's main aid donors threatened to cut off aid. At the November 2000 elections, boycotted by the opposition, Aristide was again elected president, with more than 90% of the vote, on a turnout of around 50% according to international observers. The opposition refused to accept the result or to recognize Aristide as president.
Allegations emerged of drug trafficking reaching into the upper echelons of government, as it had done under the military regimes of the 1980s and early 1990s (illegal drug trade in Haiti). Canadian police arrested Oriel Jean, Aristide's security chief and one of his most trusted friends, for money laundering. Beaudoin Ketant, a notorious international drug trafficker, Aristide's close partner, and his daughter's godfather, claimed that Aristide "turned the country into a narco-country; it's a one-man show; you either pay (Aristide) or you die".
Aristide spent years negotiating with the Convergence Démocratique on new elections, but the Convergence's inability to develop a sufficient electoral base made elections unattractive, and it rejected every deal offered, preferring to call for a US invasion to topple Aristide.
Anti-Aristide protests in January 2004 led to violent clashes in Port-au-Prince, causing several deaths. In February, a revolt broke out in the city of Gonaïves, which was soon under rebel control. The rebellion then began to spread, and Cap-Haïtien, Haiti's second-largest city, was captured. A mediation team of diplomats presented a plan to reduce Aristide's power while allowing him to remain in office until the end of his constitutional term. Although Aristide accepted the plan, it was rejected by the opposition, which mostly consisted of Haitian businessmen and former members of the army (who sought to reinstate the military following Aristide's disbandment of it).
On 29 February 2004, with rebel contingents marching towards Port-au-Prince, Aristide departed from Haiti. Aristide insists that he was essentially kidnapped by the U.S., while the U.S. State Department maintains that he resigned from office. Aristide and his wife left Haiti on an American airplane, escorted by American diplomats and military personnel, and were flown directly to Bangui, capital of the Central African Republic, where he stayed for the following two weeks before seeking asylum in a less remote location.
Though this has never been proven, many observers in the press and academia believe that the US has not provided convincing answers to several of the more suspicious details surrounding the coup, such as the circumstances under which it obtained Aristide's purported letter of "resignation", which, translated from Kreyòl, may not actually have read as a resignation.
Aristide has accused the U.S. of deposing him in concert with the Haitian opposition. In a 2006 interview, he said the U.S. went back on their word regarding compromises he made with them over privatization of enterprises to ensure that part of the profits would go to the Haitian people and then "relied on a disinformation campaign" to discredit him.
Political organizations and writers, as well as Aristide himself, have suggested that the rebellion was in fact a foreign-controlled coup d'état. Caricom, which had been backing the peace deal, accused the United States, France, and the international community of failing in Haiti because they allegedly allowed a controversially elected leader to be violently forced out of office. The international community stated that the crisis was of Aristide's making and that he was not acting in the best interests of his country, arguing that his removal was necessary for future stability in the island nation.
Some investigators claimed to have discovered extensive embezzlement, corruption, and money laundering by Aristide. It was claimed Aristide had stolen tens of millions of dollars from the country, though specific bank account documents proving this have yet to be presented. None of the allegations about Aristide's involvement in embezzlement, corruption, or money laundering schemes could be proven. The criminal court case brought against Aristide was quietly shelved, though various members of his Lavalas party languished for years in prison without charge or trial due to similar accusations. The Haitian government suspended the suit against Aristide on 30 June 2006 to prevent it from being thrown out for want of prosecution.
The government was taken over by Supreme Court Chief Justice Boniface Alexandre. Alexandre petitioned the United Nations Security Council for the intervention of an international peacekeeping force. The Security Council passed a resolution the same day "[t]aking note of the resignation of Jean-Bertrand Aristide as President of Haiti and the swearing-in of President Boniface Alexandre as the acting President of Haiti in accordance with the Constitution of Haiti" and authorized such a mission. As a vanguard of the official U.N. force, a force of about 1,000 U.S. Marines arrived in Haiti within the day, and Canadian and French troops arrived the next morning; the United Nations indicated it would send a team to assess the situation within days. These international troops have been criticized for cooperating with rebel forces, refusing to disarm them, and integrating former military and death-squad (FRAPH) members into the re-militarized Haitian National Police force following the coup.
On 1 June 2004, the peacekeeping mission was passed to MINUSTAH, a 7,000-strong force led by Brazil and backed by Argentina, Chile, Jordan, Morocco, Nepal, Peru, the Philippines, Spain, Sri Lanka, and Uruguay.
Brazilian forces led the United Nations peacekeeping troops in Haiti, which included deployments from the United States, France, Canada, and Chile. These peacekeeping troops were part of the ongoing MINUSTAH operation.
In November 2004, the University of Miami School of Law carried out a Human Rights Investigation in Haiti and documented serious human rights abuses. It stated that "summary executions are a police tactic." It also suggested a "disturbing pattern."
In March 2004, the Haiti Commission of Inquiry, headed by former US attorney-general Ramsey Clark, published its findings: "Noting that 200 US special forces had travelled to the Dominican Republic for “military exercises” in February 2003, the commission accused the US of arming and training Haitian rebels there. With permission from the Dominican president, Hipólito Mejía, US forces trained near the border, in an area used by former soldiers of the disbanded Haitian army to launch attacks on Haitian state property."
On 15 October 2005, Brazil called for more troops to be sent due to the worsening situation in the country.
After Aristide's overthrow, the violence in Haiti continued, despite the presence of peacekeepers. Clashes between police and Fanmi Lavalas supporters were common, and peacekeeping forces were accused of conducting a massacre against the residents of Cité Soleil in July 2005. Several of the protests resulted in violence and deaths.
In the midst of the ongoing controversy and violence, however, the interim government planned legislative and executive elections. After being postponed several times, these were held in February 2006. The elections were won by René Préval, who had a strong following among the poor, with 51% of the votes. Préval took office in May 2006.
In the spring of 2008, Haitians demonstrated against rising food prices. In some instances, the few main roads on the island were blocked with burning tires and the airport at Port-au-Prince was closed. Protests and demonstrations by Fanmi Lavalas continued in 2009.
On 12 January 2010, Haiti suffered a devastating earthquake, magnitude 7.0 with a death toll estimated by the Haitian government at over 300,000, and by non-Haitian sources from 50,000 to 220,000. Aftershocks followed, including one of magnitude 5.9. The capital city, Port-au-Prince, was effectively leveled. A million Haitians were left homeless, and hundreds of thousands starved. The earthquake caused massive devastation, with most buildings crumbled, including Haiti's presidential palace. The enormous death toll made it necessary to bury the dead in mass graves. Most bodies were unidentified and few pictures were taken, making it impossible for families to identify their loved ones. The spread of disease was a major secondary disaster. Many survivors were treated for injuries in emergency makeshift hospitals, but many more died of gangrene, malnutrition, and infectious diseases.
On 4 April 2011, a senior Haitian official announced that Michel Martelly had won the second round of the election against candidate Mirlande Manigat. Michel Martelly, also known by his stage name "Sweet Micky", is a former musician and businessman. Martelly's administration was met with both anger and acclaim. On one hand, he and his associates were accused of involvement in money laundering and various other crimes, prompting countless demonstrations (which on many occasions turned violent). Many criticized him for the slow progress of reconstruction following the recent quake, or for taking credit for projects started under previous administrations. Some disliked him for his vulgar language and risqué past, which did not entirely fade once he took the presidency. On the other hand, many believe that he was the most productive Haitian president since the Duvalier era. Under his administration, the majority of those left homeless after the quake were given new housing. He offered free education programs to large portions of the Haitian youth, as well as an income program for Haitian mothers and students. The administration launched a massive reconstruction program in the principal administrative district, Champs-de-Mars, to modernize and rehabilitate various government buildings, public places, and parks. Michel Martelly put emphasis on foreign investment and business with his slogan "Haiti is Open for Business". Perhaps one of the major contributions to the revitalization of the Haitian economy was the push for tourism. The Minister of Tourism, Stéphanie Villedrouin, embarked on various competitive tourist projects, including the development of Ile-a-Vache, Jacmel, the north, the south-west, and the Cotes-des-Arcadins. Tourism rose significantly between 2012 and 2016. On 8 February 2016, Michel Martelly stepped down at the end of his term without a successor in place.
After the 2016 elections, which followed Hurricane Matthew, Haitian voters returned to the polls and elected Martelly's chosen successor, Jovenel Moïse, as president. He was inaugurated on 7 February 2017 on the grounds where the National Palace had stood. He went on to start the "Caravan de Changement" project, which aims to revitalize the industries and infrastructure of Haiti's less prominent areas, though the actual impact of these efforts is debated. In recent months, Moïse has been implicated in the embezzlement of funds from the PetroCaribe program, as has his predecessor, Martelly.
On 7 July 2018, protests led by opposition politician Jean-Charles Moïse began, demanding the resignation of Jovenel Moïse. A Senate probe of the period 2008–2016 (covering the René Préval and Michel Martelly administrations, as well as the chief of staff of then-sitting President Jovenel Moïse), released in November 2017, revealed that significant corruption had been funded with Venezuelan loans through the Petrocaribe program. Significant protests broke out in February 2019 following a report from the court investigating the Petrocaribe Senate probe.
Geography of Haiti
The Republic of Haiti comprises the western three-eighths of the island of Hispaniola, west of the Dominican Republic. Haiti is positioned east of the neighboring island of Cuba, between the Caribbean Sea and the North Atlantic Ocean. Haiti's geographic coordinates are at a longitude of 72° 25′ west and a latitude of 19° 00′ north.
Haiti's total area is , of which is land and is water. Haiti has of coastline and a -border with the Dominican Republic.
The climate is tropical with some variation depending on altitude. Port-au-Prince ranges in January from an average minimum of to an average maximum of ; in July, from . The rainfall pattern is varied, with rain heavier in some of the lowlands and on the northern and eastern slopes of the mountains.
Port-au-Prince receives an average annual rainfall of . There are two rainy seasons, April–June and October–November. Haiti is subject to periodic droughts and floods, made more severe by deforestation. Hurricanes are also a menace.
Haiti's terrain varies, with more than three fourths of the territory above . Its climate is predominantly tropical, with some smaller areas of semi-arid, subtropical, and oceanic climate. Fertile valleys are interspersed between the mountain ranges forming vast areas of contrast between elevations in many areas throughout the territory. Haiti (and Hispaniola) are separated from Cuba by way of the Windward Passage, a wide strait that passes between the two countries.
Haiti's lowest elevation is reported by one source to be sea level (the Caribbean Sea) and by another to be below sea level (at the Gheskio clinic in Port-au-Prince, or in Gonaïves, according to several sources, such as http://www.france24.com/en/20080911-disaster-aftermath-hurrican-ike-hanna-gonaives-haiti), while its highest point is Pic la Selle at .
Numerous smaller islands make up a part of Haiti's total territory. The most notable islands are:
Haiti also has several lakes. The largest lake in Haiti, and the second largest lake of the island of Hispaniola and the West Indies, is Lake Azuei. It is located in the Cul-de-Sac Depression with an area of 170 km². It is a saline lake with a higher concentration of salt than seawater and harbors numerous fauna such as American crocodiles and American flamingos.
Lake Péligre is an artificial lake created by the construction of the Peligre Hydroelectric Dam.
Trou Caïman is a saltwater lake with a total area of 16.2 km². Lake Miragoâne is one of the largest natural freshwater lakes in the Caribbean, with an area of 25 km².
Demographics of Haiti
Although Haiti averages approximately 402 people per square kilometer (1,041 per sq mi), its population is concentrated most heavily in urban areas, coastal plains, and valleys. The majority of Haitians, 95%, are of predominantly African descent. The remainder of the population is primarily mulatto, European, Asian, and Arab. Hispanic residents in Haiti are mostly Cuban and Dominican. About two thirds of the Haitian population live in rural areas.
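The density figures above are a straightforward unit conversion; a minimal sketch (the conversion factor is the standard square-kilometre/square-mile ratio, the 402 figure is from the text):

```python
# Convert population density from people per square kilometre to
# people per square mile (1 sq mi = 2.589988 sq km).
SQ_KM_PER_SQ_MI = 2.589988

def per_sq_mi(per_sq_km: float) -> float:
    return per_sq_km * SQ_KM_PER_SQ_MI

# The text's figure of 402 people/km^2 corresponds to about 1041 people/sq mi.
print(round(per_sq_mi(402)))  # -> 1041
```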
Although there was a national census taken in Haiti in 2003, much of that data has not been released to the public. Several demographic studies, including those by social work researcher Athena Kolbe, have shed light on the current status of urban residents. In 2006, households averaged 4.5 members. The median age was 25 years with a mean age of 27 years. People aged 15 and younger accounted for roughly a third of the population. Overall, 52.7 percent of the population was female.
The total population has grown substantially from 3,221,000 in 1950. The proportion of children below the age of 15 in 2010 was 36.2%, 59.7% were between 15 and 65 years of age, and 4.5% were 65 years or older. According to the World Bank, Haiti's dependency rate is 7.51 dependents per 100 working-age persons.
Structure of the population (01.07.2010) (Estimates) :
Structure of the population (01.07.2011) (Estimates) :
Structure of the population (DHS 2012) (Males 28 122, Females 29 844 = 57 966) :
Registration of vital events in Haiti is not complete. The Population Department of the United Nations prepared the following estimates.
While limited, evidence suggests that disasters can cause human populations to increase in the long term, rather than decrease. Documented fertility spikes followed the Khmer Rouge conflict and the 2004 Indian Ocean tsunami; potential causes include reduced access to contraception and families desiring more children following child deaths. In Haiti's case, the fertility rate nearly tripled after the earthquake and is likely to remain above pre-quake levels for long after.
Numbers of births and deaths are calculated from crude birth and death rates.
The Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Demographic statistics according to the World Population Review in 2019.
Demographic statistics according to the CIA World Factbook, unless otherwise indicated.
definition: age 15 and over can read and write (2015 est.)
Taíno was the major pre-Columbian language of the region. The name "Haiti" (or "Ayti") is rooted in that language, where it referred to the entire island of Hispaniola and means "land of high mountains."
Today, the Republic of Haiti has two official languages: French and Haitian Creole. The latter is a French-based creole, about 90% of whose vocabulary is derived from French, with influences from Portuguese, Spanish, Taíno, and West African languages. French is the principal written and administratively authorized language (as well as the main language of the press) and is spoken by 42% of Haitians. It is spoken by most educated Haitians, is the medium of instruction in most schools, and is used in the business sector. It is also used in ceremonial events such as weddings, graduations, and church masses.
Haiti is one of two independent nations in the Americas (along with Canada) to designate French as an official language; the other French-speaking areas are all overseas "départements", or "collectivités," of France. Haitian Creole, which has recently undergone a standardization, is spoken by virtually the entire population of Haiti. It is related to the other French creoles, but most closely to Antillean Creole and Louisiana Creole variants.
Spanish, though not official, is spoken by a growing share of the population, and more frequently near the border with the Dominican Republic. English is increasingly spoken among the young and in the business sector.
The state religion is Roman Catholicism, which 55–60% of the population professes. Some 30–35% of Haitians practice Protestantism (mostly Pentecostalism), and an important percentage of the population practices Vodou, mostly alongside another religion. The main religions practiced in Haiti are thus Roman Catholicism, Pentecostalism, Islam, and Judaism, and the Protestant, Muslim, and Jewish populations continue to grow. Almost 99% of Haitians claim at least one religion, with a portion of them practicing some form of Vodou.
Voodoo is rare among the urban elite and is often compared to Cuban Santería, owing to the large Cuban population in Haiti. The practice of voodoo revolves around family spirits called "Loua" that protect children. To repay the spirits, children perform two ceremonies in which the "Loua" are given gifts such as food and drink. The timing depends on a family's monetary status; poorer families wait until there is a need to perform the rituals.
Voodoo came to coexist with Christianity along two different paths: one with the Catholics and one with the Protestants. On the Catholic path, slaves under the French were not allowed to practice Voodoo, but they were occasionally allowed dances on the weekend; these dances were in fact Voodoo services, a practice that continued until liberation in 1804. Most Haitians see practicing Voodoo and Christianity together as normal because of the many components the two share. The Catholic Church was not always as accepting of Voodoo as it is now: in 1941–1942 a holy war was waged against Voodoo, killing many of the religion's leaders. That campaign ended around 1950, when the Catholics gave up the prosecution of Voodoo practitioners and a relative peace took hold. The Protestant path was less peaceful: Protestants came to Haiti in 1970 and have since been bitter enemies of Voodoo, most often calling it devil worship.
Although public education at the primary level is now free, private and parochial schools provide around 75% of educational programs offered.
In recent years, several annual literacy campaigns launched by the Martelly administration have increased overall literacy among adults in Haiti. UNESCO projected an overall literacy rate of 61.1% in Haiti by 2015.
As of December 2014, the World Bank has reported that school enrollment in Haiti has increased from 78% to 90%, very close to the goal of universal child enrollment.
In 2004, 300,000 children were restavecs, a practice comparable to the indentured servitude of minors.
Large-scale emigration, principally to the Dominican Republic, the United States, and Canada (predominantly Quebec), but also to Cuba, Venezuela, Brazil, Chile, the Bahamas and other Caribbean neighbors, and to Europe and elsewhere in the Americas, including France (with French Guiana), Spain, Belgium, the United Kingdom, Ireland, and Argentina, has created what Haitians refer to as the Eleventh Department, or the Diaspora. About one in every six Haitians lives abroad.
About 45,000 people from the United States live in Haiti, representing 0.4% of its total population.
Economy of Haiti
The economy of Haiti is a free market economy with low labor costs. Haiti's major trading partner is the United States (US), which provides the country with preferential trade access to the US market through the Haiti Hemispheric Opportunity through Partnership Encouragement (HOPE) and the Haiti Economic Lift Program Encouragement Acts (HELP) legislation.
Haiti has an agricultural economy. Over half of the world's vetiver oil (an essential oil used in high-end perfumes) comes from Haiti, and bananas, cocoa, and mangoes are important export crops. Haiti has also moved to expand into higher-end manufacturing, producing Android-based tablets and current sensors and transformers.
Vulnerability to natural disasters, as well as poverty and limited access to education are among Haiti's most serious disadvantages. Two-fifths of all Haitians depend on the agriculture sector, mainly small-scale subsistence farming, and remain vulnerable to damage from frequent natural disasters, exacerbated by the country's widespread deforestation. Haiti suffers from a severe trade deficit, which it is working to address by moving into higher-end manufacturing and more value-added products in the agriculture sector. Remittances are the primary source of foreign exchange, equaling nearly 20% of GDP. Haiti's economy was severely impacted by the 2010 Haiti earthquake which occurred on 12 January 2010.
Before Haiti established its independence from French administration in 1804, it ranked as the world's richest and most productive colony. In the formative years of independence, Haiti suffered from isolation on the international stage, as evidenced by the early lack of diplomatic recognition accorded to it by Europe and the United States; this had a negative impact on the willingness of foreigners to invest in Haiti. One very significant economic obstacle in Haiti's early independence was the indemnity of 150 million francs it was required to pay to France beginning in 1825; this did much to drain the country of its capital stock.
According to a 2014 study, the Haitian economy stagnated due to a combination of weak state power and adverse international relations. The authors write: "For the newborn 'Negro republic', it was hard to become recognised as a sovereign nation state, it was difficult to form strategic alliances, to get access to foreign loans, and to safeguard trade interests, and it was overloaded with debt under threat of external violence (the French indemnity). Self-chosen isolation, for instance by prohibiting foreign landownership, further reduced the choice set of successive Haitian administrations. When opportunities for export-led growth opened up in the late 19th century, the odds were stacked against Haiti."
Under President René Préval (in office from 1996 to 2001 and from 2006 until 14 May 2011), the country's economic agenda included trade and tariff liberalization, measures to control government expenditure and increase tax revenues, civil-service downsizing, financial-sector reform, and the modernization of state-owned enterprises through their sale to private investors, the provision of private-sector management contracts, or joint public-private investment. Structural adjustment agreements with the International Monetary Fund, World Bank, Inter-American Development Bank, and other international financial institutions, aimed at creating the conditions necessary for private-sector growth, have proved only partly successful.
In the aftermath of the 1994 restoration of constitutional governance, Haitian officials have indicated their commitment to economic reform through the implementation of sound fiscal and monetary policies and the enactment of legislation mandating the modernization of state-owned enterprises. A council to guide the modernization program (CMEP) was established and a timetable was drawn up to modernize nine key parastatals. Although the state-owned flour-mill and cement plants have been transferred to private owners, progress on the other seven parastatals has stalled. The modernization of Haiti's state-enterprises remains a controversial political issue in Haiti.
Comparative social and economic indicators show Haiti falling behind other low-income developing countries (particularly in the Western hemisphere) since the 1980s. Haiti's economic stagnation results from earlier inappropriate economic policies, political instability, a shortage of good arable land, environmental deterioration, continued use of traditional technologies, under-capitalization and lack of public investment in human resources, migration of large portions of the skilled population, and a weak national savings rate.
Haiti continues to suffer the consequences of the 1991 coup. The irresponsible economic and financial policies of "de facto" authorities greatly accelerated Haiti's economic decline. Following the coup, the United States adopted mandatory sanctions, and the OAS instituted voluntary sanctions aimed at restoring constitutional government. International sanctions culminated in the May 1994 United Nations embargo of all goods entering Haiti except humanitarian supplies, such as food and medicine. The assembly sector, heavily dependent on U.S. markets for its products, employed nearly 80,000 workers in the mid-1980s. During the embargo, employment fell from 33,000 workers in 1991 to 400 in October 1995. Private, domestic and foreign investment has been slow to return to Haiti. Since the return of constitutional rule, assembly sector employment has gradually recovered with over 20,000 now employed, but further growth has been stalled by investor concerns over safety and supply reliability.
Remittances from abroad have consistently constituted a significant source of financial support for many Haitian households.
The Haitian Ministry of Economy and Finance designed the Haiti economic reforms of 1996 to rebuild the economy of Haiti after significant downturns suffered in the previous years. The primary reforms centered around the Emergency Economic Recovery Plan (EERP) and were followed by budget reforms.
Haiti's real GDP growth turned negative in FY 2001 after six years of growth. Real GDP fell by 1.1% in FY 2001 and 0.9% in FY 2002. Macroeconomic stability was adversely affected by political uncertainty, the collapse of informal banking cooperatives, high budget deficits, low investment, and reduced international capital flows, including suspension of IFI lending as Haiti fell into arrears with the Inter-American Development Bank (IDB) and World Bank.
Haiti's economy stabilized in 2003. Although FY 2003 began with the rapid decline of the gourde due to rumors that U.S. dollar deposit accounts would be nationalized and due to the withdrawal of fuel subsidies, the government successfully stabilized the gourde as it took the politically difficult decisions to float fuel prices freely according to world market prices and to raise interest rates. Government agreement with the International Monetary Fund (IMF) on a staff monitored program (SMP), followed by its payment of its $32 million arrears to the IDB in July, paved the way for renewed IDB lending. The IDB disbursed $35 million of a $50 million policy-based loan in July and began disbursing four previously approved project loans totaling $146 million. The IDB, IMF, and World Bank also discussed new lending with the government. Much of this would be contingent on government adherence to fiscal and monetary targets and policy reforms, such as those begun under the SMP, and Haiti's payment of its World Bank arrears ($30 million at 9/30/03).
The IMF estimated that real GDP was flat in FY 2003 and projected 1% real GDP growth for FY 2004. However, GDP per capita— amounting to $425 in FY 2002 — will continue to decline as population growth is estimated at 1.3% p.a. While implementation of governance reforms and peaceful resolution of the political stalemate are key to long-term growth, external support remains critical in avoiding economic collapse. The major element is foreign remittances, reported as $931 million in 2002, primarily from the U.S. Foreign assistance, meanwhile, was $130 million in FY 2002. Overall foreign assistance levels have declined since FY 1995, the year elected government was restored to power under a United Nations mandate, when the international community provided over $600 million in aid.
A legal minimum wage of 36 gourdes a day (about US$1.80) was set in 1995 and applies to most workers in the formal sector. It was later raised to 70 gourdes per day, and currently stands at 200 gourdes a day (about US$4.80, at roughly 39.175 gourdes to the US dollar).
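The wage figures above can be cross-checked by computing the exchange rate each gourde/dollar pair implies; a sketch using only the numbers quoted in the text (exchange rates fluctuate in practice):

```python
# Implied gourde/USD exchange rate from a wage quoted in both currencies.
def implied_rate(gourdes: float, usd: float) -> float:
    return gourdes / usd

# 36 gourdes ~= US$1.80 implies about 20 gourdes per dollar (the 1995 rate).
print(round(implied_rate(36, 1.80), 2))   # -> 20.0
# 200 gourdes ~= US$4.80 implies about 41.67 gourdes per dollar, close to
# (though not identical to) the 39.175 rate quoted alongside it.
print(round(implied_rate(200, 4.80), 2))  # -> 41.67
```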
Haiti's economy suffered a severe setback in January 2010 when a 7.0 magnitude earthquake destroyed much of its capital city, Port-au-Prince, and neighboring areas. Already the poorest country in the Americas with 80% of the population living under the poverty line and 54% in abject poverty, the earthquake inflicted $7.8 billion in damage and caused the country's GDP to contract 5.4% in 2010. Following the earthquake, Haiti received $4.59 billion in international pledges for reconstruction, which has proceeded slowly.
US economic engagement under the Haitian Hemispheric Opportunity through Partnership Encouragement (HOPE) Act, passed in December 2006, has boosted apparel exports and investment by providing duty-free access to the US. Congress voted in 2010 to extend the legislation until 2020 under the HELP Act; the apparel sector accounts for about 90% of Haitian exports and nearly one-tenth of GDP. Remittances are the primary source of foreign exchange, equaling nearly 20% of GDP and more than twice the earnings from exports. Haiti suffers from a lack of investment, partly because of limited infrastructure and a lack of security. In 2005, Haiti paid its arrears to the World Bank, paving the way for reengagement with the Bank. Haiti received debt forgiveness of over $1 billion through the Heavily Indebted Poor Countries initiative in mid-2009. The remainder of its outstanding external debt was cancelled by donor countries following the 2010 earthquake but has since risen to over $600 million. The government relies on formal international economic assistance for fiscal sustainability, with over half of its annual budget coming from outside sources. The Michel Martelly administration in 2011 launched a campaign aimed at drawing foreign investment into Haiti as a means for sustainable development.
In 2005 Haiti's total external debt reached an estimated US$1.3 billion, which corresponds to debt per capita of US$169, in contrast to the debt per capita of the United States which is US$40,000. Following the democratic election of Aristide in December 1990, many international creditors responded by cancelling significant amounts of Haiti's debt, bringing the total down to US$777 million in 1991. However, new borrowing during the 1990s swelled the debt to more than US$1 billion.
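The per-capita figure above is just the debt stock divided by population; a sketch using the text's numbers (the population below is the value the two figures imply, not a census count):

```python
# Debt per capita = total external debt / population.
def debt_per_capita(total_debt_usd: float, population: float) -> float:
    return total_debt_usd / population

# The text's US$1.3 billion total and US$169 per capita imply a population
# of roughly 7.7 million.
implied_population = 1.3e9 / 169
print(round(implied_population / 1e6, 1))  # -> 7.7
```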
At peak, Haiti's total external debt was estimated at 1.8 billion dollars, including half a billion dollars to the Inter-American Development Bank, Haiti's largest creditor. In September 2009, Haiti met the conditions set out by the IMF and World Bank's Heavily Indebted Poor Countries program, qualifying it for cancellation of some of its external debt. This amounted to a cancellation of $1.2 billion. Despite this as of 2010 calls for cancellation of its remaining $1 billion debts came strongly from civil society groups such as the Jubilee Debt Campaign in reaction to the effects of the earthquake that hit the country.
Primary industries include the following:
Although many Haitians make their living through subsistence farming, Haiti also has an agricultural export sector. Agriculture, together with forestry and fishing, accounts for about one-quarter (28% in 2004) of Haiti's annual gross domestic product and employs about two-thirds (66% in 2004) of the labor force. However, expansion has been difficult because mountains cover much of the countryside and limit the land available for cultivation. Of the total arable land of 550,000 hectares, 125,000 hectares are suited for irrigation, and of those only 75,000 hectares actually have been improved with irrigation. Haiti's dominant cash crops include coffee, mangoes, and cocoa. Haiti has decreased its production of sugarcane, traditionally an important cash crop, because of declining prices and fierce international competition. Because Haiti's forests have thinned dramatically, timber exports have declined. Roundwood removals annually total about 1,000 kilograms. Haiti also has a small fishing industry. Annual catches in recent years have totaled about 5,000 tons.
Haiti has a mining industry that extracted minerals worth approximately US$13 million in 2013. Bauxite, copper, calcium carbonate, gold, and marble have been the most extensively extracted minerals; today lime and aggregates, and to a lesser extent marble, are extracted. Gold was mined by the Spanish in early colonial times, and bauxite was mined for a number of years in recent times at a site near Miragoâne on the southern peninsula. Operating from 1960 to 1972, International Halliwell Mines, Ltd. ("Halliwell"), a Canadian corporation, mined copper near Gonaïves through its wholly owned Haitian subsidiary, La Societe d'Exploitation et de Developpement Economique et Natural d'Haiti ("Sedren").
About 0.5 million tons of copper ore, valued at about $83.5 million, were exported; the government of Haiti received about $3 million. As of 2012 there was promise of gold and copper mining in northern Haiti.
In 2012, it was reported that the Haitian government had entered into confidential agreements and negotiations granting licenses for the exploration or mining of gold and associated metals such as copper across the mineralized zone stretching from east to west across northern Haiti. Estimates of the value of the gold that might be extracted through open-pit mining run as high as US$20 billion. Eurasian Minerals and Newmont Mining Corporation are two of the firms involved. According to Alex Dupuy, Chair of African American Studies and John E. Andrus Professor of Sociology at Wesleyan University, Haiti's ability to adequately manage the mining operations, or to obtain and use the resulting funds for the benefit of its people, is untested and seriously questioned. One such location is Lakwèv, where local residents wash earth dug from hand-made tunnels for specks of free gold. In the same mineralized zone in the Dominican Republic, Barrick Gold and Goldcorp are planning to reopen the Pueblo Viejo mine.
Secondary industries include the following:
The leading industries in Haiti produce beverages, butter, cement, detergent, edible oils, flour, refined sugar, soap, and textiles. Growth in both manufacturing and industry as a whole has been slowed by a lack of capital investment. Grants from the United States and other countries have targeted this problem, but without much success. Private home building and construction appear to be one subsector with positive prospects for growth.
In 2004 industry accounted for about 20 percent of the gross domestic product (GDP), and less than 10 percent of the labor force worked in industrial production. As a portion of the GDP, the manufacturing sector has contracted since the 1980s. The United Nations embargo of 1994 put out of work most of the 80,000 workers in the assembly sector. Additionally, the years of military rule following the presidential coup in 1991 resulted in the closure of most of Haiti's offshore assembly plants in the free zones surrounding Port-au-Prince. When President Aristide returned to Haiti, some improvements did occur in the manufacturing sector.
Haiti's cheaper labor brought some textile and garment assembly work back to the island in the late 1990s. Although these gains were undercut by international competition, the apparel sector in 2008 made up two-thirds of Haiti's US$490 million in annual exports. US economic engagement under the HOPE Act, from December 2006, increased apparel exports and investment by providing tariff-free access to the US market. HOPE II, in October 2008, further improved the situation by extending preferences to 2018.
Haiti uses very little energy, the equivalent of approximately 250 kilograms of oil per head per year. In 2003, Haiti produced 546 million kilowatt-hours of electricity while consuming 508 million kilowatt-hours. In 2013, it stood 135th out of 135 countries in net total consumption of electricity.
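The gap between the 2003 generation and consumption figures implies roughly 7% of generated electricity was lost or otherwise never consumed domestically; a sketch using the figures from the text:

```python
# Share of generated electricity not reaching consumers,
# from the 2003 figures: 546 GWh produced, 508 GWh consumed.
produced_gwh, consumed_gwh = 546, 508
loss_share = (produced_gwh - consumed_gwh) / produced_gwh
print(f"{loss_share:.1%}")  # -> 7.0%
```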
Most of the country's energy comes from the burning of wood, and Haiti imports the oil it consumes. The Péligre Dam, the country's largest, provides the capital city of Port-au-Prince with energy, while thermal plants provide electricity to the rest of the country. Even with the country's low level of demand, the supply of electricity has traditionally been sporadic and prone to shortages. Mismanagement by the state has offset more than US$100 million in foreign investment targeted at improving Haiti's energy infrastructure, and businesses have resorted to securing back-up power sources to deal with the regular outages. The potential for greater hydropower exists, should Haiti have the desire and means to develop it. The government controls oil and gas prices, to an extent insulating Haitians from international price fluctuations.
Tertiary industries include the following:
Haiti's services sector made up 52 percent of the country's gross domestic product in 2004 and employed 25 percent of the labor force. According to World Bank statistics, the services sector is one of the few sectors of Haiti's economy that sustained steady, if modest, growth throughout the 1990s.
Lack of a stable and trustworthy banking system has impeded Haiti's economic development. Banks in Haiti have collapsed on a regular basis, and most Haitians do not have access to loans of any sort. When reelected in 2000, President Aristide promised to remedy this situation but instead introduced an unsustainable plan of "cooperatives" that guaranteed investors a 10 percent rate of return. The cooperatives soon crumbled, and Haitians collectively lost more than US$200 million in savings.
Haiti's central bank, the Bank of the Republic of Haiti, oversees 10 commercial banks and two foreign banks operating in the country. Most banking takes place in the capital city of Port-au-Prince. The United Nations and the International Monetary Fund have led efforts to diversify and expand the finance sector, making credit more available to rural populations. In 2002, the Canadian International Development Agency led a training program for Haitian Credit Unions. Haiti has no stock exchange.
Tourism in Haiti has suffered from the country's political upheaval. Inadequate infrastructure also has limited visitors to the island. In the 1970s and 1980s, however, tourism was an important industry, drawing an average of 150,000 visitors annually. Since the 1991 coup, tourism has recovered slowly. The Caribbean Tourism Organization (CTO) has joined the Haitian government in an effort to restore the island's image as a tourist destination. In 2001, 141,000 foreigners visited Haiti. Most came from the United States. To make tourism a major industry for Haiti, further improvements in hotels, restaurants and other infrastructure still are needed.
The following table shows the main economic indicators in 1980–2017.
Much of this article is based on public domain material from the U.S. government. See:
US Dept of State
Telecommunications in Haiti
Telecommunications in Haiti include the Internet, radio, television, and fixed and mobile telephone services.
Three Internet service providers serve the country: NATCOM, Access Haiti, and Hainet. The Haitian telecommunications authority, CONATEL, decided in October 2010 to allow the introduction of 3G services by the mobile telephone providers, enabling them to deploy faster mobile Internet access throughout their networks than was available with GPRS/EDGE.
NATCOM is the leading Internet company in Haiti, with a wide range of connectivity offerings: 4G LTE, fiber to the home and to the business, and wireless point-to-point and point-to-multipoint links. NATCOM offers guaranteed SLAs, backed by its local network and four international links to the global undersea fiber networks.
As of September 2017, providers published fiber-optic consumer pricing per month by download speed; local taxes were not included in the listed prices.
There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms without judicial oversight.
The law provides for freedom of speech and press, and the government and elected officials generally respect these rights in practice. The independent media are active and express a wide variety of views without restriction. However, there have been incidents of local officials harassing or threatening journalists and others who criticized the government. Journalists complain about defamation lawsuits that the government threatens or files against the press for statements made about public officials or private figures in the public arena.
Defamation carries both criminal and civil penalties. Some journalists practice self-censorship on stories related to drug trafficking or allegations of business and political corruption, likely due to past patterns of retribution against activists and journalists engaged in investigative reporting. The law prohibits arbitrary interference with privacy, family, home, or correspondence, but the government does not always respect these prohibitions in practice.
Tele Haiti is a television broadcasting network providing paid television service with over 140 local and international channels.
In 2012, there were 50,000 main telephone lines in use, ranking Haiti 163rd in the world.
Natcom, the result of the privatization of Télécommunications d'Haiti S.A.M. (Teleco) in 2010, has a monopoly on the provision of landline services throughout the country. The Vietnamese company Viettel bought a 60% share, with the Haitian government keeping the remaining 40% of the company.
Teleco was constantly hobbled by political interference, which affected its performance. It was a net generator of revenue for the government in the 1970s and early 1980s, but its fortunes then began to decline.
Despite wide-ranging poverty, Haiti increased its mobile phone coverage rate from 6% to 30% in one year (May 2006 to May 2007). Haiti is now the driving force in mobile phone growth in the Caribbean, while radio remains the primary information medium for most Haitians.
In May 2006, Comcel and Haitel had a total of about 500,000 subscribers - a cell phone coverage rate of 6% for a population of 8.2 million. Digicel entered the market in May 2006. After one year of operations, May 2006-May 2007, Digicel went from zero to 1.4 million subscribers. The other two cell phone providers, Comcel and Haitel, responded by cutting their prices and offering new services such as Voilà, a GSM service by Comcel, and CDMA 2000 by Haitel. As a result, Comcel and Haitel increased their subscribers from 500,000 to 1 million. As of April 2012, Digicel has about 3.5 million cell phone subscribers in Haiti. In May 2007, Digicel started offering two BlackBerry services with Internet, one for enterprises and one for individuals. On March 30, 2012, Digicel completed the acquisition of Comcel / Voila, its main competitor in the Haitian market.
Transport in Haiti
All of the major transportation systems in Haiti are located near or run through the capital, Port-au-Prince.
Haiti's network of roads consists of National Roads, Department Roads, and county roads. The hub of the road network is located at the old airport (at the intersection of Boulevard Jean-Jacques Dessalines and Autoroute de Delmas). From this intersection, Route Nationale #1 and Route Nationale #2 commence.
Maintenance of RN1 and RN2 lapsed after the 1991 coup, prompting the World Bank to lend US$50 million designated for road repairs. The project was cancelled in January 1999. Reasoning that the cancellation would undo Haiti's progress on road infrastructure, the World Bank created the FER (Fonds d'Entretien Routier) in 2003 as a way to cut down on corruption, involve local companies, and prevent projects from being halted by political instability or protests. Campaigning for his second term, President René Préval vowed in his Maillages Routiers program to rebuild the majority of the roads that had rapidly disintegrated and to build new ones that would enable the country to move forward. When he was unable to secure funds from the World Bank, he appealed to international donors for assistance, which was heavily criticized by many politicians in the media but embraced by a population desperate to see road infrastructure development come to their towns. The European Union pledged to help build RN6, then RN3. In the meantime, the World Bank lent Haiti US$200 million to rebuild RN2 from Rivière Froide, its starting point, all the way to Aquin, and to repair RN1 from Titanyen to Cap-Haïtien. The 2008 hurricane season was a major setback, since bridges in multiple areas either collapsed or suffered extensive damage and needed immediate repair. Most of the work on RN1 and RN2, already halted, suffered a further setback in the earthquake of January 12, 2010. For the construction of RN7, which started in 2009, Canada pledged US$75 million and the IDB US$31 million; it, too, suffered major setbacks because of the earthquake.
Public transportation in Haiti is mostly privately owned; previously run as individual businesses, it is now, with a new generation of entrepreneurs, mainly organised through driver associations. The most common form of public transportation in Haiti is the brightly painted pickup truck used as a shared taxi, called a "tap-tap". They are so named because when passengers need to be let off, they use a coin to tap the side of the vehicle and the driver usually stops. Most tap-tap rides are fairly priced at around 10–15 gourdes within a city; the catch is that the driver will often fill a truck to its maximum capacity, which is nearly 20–30 people. The government, in an effort to structure public transportation, has attempted several times to introduce bus services. Around 1979 a bus service called CONATRA, a contract between the government and a drivers' association, quickly failed because of sabotage from various quarters and poor maintenance. In 1998 another attempt was made with Service Plus and Dignité, serving students and teachers; sabotage, poor maintenance and the overthrow of Aristide in 2004 severely undermined the effort. In 2006, on Préval's return to power, another effort was made to recover the majority of the remaining buses, and a gift of 300 new buses from Taiwan supported an effort to revive Service Plus in association with the drivers. Mini-vans frequently serve towns close to Port-au-Prince, such as Pétion-Ville, Jacmel and Léogâne. Today, throughout the country, motorcycles are widely used as a form of taxi.
The port at Port-au-Prince, Port international de Port-au-Prince, has more registered shipping than any of the over one dozen other ports in the country. Its facilities include cranes, large berths, and warehouses, but these facilities are in universally poor shape. The port is under-used, possibly due to the substantially higher port fees compared to ports in the Dominican Republic.
The port of Saint-Marc is currently the preferred port of entry for consumer goods entering Haiti. Reasons for this may include its location away from volatile and congested Port-au-Prince, as well as its central location relative to a large number of Haitian cities, including Cap-Haïtien, Carrefour, Delmas, Desarmes, Fonds-Parisien, Fort-Liberté, Gonaïves, Hinche, Artibonite, Limbe, Pétion-Ville, Port-de-Paix, and Verrettes. These cities, together with their surrounding areas, contain about six million of Haiti's eight million people.
The islands of Île-à-Vâche, Île de la Tortue, Petite and Grand Cayemite, Grosse Caye, and Île de la Gonâve are reachable only by ferry or small sailing boat (except for Île de la Gonâve, which has an airstrip that is rarely used). The majority of towns near the coast of Haiti are also accessible primarily by small sailing boats. Such boats are usually cheaper and more available than is public ground transportation, which is commonly limited to trucks loaded with merchandise and passengers on market days.
Waterways: 150 km navigable
Merchant marine: none (1999 est.)
Ports and harbours: Cap-Haïtien, Gonaïves, Jacmel, Jérémie, Les Cayes, Miragoâne, Port-au-Prince, Port-de-Paix, Saint-Marc, Fort-Liberté
Haiti has one of the oldest maritime histories in the Americas.
The Panama Canal Railway Company ran a shipping line with three ocean liners that travelled between New York City, Port-au-Prince and Cristóbal, Panama. The company had facilities in Port-au-Prince, where its ocean liners called, but it had no known railroad operations in Haiti.
The three ocean liners were:
" International flights: " Toussaint Louverture International Airport (formerly known as Port-au-Prince International Airport), which opened in 1965 (as François Duvalier International Airport), is located 10 km North/North East of Port-au-Prince. It is Haiti's only jetway, and as such, handles the vast majority of the country's international flights. Air Haïti, Tropical Airways and a handful of major airlines from Europe, the Caribbean, and the Americas serve the airport.
" Domestic flights: " are available through Sunrise Airways which is Haiti's largest airline for the general public offering scheduled, as well as, charter flights. Another domestic company is, Mission Aviation Fellowship catering to non-Catholic registered Christians.
Airports: 14 (2007 est.)
Railroads ran in Haiti between 1876 and 1991. Haiti was the first country in the Caribbean with a railway system, beginning in the urban area of Port-au-Prince; a later project, to be run by the McDonald Company, was intended to connect Port-au-Prince with Cap-Haïtien and with Les Cayes, but was never completed. The railroad's demise in Haiti was mostly due to the bankruptcy and closure of the companies that supported its construction.
Heard Island and McDonald Islands
The Territory of Heard Island and McDonald Islands (HIMI) is an Australian external territory comprising a volcanic group of mostly barren Antarctic islands, about two-thirds of the way from Madagascar to Antarctica. Discovered in the mid-19th century, the islands have been an Australian territory since 1947 and contain the country's only two active volcanoes. The summit of one, Mawson Peak, is higher than any mountain on the Australian mainland. The islands lie on the Kerguelen Plateau in the Indian Ocean.
The islands are among the most remote places on Earth: They are located approximately southwest of Perth, southwest of Cape Leeuwin, Australia, southeast of South Africa, southeast of Madagascar, north of Antarctica, and southeast of the Kerguelen Islands. The islands are currently uninhabited.
Heard Island, by far the largest of the group, is a mountainous island covered by 41 glaciers (the island is 80% covered with ice) and dominated by the Big Ben massif. It has a maximum elevation of at Mawson Peak, the historically active volcanic summit of Big Ben. A July 2000 satellite image from the University of Hawaii's Institute of Geophysics and Planetology (HIGP) Thermal Alert Team showed an active lava flow, 50 to 90 metres (164–295 ft) wide, trending south-west from the summit of Big Ben.
The much smaller and rocky McDonald Islands are located to the west of Heard Island. They consist of McDonald Island ( high), Flat Island ( high) and Meyer Rock ( high). They total approximately in area, where McDonald Island is . There is a small group of islets and rocks about north of Heard Island, consisting of Shag Islet, Sail Rock, Morgan Island and Black Rock. They total about in area.
Mawson Peak and McDonald Island are the only two active volcanoes in Australian territory. Mawson Peak is also one of the highest Australian mountains (higher than Mount Kosciuszko), surpassed only by the Mount McClintock range in the Antarctic territory. Mawson Peak has erupted several times in the last decade; the most recent eruption was filmed on 2 February 2016. The volcano on McDonald Island, after being dormant for 75,000 years, became active in 1992 and has erupted several times since, most recently in 2005.
Heard Island and the McDonald Islands have no ports or harbours; ships must anchor offshore. A territorial sea and an exclusive fishing zone are claimed.
Heard Island has a number of small wetland sites scattered around its coastal perimeter, including areas of wetland vegetation, lagoons or lagoon complexes, rocky shores and sandy shores, including the Elephant Spit. Many of these wetland areas are separated by active glaciers. There are also several short glacier-fed streams and glacial pools. Some wetland areas have been recorded on McDonald Island but, due to substantial volcanic activity since the last landing was made in 1980, their present extent is unknown.
The HIMI wetland is listed on the Directory of Important Wetlands in Australia and, in a recent analysis of Commonwealth-managed wetlands, was ranked highest for nomination under the Convention on Wetlands of International Importance Especially as Waterfowl Habitat (Ramsar Convention) as an internationally important wetland.
Six wetland types have been identified from HIMI, covering approximately 1,860 ha: coastal pool complexes (237 ha); inland pool complexes (105 ha); vegetated seeps, mostly on recently deglaciated areas (18 ha); glacial lagoons (1,103 ha); non-glacial lagoons (97 ha); and the Elephant Spit (300 ha) plus some coastal areas. On Heard Island, the majority of these wetland types are found below 150 m above sea level. The wetland vegetation occurs in the 'wet mixed herbfield' and 'coastal biotic vegetation' communities described above.
The wetlands provide important breeding and feeding habitat for a number of Antarctic and subantarctic wetland animals. These include the southern elephant seal and macaroni, gentoo, king and southern rockhopper penguins, considered to be wetland species under the Ramsar Convention. Non-wetland vegetated parts of the islands also support penguin and other seabird colonies.
The islands have an Antarctic climate, tempered by their maritime setting. The weather is marked by low seasonal and daily temperature ranges, persistent and generally low cloud cover, frequent precipitation and strong winds. Snowfall occurs throughout the year. Monthly average temperatures at Atlas Cove (at the northwestern end of Heard Island) range from , with an average daily range of in summer and in winter. The winds are predominantly westerly and persistently strong. At Atlas Cove, monthly average wind speeds range between around . Gusts in excess of have been recorded. Annual precipitation at sea level on Heard Island is in the order of ; rain or snow falls on about 3 out of 4 days.
Meteorological records at Heard Island are incomplete.
The islands are part of the Southern Indian Ocean Islands tundra ecoregion that includes several subantarctic islands. In this cold climate plant life is mainly limited to grasses, lichens, and mosses. Low plant diversity reflects the islands’ isolation, small size, severe climate, the short, cool growing season and, for Heard Island, substantial permanent ice cover. The main environmental determinants of vegetation on subantarctic islands are wind exposure, water availability, parent soil composition, salt spray exposure, nutrient availability, disturbance by trampling (from seabirds and seals) and, possibly, altitude. At Heard Island, exposure to salt spray and the presence of breeding and moulting seabirds and seals are particularly strong influences on vegetation composition and structure in coastal areas.
Evidence from microfossil records indicates that ferns and woody plants were present on Heard Island during the Tertiary (a period with a cool and moist climate). Neither group of plants is present today, although potential Tertiary survivors include the vascular plant "Pringlea antiscorbutica" and six moss species. Volcanic activity has altered the distribution and abundance of the vegetation. The vascular flora covers a range of environments and, although only six species are currently widespread, glacial retreat and the consequent connection of previously separate ice-free areas is providing opportunities for further distribution of vegetation into adjacent areas.
Low-growing herbaceous flowering plants and bryophytes are the major vegetation components. The vascular flora comprises the smallest number of species of any major subantarctic island group, reflecting its isolation, small ice-free area and severe climate. Twelve vascular species are known from Heard Island, of which five have also been recorded on McDonald Island. None of the vascular species is endemic, although "Pringlea antiscorbutica", "Colobanthus kerguelensis", and "Poa kerguelensis" occur only on subantarctic islands in the southern Indian Ocean.
The plants are typically subantarctic, but with a higher abundance of the cushion-forming "Azorella selago" than other subantarctic islands. Heard Island is the largest subantarctic island with no confirmed human-introduced plants. Areas available for plant colonisation on Heard Island are generally the result of retreating glaciers or new ice-free land created by lava flows. Today, substantial vegetation covers over 20 km2 of Heard Island, and is best developed on coastal areas at elevations below 250 m.
Bryophytes (mosses and liverworts) contribute substantially to the overall biodiversity of Heard Island, with 43 mosses and 19 liverworts being recorded, often occupying habitats unsuitable for vascular plants, such as cliff faces. Bryophytes are present in most of the major vegetation communities including several soil and moss-inhabiting species. A 1980 survey of McDonald Island found lower diversity than that on Heard Island; four mosses and a number of algal species are recorded from there.
At least 100 species of terrestrial algae are known from Heard Island, commonly in permanently moist and ephemeral habitats. Forests of the giant Antarctic kelp "Durvillaea antarctica" occur at a number of sites around Heard Island and at least 17 other species of seaweed are known, with more to be added following the identification of recent collections. Low seaweed diversity is due to the island's isolation from other land masses, unsuitable beach habitat, constant abrasion by waves, tides and small stones, and the extension of glaciers into the sea in many areas.
Heard Island has a range of terrestrial environments in which vegetation occurs. Seven general vegetation communities are currently recognised, although vegetation composition is considered more of a continuum than discrete units:
One of the most rapidly changing physical settings in the subantarctic has been produced on Heard Island by a combination of rapid glacial recession and climate warming. The consequent increase in habitat available for plant colonisation, plus the coalescing of previously discrete ice-free areas, has led to marked changes in the vegetation of Heard Island over the last 20 years or so. Other species and vegetation communities that are found on subantarctic islands north of the Antarctic Convergence but are absent from the Heard Island flora may colonise the island if climate change produces more favourable conditions.
Some plant species are spreading and modifying the structure and composition of communities, some of which are also increasing in distribution. It is likely that further changes will occur, and possibly at an accelerated rate. Changes in population numbers of seal and seabird species are also expected to affect the vegetation by changing nutrient availability and disturbance through trampling.
One plant species on Heard Island, "Poa annua", a cosmopolitan grass native to Europe, was possibly introduced by humans, though is more likely to have arrived naturally, probably by skuas from the Kerguelen Islands where it is widespread. It was initially recorded in 1987 in two deglaciated areas of Heard Island not previously exposed to human visitors, while being absent from known sites of past human habitation. Since 1987 "Poa annua" populations have increased in density and abundance within the original areas and have expanded beyond them. Expeditioner boot traffic during the Australian Antarctic program expedition in 1987 may be at least partly responsible for the spread, but it is probably mainly due to dispersal by wind and the movement of seabirds and seals around the island.
The potential for introducing plant species (including invasive species not previously found on subantarctic islands) by both natural and human-induced means is high. This is due to the combination of low species diversity and climatic amelioration. During the 2003/04 summer a new plant species, "Cotula plumosa", was recorded. Only one small specimen was found growing on a coastal river terrace that had experienced substantial development and expansion of vegetation over the past decade. The species has a circumantarctic distribution and occurs on many subantarctic islands.
Seventy-one species of lichen have been recorded from Heard Island, and they are common on exposed rock, dominating the vegetation in some areas. As with the plants, a 1980 survey of McDonald Island found lower diversity there, with just eight lichen species and a number of non-lichenised fungi recorded.
The main indigenous animals are insects along with large populations of ocean-going seabirds, seals and penguins.
Sealing at Heard Island lasted from 1855 to 1910, during which time 67 sealing vessels are recorded as visiting, nine of which were wrecked off the coast. Sealing drove the seal populations there either to local extinction or to levels too low to exploit economically. Relics surviving from that time include trypots, casks, hut ruins, graves and inscriptions. Modern sealers visited from Cape Town in the 1920s. Since then the populations have generally increased and are protected. Seals breeding on Heard Island include the southern elephant seal, the Antarctic fur seal and the subantarctic fur seal. Leopard seals regularly visit in winter to haul out, though they do not breed on the islands. Crabeater, Ross and Weddell seals are occasional visitors.
Heard Island and the McDonald Islands are free from introduced predators and provide crucial breeding habitat in the middle of the vast Southern Ocean for a range of birds. The surrounding waters are important feeding areas for birds and some scavenging species also derive sustenance from their cohabitants on the islands. The islands have been identified by BirdLife International as an Important Bird Area because they support very large numbers of nesting seabirds.
Nineteen species of birds have been recorded as breeding on Heard Island and the McDonald Islands, although volcanic activity at the McDonald Islands over the last decade is likely to have reduced the vegetated and unvegetated nesting areas there.
Penguins are by far the most abundant birds on the islands, with four breeding species present, comprising king, gentoo, macaroni and eastern rockhopper penguins. The penguins mostly colonise the coastal tussock and grasslands of Heard Island, and have previously been recorded as occupying the flats and gullies on McDonald Island.
Other seabirds recorded as breeding at Heard Island include three species of albatross (wandering, black-browed and light-mantled), southern giant petrels, Cape petrels, four species of burrowing petrel (Antarctic and fulmar prions, and common and South Georgian diving-petrels), Wilson's storm-petrels, kelp gulls, subantarctic skuas, Antarctic terns and the Heard shag. Although not a true seabird, the Heard Island subspecies of the black-faced sheathbill also breeds on the island. Both the shag and the sheathbill are endemic to Heard Island.
A further 28 seabird species have been recorded either as non-breeding visitors or during at-sea surveys around the islands. All recorded breeding species, other than the Heard Island sheathbill, are listed marine species under the Australian Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act); four are listed as threatened species and five as migratory species. Under the EPBC Act a recovery plan has been made for albatrosses and giant petrels, which calls for ongoing population monitoring of the species found at HIMI, and at the time that plan was prepared a draft recovery plan had also been made for the Heard Island cormorant (or shag) and the Antarctic tern.
The recorded populations of some seabird species found in the Reserve have shown marked change. The king penguin population is the best studied seabird species on Heard Island and has shown a dramatic increase since first recorded in 1947/48, with the population doubling every five years or so for more than 50 years.
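The quoted doubling rate implies an enormous cumulative increase; as a rough check (treating the growth as steady exponential doubling, which is an idealisation of the survey data):

```python
# If a population doubles every 5 years, then over 50 years it grows by
# a factor of 2**(50/5) = 2**10.
years = 50
doubling_period_years = 5
growth_factor = 2 ** (years / doubling_period_years)
print(growth_factor)  # 1024.0 — roughly a thousand-fold increase
```

On this idealised model, a colony first recorded in 1947/48 would be on the order of a thousand times larger some fifty years later.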
A paper reviewing population data for the black-browed albatross between 1947 and 2000/01 suggested that the breeding population had increased to approximately three times that present in the late 1940s, although a Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR) working group was cautious about interpreting the increasing trend, given the disparate nature of the data, as discussed in the paper. The discovery in 2000/01 of a large, previously unknown colony of Heard shags at Cape Pillar raised the known breeding population from 200 pairs to over 1,000 pairs. The breeding population of southern giant petrels decreased by more than 50% between the early 1950s and the late 1980s.
Heard Island supports a relatively low number of terrestrial invertebrate species compared to other Southern Ocean islands, paralleling the low species richness of the flora and reflecting the same factors: the island's isolation and limited ice-free area. Endemism is also generally low, and the invertebrate fauna is exceptionally pristine, with few, if any, successful human-induced introductions of alien species.
Two species, the thrips "Apterothrips apteris" and the mite "Tyrophagus putrescentiae", are thought to be recent, possibly natural, introductions. An exotic earthworm, "Dendrodrilus rubidus", was collected in 1929 from a dump near Atlas Cove and has recently been collected from a variety of habitats on Heard Island, including wallows, streams and lakes.
The arthropods of Heard Island are comparatively well known, with 54 species of mites and ticks, one spider and eight springtails recorded. A study over the 1987/88 summer at Atlas Cove showed overall densities of up to 60,000 individual springtails per square metre in soil under stands of "Pringlea antiscorbutica". Despite a few recent surveys, the non-arthropod invertebrate fauna of Heard Island remains poorly known.
Beetles and flies dominate Heard Island's known insect fauna, which comprises up to 21 species of ectoparasite (associated with birds and seals) and up to 13 free-living species. Approximately half of the free-living insects are habitat-specific, while the remainder are generalists found in a variety of habitats, associated with the supralittoral or intertidal zones, "Poa cookii" and "Pringlea antiscorbutica" stands, bryophytes, lichen-covered rocks, exposed rock faces or the undersides of rocks. There is a pronounced seasonality to the insect fauna, with densities in winter months dropping to a small percentage of the summer maximum. Distinct differences in the relative abundances of species between habitats have also been shown, including a negative relationship between altitude and body size for Heard Island weevils.
The fauna of the freshwater pools, lakes, streams and mires found in the coastal areas of Heard Island are broadly similar to those on other subantarctic islands of the southern Indian Ocean. Many species reported from Heard Island are found elsewhere. Some sampling of freshwater fauna has been undertaken during recent expeditions and records to date indicate that the freshwater fauna includes a species of Protista, a gastrotrich, two species of tardigrade, at least four species of nematode, 26 species of rotifer, six species of annelid and 14 species of arthropod.
As with the other shore biota, the marine macro-invertebrate fauna of Heard Island is similar in composition and local distribution to other subantarctic islands, although relatively little is known about the Heard Island communities compared with the well-studied fauna of some other islands in the subantarctic region, such as Macquarie and Kerguelen.
Despite Heard Island's isolation, species richness is considered to be moderate, rather than depauperate, although the number of endemic species reported is low. The large macro-alga "Durvillaea antarctica" supports a diverse array of invertebrate taxa and may play an important role in transporting some of this fauna to Heard Island.
The rocky shores of Heard Island exhibit a clear demarcation between the fauna of the lower kelp holdfast zone and the upper shore zone community, probably due to the effects of desiccation, predation and freezing in the higher areas. The limpet "Nacella kerguelensis" is abundant in the lower part of the shore, being found on rock surfaces and on kelp holdfasts. Other common but less abundant species in this habitat include the chiton "Hemiarthrum setulosum" and the starfish "Anasterias mawsoni". The amphipod "Hyale" sp. and the isopod "Cassidinopsis" sp. are closely associated with the kelp. Above the kelp holdfast zone, the littorinid "Laevilitorina (Corneolitorina) heardensis" and the bivalve mollusc "Kidderia bicolor" are found in well-sheltered situations, and another bivalve, "Gaimardia trapesina trapesina", has been recorded from immediately above the holdfast zone. Oligochaetes are also abundant in areas supporting porous and spongy layers of algal mat.
Neither island-cluster had recorded visitors until the mid-1850s.
An American sailor, John Heard, on the ship "Oriental", sighted Heard Island on 25 November 1853, en route from Boston to Melbourne. He reported the discovery one month later and had the island named after him. William McDonald aboard the "Samarang" discovered the nearby McDonald Islands six weeks later, on 4 January 1854.
No landing took place on the islands until March 1855, when sealers from the "Corinthian", led by Erasmus Darwin Rogers, went ashore at a place called Oil Barrel Point. During the sailing era, from 1855 to 1880, a number of American sealers spent a year or more on the island, living in appalling conditions in dark, smelly huts, also at Oil Barrel Point. At its peak the community consisted of 200 people. By 1880 sealers had wiped out most of the seal population and then left the island. In all, the islands furnished more than 100,000 barrels of elephant-seal oil during this period.
A number of wrecks have occurred in the vicinity of the islands. There is also a discarded building left from John Heard's sealing station which is situated near Atlas Cove.
The first recorded landing on McDonald Island was made by Australian scientists Grahame Budd and Hugh Thelander on 12 February 1971, using a helicopter.
The islands have been a territory of Australia since 1947, when they were transferred from the UK. The archipelago became a World Heritage Site in 1997.
There were at least five private expeditions to Heard Island between 1965 and 2000. Several amateur radio operators have visited Heard, often associated with scientific expeditions. The first activity there was in 1947 by Alan Campbell-Drury. Two amateur radio DXpeditions to the island took place in 1983 using the callsigns VK0HI (the "Anaconda" expedition) and VK0JS and VK0NL (the "Cheynes II" expedition), with a further operation in January 1997 (VK0IR). The recent DXpedition in March 2016 (VK0EK) was organised by Cordell Expeditions, and made over 75,000 radio contacts.
Mawson Peak atop Big Ben was first climbed on 25 January 1965 by five members of the Southern Indian Ocean Expedition to Heard Island (sometimes referred to as the "Patanela" expedition). The second ascent was made by five members of the Heard Island Expedition 1983 (sometimes referred to as the "Anaconda" expedition). A helicopter landing was made at the summit by an ANARE team on 21 December 1986, and an Australian Army team made the third ascent in 2000.
In 1991, the islands were the location for the Heard Island Feasibility Test, an experiment in very long-distance transmission of low-frequency sound through the ocean. US Navy-chartered vessels, including the "Cory Chouest", were used to transmit signals that could be detected as far away as both ocean coasts of the US and Canada. The "Cory Chouest" was chosen because of its central moon pool and because it was already equipped with an array of low-frequency transmitters. A phase-modulated 57 Hz signal was used. The experiment was successful and demonstrated that such sound waves could travel as far as the antipodes. Transmissions had been planned for ten days but, owing to bad weather conditions and the high failure rate of the transmitter elements, which were used below their design frequency, they were terminated on the sixth day, when only two of the original ten transducers were still working.
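The travel times involved in such near-antipodal acoustic paths can be estimated with a back-of-the-envelope calculation; the sound speed and path length below are nominal assumed values, not figures from the experiment:

```python
# Rough travel time for low-frequency sound through the deep ocean.
# Assumptions (not from the source): nominal deep-ocean sound speed ~1480 m/s,
# near-antipodal great-circle path ~20,000 km.
sound_speed_m_s = 1480.0
path_km = 20_000.0
travel_time_h = path_km * 1000 / sound_speed_m_s / 3600
print(round(travel_time_h, 1))  # ≈ 3.8 hours for a near-antipodal path
```

In other words, a signal leaving Heard Island would take several hours to reach receivers on the far side of the globe, which is why detections on both coasts of North America were such a striking result.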
Heard Island is a heavily glacierised, subantarctic volcanic island in the Southern Ocean, roughly 4,000 kilometres southwest of Australia. Eighty per cent of the island is covered in ice, with glaciers descending from 2,400 metres to sea level. Because of the island's steep topography, most of its glaciers are relatively thin (averaging only about 55 metres in depth). The presence of glaciers on Heard Island provides an excellent opportunity to measure the rate of glacial retreat as an indicator of climate change.
Available records show no apparent change in glacier mass balance between 1874 and 1929. Between 1949 and 1954, marked changes were observed to have occurred in the ice formations above 5000 feet on the southwestern slopes of Big Ben, possibly as a result of volcanic activity. By 1963, major recession was obvious below 2000 feet on almost all glaciers, and minor recession was evident as high as 5000 feet.
The coastal ice cliffs of Brown and Stephenson Glaciers, which in 1954 were over 50 feet high, had disappeared by 1963 when the glaciers terminated as much as 100 yards inland. Baudissin Glacier on the north coast, and Vahsel Glacier on the west coast have lost at least 100 and 200 vertical feet of ice, respectively. Winston Glacier, which retreated approximately one mile between 1947 and 1963, appears to be a very sensitive indicator of glacier change on the island. The young moraines flanking Winston Lagoon show that Winston Glacier has lost at least 300 vertical feet of ice within a recent time period. Jacka Glacier on the east coast of Laurens Peninsula has also demonstrated marked recession since 1955.
Retreat of glacier fronts across Heard Island is evident when comparing aerial photographs taken in December 1947 with those taken on a return visit in early 1980. Retreat is most dramatic on the eastern section of the island, where the termini of former tidewater glaciers now lie inland. Glaciers on the northern and western coasts have narrowed significantly, while the area of glaciers and ice caps on the Laurens Peninsula has shrunk by 30–65%.
During the time period between 1947 and 1988, the total area of Heard Island's glaciers decreased by 11%, from 288 km2 (roughly 79% of the total area of Heard Island) to only 257 km2. A visit to the island in the spring of 2000 found that the Stephenson, Brown and Baudissin glaciers, among others, had retreated even further. The terminus of Brown Glacier has retreated approximately 1.1 kilometres since 1950. The total ice-covered area of Brown Glacier is estimated to have decreased by roughly 29% between 1947 and 2004. This degree of loss of glacier mass is consistent with the measured increase in temperature of +0.9 °C over that time span.
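The figures in this paragraph can be cross-checked with simple arithmetic; note that the implied total island area below is an inference from the quoted 79% figure, not a value stated in the text:

```python
# Figures quoted above: glacier area fell from 288 km² (1947) to 257 km² (1988),
# and the 1947 glacier area was roughly 79% of Heard Island's total area.
area_1947_km2 = 288.0
area_1988_km2 = 257.0
loss_pct = (area_1947_km2 - area_1988_km2) / area_1947_km2 * 100
print(round(loss_pct, 1))  # 10.8 — consistent with the quoted "11%"
implied_total_km2 = area_1947_km2 / 0.79
print(round(implied_total_km2))  # 365 — implied total island area in km²
```

The quoted loss and ice-cover percentages are thus mutually consistent to within rounding.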
Possible causes of glacier recession on Heard Island include:
The Australian Antarctic Division conducted an expedition to Heard Island during the austral summer of 2003–04. A small team of scientists spent two months on the island, conducting studies on avian and terrestrial biology and glaciology. Glaciologists conducted further research on the Brown Glacier, in an effort to determine whether glacial retreat is rapid or punctuated. Using a portable echo sounder, the team took measurements of the volume of the glacier. Monitoring of climatic conditions continued, with an emphasis on the impact of Foehn winds on glacier mass balance. Based on the findings of that expedition, the rate of loss of glacier ice on Heard Island appears to be accelerating. Between 2000 and 2003, repeat GPS surface surveys revealed that the rate of loss of ice in both the ablation zone and the accumulation zone of Brown Glacier was more than double the average rate measured from 1947 to 2003. The increase in the rate of ice loss suggests that the glaciers of Heard Island are reacting to ongoing climate change, rather than approaching dynamic equilibrium. The retreat of Heard Island's glaciers is expected to continue for the foreseeable future.
The United Kingdom formally established its claim to Heard Island in 1910, marked by the raising of the Union Flag and the erection of a beacon by Captain Evensen, master of the "Mangoro". Effective government, administration and control of Heard Island and the McDonald Islands was transferred to the Australian government on 26 December 1947 at the commencement of the first Australian National Antarctic Research Expedition (ANARE) to Heard Island, with a formal declaration that took place at Atlas Cove. The transfer was confirmed by an exchange of letters between the two governments on 19 December 1950.
The islands are a territory (Territory of Heard Island and McDonald Islands) of Australia administered from Hobart by the Australian Antarctic Division of the Australian Department of the Environment and Energy. The administration of the territory is established in the "Heard Island and McDonald Islands Act 1953", which places it under the laws of the Australian Capital Territory and the jurisdiction of the Supreme Court of the Australian Capital Territory. The islands are contained within a marine reserve and are primarily visited for research, meaning that there is no permanent human habitation.
From 1947 until 1955 there were camps of visiting scientists on Heard Island at Atlas Cove in the northwest (reoccupied by American scientists in 1969 and expanded by French scientists in 1971) and, in 1971, on McDonald Island at Williams Bay. Later expeditions used a temporary base at Spit Bay in the east, such as in 1988, 1992–93 and 2004–05.
The islands' only natural resource is fish; the Australian government allows limited fishing in the surrounding waters. Despite the lack of population, the islands have been assigned the country code HM in ISO 3166-1 and therefore the Internet top-level domain .hm. The time zone of the islands is UTC+5.
Holy See
The Holy See, also called the See of Rome, is the jurisdiction of the Bishop of Rome, known as the pope, which includes the apostolic episcopal see of the Diocese of Rome with universal ecclesiastical jurisdiction over the worldwide Catholic Church, as well as a sovereign entity of international law, governing the Vatican City.
Founded in the first century by Saints Peter and Paul, by virtue of Petrine and papal primacy according to Catholic tradition, it is the focal point of full communion for Catholic Christians around the world. As a sovereign entity, the Holy See is headquartered in, operates from, and exercises "exclusive dominion" over the independent Vatican City State enclave in Rome, of which the pope is sovereign. It is organized into polities of the Latin Church and the 23 Eastern Catholic Churches, and their dioceses and religious institutes.
The Holy See is administered by the Roman Curia (Latin for "Roman Court"), which is the central government of the Catholic Church. The Roman Curia includes various dicasteries, comparable to ministries and executive departments, with the Cardinal Secretary of State as its chief administrator. Papal elections are carried out by the College of Cardinals.
Although the Holy See is sometimes metonymically referred to as the "Vatican", the Vatican City State was distinctively established with the Lateran Treaty of 1929, between the Holy See and Italy, to ensure the temporal, diplomatic, and spiritual independence of the papacy. As such, papal nuncios, who are papal diplomats to states and international organizations, are recognized as representing the Holy See not the Vatican City State, as prescribed in the Canon law of the Catholic Church. The Holy See is thus viewed as the central government of the Catholic Church. The Catholic Church, in turn, is the largest non-government provider of education and health care in the world. The diplomatic status of the Holy See facilitates the access of its vast international network of charities.
The Holy See maintains bilateral diplomatic relations with 172 sovereign states, signs concordats and treaties, and performs multilateral diplomacy with multiple intergovernmental organizations, including the United Nations and its agencies, the Council of Europe, the European Communities, the Organization for Security and Co-operation in Europe, and the Organization of American States.
The word "see" comes from the Latin word "sedes", meaning 'seat', which refers to the episcopal throne (cathedra). The term "Apostolic See" can refer to any see founded by one of the Twelve Apostles, but, when used with the definite article, it is used in the Catholic Church to refer specifically to the see of the Bishop of Rome, whom that Church sees as successor of Saint Peter. While Saint Peter's Basilica in Vatican City is perhaps the church most associated with the papacy, the actual cathedral of the Holy See is the Archbasilica of Saint John Lateran in the city of Rome.
Every see is considered holy. In Greek, the adjective "holy" or "sacred" ( transliterated as "hiera") is constantly applied to all such sees as a matter of course. In the West, the adjective is not commonly added, but it does form part of an official title of two sees: besides the Holy See, the Bishopric of Mainz (the former Archbishopric of Mainz, which was also of electoral and primatial rank) bears the title of "the Holy See of Mainz" (Latin: "Sancta Sedes Moguntina").
The apostolic see of the Diocese of Rome was established in the 1st century by Saint Peter and Saint Paul in Rome, then the capital of the Roman Empire, according to Catholic tradition. The legal status of the Catholic Church and its property was recognised by the Edict of Milan in 313 by Roman Emperor Constantine the Great, and it became the state church of the Roman Empire by the Edict of Thessalonica in 380 by Emperor Theodosius I.
After the Fall of the Western Roman Empire in 476, the temporal legal jurisdiction of the papal primacy was further recognised as promulgated in canon law. The Holy See was granted territory in the Duchy of Rome by the Donation of Sutri in 728 by King Liutprand of the Lombards, and sovereignty by the Donation of Pepin in 756 by King Pepin of the Franks.
The Papal States thus held extensive territory and armed forces in 756–1870. Pope Leo III crowned Charlemagne as Roman Emperor by "translatio imperii" in 800. The pope's temporal power peaked around the time of the papal coronations of the emperors of the Holy Roman Empire from 858, and the "Dictatus papae" in 1075, which conversely also described Papal deposing power. Several modern states still trace their own sovereignty to recognition in medieval papal bulls.
The sovereignty of the Holy See was retained despite multiple sacks of Rome during the Early Middle Ages. Yet relations with the Kingdom of Italy and the Holy Roman Empire were at times strained, ranging from the "Diploma Ottonianum" and "Libellus de imperatoria potestate in urbe Roma" regarding the "Patrimony of Saint Peter" in the 10th century, to the Investiture Controversy of 1076–1122, settled by the Concordat of Worms in 1122. The Avignon Papacy of 1309–1376 also put a strain on the Papacy, which eventually returned to Rome. Pope Innocent X was critical of the Peace of Westphalia in 1648 as it weakened the authority of the Holy See throughout much of Europe. Following the French Revolution, the Papal States were briefly occupied as the "Roman Republic" from 1798 to 1799 as a sister republic of revolutionary France, before their territory was reestablished.
Nevertheless, the Holy See was represented in, and identified as a "permanent subject of general customary international law vis-à-vis all states" at, the Congress of Vienna (1814–1815). The Papal States were recognised under the rule of the Papacy and largely restored to their former extent. Despite the Capture of Rome in 1870 by the Kingdom of Italy and the Roman Question during the Savoyard era (which made the pope a "prisoner in the Vatican" from 1870 to 1929), the Holy See's status as a subject of international law was "constituted by the ongoing reciprocity of diplomatic relationships" that not only were maintained but multiplied.
The Lateran Treaty on 11 February 1929 between the Holy See and Italy recognised Vatican City as an independent city-state, along with extraterritorial properties around the region. Since then, Vatican City is distinct from yet under "full ownership, exclusive dominion, and sovereign authority and jurisdiction" of the Holy See ().
The Holy See is one of the seven remaining absolute monarchies in the world, along with Saudi Arabia, Eswatini, the United Arab Emirates, Qatar, Brunei and Oman. The pope governs the Catholic Church through the Roman Curia. The Curia consists of a complex of offices that administer church affairs at the highest level, including the Secretariat of State, nine Congregations, three Tribunals, eleven Pontifical Councils, and seven Pontifical Commissions. The Secretariat of State, under the Cardinal Secretary of State, directs and coordinates the Curia. The incumbent, Cardinal Pietro Parolin, named to the role by Pope Francis on 31 August 2013, is the See's equivalent of a prime minister. Archbishop Paul Gallagher, Secretary of the Section for Relations with States of the Secretariat of State, acts as the Holy See's minister of foreign affairs.
The Secretariat of State is the only body of the Curia that is situated within Vatican City. The others are in buildings in different parts of Rome that have extraterritorial rights similar to those of embassies.
Among the most active of the major Curial institutions are the Congregation for the Doctrine of the Faith, which oversees the Catholic Church's doctrine; the Congregation for Bishops, which coordinates the appointment of bishops worldwide; the Congregation for the Evangelization of Peoples, which oversees all missionary activities; and the Pontifical Council for Justice and Peace, which deals with international peace and social issues.
Three tribunals exercise judicial power. The Roman Rota handles normal judicial appeals, the most numerous being those that concern alleged nullity of marriage. The Apostolic Signatura is the supreme appellate and administrative court concerning decisions even of the Roman Rota and administrative decisions of ecclesiastical superiors (bishops and superiors of religious institutes), such as closing a parish or removing someone from office. It also oversees the work of other ecclesiastical tribunals at all levels. The Apostolic Penitentiary deals not with external judgments or decrees, but with matters of conscience, granting absolutions from censures, dispensations, commutations, validations, condonations, and other favors; it also grants indulgences.
The Prefecture for the Economic Affairs of the Holy See coordinates the finances of the Holy See departments and supervises the administration of all offices, whatever be their degree of autonomy, that manage these finances. The most important of these is the Administration of the Patrimony of the Apostolic See.
The Prefecture of the Papal Household is responsible for the organization of the papal household, audiences, and ceremonies (apart from the strictly liturgical part).
The Holy See does not dissolve upon a pope's death or resignation. It instead operates under a different set of laws "sede vacante". During this interregnum, the heads of the dicasteries of the Curia (such as the prefects of congregations) cease immediately to hold office, the only exceptions being the Major Penitentiary, who continues his important role regarding absolutions and dispensations, and the Camerlengo of the Holy Roman Church, who administers the temporalities ("i.e.", properties and finances) of the See of St. Peter during this period. The government of the See, and therefore of the Catholic Church, then falls to the College of Cardinals. Canon law prohibits the College and the Camerlengo from introducing any innovations or novelties in the government of the Church during this period.
In 2001, the Holy See had a revenue of 422.098 billion Italian lire (about US$202 million at the time), and a net income of 17.720 billion Italian lire (about US$8 million). According to an article by David Leigh in the "Guardian" newspaper, a 2012 report from the Council of Europe identified the value of a section of the Vatican's property assets as an amount in excess of €680m (£570m); as of January 2013, Paolo Mennini, a papal official in Rome, manages this portion of the Holy See's assets—consisting of British investments, other European holdings and a currency trading arm. The "Guardian" newspaper described Mennini and his role in the following manner: "... Paolo Mennini, who is in effect the pope's merchant banker. Mennini heads a special unit inside the Vatican called the extraordinary division of APSA – "Amministrazione del Patrimonio della Sede Apostolica" – which handles the "patrimony of the Holy See"."
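The lira and dollar figures quoted above can be cross-checked for internal consistency by computing the exchange rate each pair implies; a minimal sketch (the ~2,000–2,300 lire/USD range used as a plausibility band is an assumption based on the quoted amounts, not stated in the source):

```python
# 2001 Holy See accounts, as quoted: revenue and net income in Italian lire
# alongside their approximate US-dollar equivalents.
revenue_lire = 422.098e9
revenue_usd = 202e6
net_income_lire = 17.720e9
net_income_usd = 8e6

# Implied exchange rates (lire per US dollar) from each pair of figures.
rate_from_revenue = revenue_lire / revenue_usd
rate_from_income = net_income_lire / net_income_usd
print(f"{rate_from_revenue:.0f} lire/USD vs {rate_from_income:.0f} lire/USD")
# Both land in the roughly 2,000-2,300 lire/USD band, so the two rounded
# dollar figures are mutually consistent with a single 2001-era rate.
```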
The Orders, decorations, and medals of the Holy See are conferred by the pope as temporal sovereign and "fons honorum" of the Holy See, similar to the orders awarded by other heads of state.
The Holy See has been recognized, both in state practice and in the writing of modern legal scholars, as a subject of public international law, with rights and duties analogous to those of States. Although the Holy See, as distinct from the Vatican City State, does not fulfill the long-established criteria in international law of statehood—having a permanent population, a defined territory, a stable government and the capacity to enter into relations with other states—its possession of full legal personality in international law is shown by the fact that it maintains diplomatic relations with 180 states, that it is a "member-state" in various intergovernmental international organizations, and that it is: "respected by the international community of sovereign States and treated as a subject of international law having the capacity to engage in diplomatic relations and to enter into binding agreements with one, several, or many states under international law that are largely geared to establish and preserving peace in the world."
Since medieval times the episcopal see of Rome has been recognized as a sovereign entity. The Holy See (not the State of Vatican City) maintains formal diplomatic relations with sovereign states, and also with the European Union and the Sovereign Military Order of Malta, as well as having relations of a special character with the Palestine Liberation Organization; 69 of the diplomatic missions accredited to the Holy See are situated in Rome. The Holy See maintains 180 permanent diplomatic missions abroad, of which 74 are non-residential, so that many of its 106 resident missions are accredited to two or more countries or international organizations. The diplomatic activities of the Holy See are directed by the Secretariat of State (headed by the Cardinal Secretary of State), through the Section for Relations with States. There are 13 internationally recognized states with which the Holy See does not have relations. The Holy See is the only European subject of international law that has diplomatic relations with the government of the Republic of China (Taiwan) as representing China, rather than the government of the People's Republic of China (see Holy See–Taiwan relations).
The British Foreign and Commonwealth Office speaks of Vatican City as the "capital" of the Holy See, although it compares the legal personality of the Holy See to that of the Crown in Christian monarchies and declares that the Holy See and the state of Vatican City are two international identities. It also distinguishes between the employees of the Holy See (2,750 working in the Roman Curia with another 333 working in the Holy See's diplomatic missions abroad) and the 1,909 employees of the Vatican City State. The British Ambassador to the Holy See uses more precise language, saying that the Holy See "is not the same as the Vatican City State. … (It) is the universal government of the Catholic Church and "operates from" the Vatican City State." This agrees exactly with the expression used by the website of the United States Department of State, in giving information on both the Holy See and the Vatican City State: it too says that the Holy See "operates from the Vatican City State".
The Holy See is a member of various international organizations and groups including the International Atomic Energy Agency (IAEA), International Telecommunication Union, the Organization for Security and Co-operation in Europe (OSCE), the Organisation for the Prohibition of Chemical Weapons (OPCW) and the United Nations High Commissioner for Refugees (UNHCR). The Holy See is also a permanent observer in various international organizations, including the United Nations General Assembly, the Council of Europe, UNESCO (United Nations Educational, Scientific and Cultural Organization), the World Trade Organization (WTO), and the Food and Agriculture Organization (FAO).
The Holy See participates as an observer to African Union, Arab League, Council of Europe, Organization of American States, International Organization for Migration, and in the United Nations and its agencies FAO, ILO, UNCTAD, UNEP, UNESCO, UN-HABITAT, UNHCR, UNIDO, UNWTO, WFP, WHO, WIPO. It participates as a guest in the Non-Aligned Movement (NAM), and as a full member in IAEA, OPCW, Organization for Security and Co-operation in Europe (OSCE).
Although the Holy See is closely associated with the Vatican City, the independent territory over which the Holy See is sovereign, the two entities are separate and distinct. After the Italian seizure of the Papal States in 1870, the Holy See had no territorial sovereignty. In spite of some uncertainty among jurists as to whether it could continue to act as an independent personality in international matters, the Holy See continued in fact to exercise the right to send and receive diplomatic representatives, maintaining relations with states that included the major powers Russia, Prussia, and Austria-Hungary. Where, in accordance with the decision of the 1815 Congress of Vienna, the Nuncio was not only a member of the Diplomatic Corps but its dean, this arrangement continued to be accepted by the other ambassadors. In the course of the 59 years during which the Holy See held no territorial sovereignty, the number of states that had diplomatic relations with it, which had been reduced to 16, actually increased to 29.
The State of the Vatican City was created by the Lateran Treaty in 1929 to "ensure the absolute and visible independence of the Holy See" and "to guarantee to it indisputable sovereignty in international affairs." Archbishop Jean-Louis Tauran, the Holy See's former Secretary for Relations with States, said that the Vatican City is a "minuscule support-state that guarantees the spiritual freedom of the pope with the minimum territory".
The Holy See, not the Vatican City, maintains diplomatic relations with states. Foreign embassies are accredited to the Holy See, not to the Vatican City, and it is the Holy See that establishes treaties and concordats with other sovereign entities. When necessary, the Holy See will enter a treaty on behalf of the Vatican City.
Under the terms of the Lateran Treaty, the Holy See has extraterritorial authority over various sites in Rome and two Italian sites outside of Rome, including the Pontifical Palace at Castel Gandolfo. The same authority is extended under international law over the Apostolic Nunciature of the Holy See in a foreign country.
Though, like various European powers, earlier popes recruited Swiss mercenaries as part of an army, the Pontifical Swiss Guard was founded by Pope Julius II on 22 January 1506 as the personal bodyguards of the pope and continues to fulfill that function. It is listed in the "Annuario Pontificio" under "Holy See", not under "State of Vatican City". At the end of 2005, the Guard had 134 members. Recruitment is arranged by a special agreement between the Holy See and Switzerland. All recruits must be Catholic, unmarried males with Swiss citizenship who have completed their basic training with the Swiss Armed Forces with certificates of good conduct, be between the ages of 19 and 30, and be at least 175 cm (5 ft 9 in) in height. Members are armed with small arms and the traditional halberd (also called the Swiss voulge), and trained in bodyguarding tactics.
The police force within Vatican City, known as the Corps of Gendarmerie of Vatican City, belongs to the city state, not to the Holy See.
The Holy See has signed the UN Treaty on the Prohibition of Nuclear Weapons, a binding agreement for negotiations towards the total elimination of nuclear weapons.
The difference between the two coats of arms is that the arms of the Holy See have the gold key in bend and the silver key in bend sinister (as in the sede vacante coat of arms and in the external ornaments of the papal coats of arms of individual popes), while the reversed arrangement of the keys was chosen for the arms of the newly founded Vatican City State in 1929.
Honduras
Honduras, officially the Republic of Honduras, is a country in Central America. It is bordered to the west by Guatemala, to the southwest by El Salvador, to the southeast by Nicaragua, to the south by the Pacific Ocean at the Gulf of Fonseca, and to the north by the Gulf of Honduras, a large inlet of the Caribbean Sea.
Honduras was home to several important Mesoamerican cultures, most notably the Maya, before the Spanish Colonization in the sixteenth century. The Spanish introduced Roman Catholicism and the now predominant Spanish language, along with numerous customs that have blended with the indigenous culture. Honduras became independent in 1821 and has since been a republic, although it has consistently endured much social strife and political instability, and remains one of the poorest countries in the Western Hemisphere. In 1960, the northern part of what was the Mosquito Coast was transferred from Nicaragua to Honduras by the International Court of Justice.
The nation's economy is primarily agricultural, making it especially vulnerable to natural disasters such as Hurricane Mitch in 1998. The lower class is primarily agriculturally based while wealth is concentrated in the country's urban centers. Honduras has a Human Development Index of 0.625, classifying it as a nation with medium development. When the Index is adjusted for income inequality, its Inequality-adjusted Human Development Index is 0.443.
Honduran society is predominantly Mestizo; however, American Indian, black and white individuals also live in Honduras. The nation had relatively high political stability until its 2009 coup d'état; unrest flared again around the 2017 presidential election.
Honduras spans about and has a population exceeding million. Its northern portions are part of the Western Caribbean Zone, as reflected in the area's demographics and culture. Honduras is known for its rich natural resources, including minerals, coffee, tropical fruit, and sugar cane, as well as for its growing textiles industry, which serves the international market.
The literal meaning of the term "Honduras" is "depths" in Spanish. The name could either refer to the bay of Trujillo as an anchorage, "fondura" in the Leonese dialect of Spanish, or to Columbus's alleged quote that "Gracias a Dios que hemos salido de esas Honduras" ("Thank God we have departed from those depths").
It was not until the end of the 16th century that "Honduras" was used for the whole province. Prior to 1580, "Honduras" referred to only the eastern part of the province, and "Higueras" referred to the western part. Another early name is Guaymuras, revived as the name for the political dialogue in 2009 that took place in Honduras as opposed to Costa Rica.
Hondurans are often referred to as "Catracho" or "Catracha" (fem) in Spanish. The word was coined by Nicaraguans and derives from the last name of the Spanish Honduran General Florencio Xatruch, who in 1857 led Honduran armed forces against an attempted invasion by North American adventurer William Walker. The nickname is considered complimentary, not derogatory.
In pre-Columbian times, almost all of modern Honduras was part of the Mesoamerican cultural area, with the exception of La Mosquitia in the extreme east, which seems to have been more connected to the Isthmo-Colombian area although also in contact with and influenced by Mesoamerican societies. In the extreme west, Mayan civilization flourished for hundreds of years. The dominant and most well-known and well-studied state within Honduras' borders was in Copán, which was located in a mainly non-Maya area, or on the frontier between Maya and non-Maya areas. Copán declined with other Lowland centres during the conflagrations of the Terminal Classic in the 9th century. The Maya of this civilization survive in western Honduras as the Ch'orti', isolated from their Choltian linguistic peers to the west.
However, Copán represents only a fraction of Honduran pre-Columbian history. Remnants of other civilizations are found throughout the country. Archaeologists have studied sites such as and La Sierra in the Naco Valley, Los Naranjos on Lake Yojoa, Yarumela in the Comayagua Valley, La Ceiba and Salitron Viejo (both now under the Cajón Dam reservoir), Selin Farm and Cuyamel in the Aguan valley, Cerro Palenque, Travesia, Curruste, Ticamaya, Despoloncal, and Playa de los Muertos in the lower Ulúa River valley, and many others.
In 2012, LiDAR scanning revealed that several previously unknown high density settlements existed in La Mosquitia, corresponding to the legend of "La Ciudad Blanca". Excavation and study has since improved knowledge of the region's history. It is estimated that these settlements reached their zenith from 500 to 1000 AD.
On his fourth and final voyage to the New World in 1502, Christopher Columbus landed near the modern town of Trujillo, near Guaimoreto Lagoon, becoming the first European to visit the Bay Islands on the coast of Honduras. On 30 July 1502, Columbus sent his brother Bartholomew to explore the islands, and Bartholomew encountered a Mayan trading vessel from Yucatán, carrying well-dressed Maya and a rich cargo. Bartholomew's men stole the cargo they wanted and kidnapped the ship's elderly captain to serve as an interpreter in the first recorded encounter between the Spanish and the Maya.
In March 1524, Gil González Dávila became the first Spaniard to enter Honduras as a conquistador, followed by Hernán Cortés, who had brought forces down from Mexico. Much of the conquest took place in the following two decades, first by groups loyal to Cristóbal de Olid, and then by those loyal to Francisco de Montejo, but most particularly by those following Pedro de Alvarado. In addition to Spanish resources, the conquerors relied heavily on armed forces from Mexico—Tlaxcalan and Mexica armies of thousands who remained garrisoned in the region.
Resistance to conquest was led in particular by Lempira. Many regions in the north of Honduras never fell to the Spanish, notably the Miskito Kingdom. After the Spanish conquest, Honduras became part of Spain's vast empire in the New World within the Kingdom of Guatemala. Trujillo and Gracias were the first city-capitals. The Spanish ruled the region for approximately three centuries.
Honduras was organized as a province of the Kingdom of Guatemala and the capital was fixed, first at Trujillo on the Atlantic coast, and later at Comayagua, and finally at Tegucigalpa in the central part of the country.
Silver mining was a key factor in the Spanish conquest and settlement of Honduras. Initially the mines were worked by local people through the encomienda system, but as disease and resistance made this option less available, slaves from other parts of Central America were brought in. When local slave trading stopped at the end of the sixteenth century, African slaves, mostly from Angola, were imported. After about 1650, very few slaves or other outside workers arrived in Honduras.
Although the Spanish conquered the southern or Pacific portion of Honduras fairly quickly, they were less successful on the northern, or Atlantic side. They managed to found a few towns along the coast, at Puerto Caballos and Trujillo in particular, but failed to conquer the eastern portion of the region and many pockets of independent indigenous people as well. The Miskito Kingdom in the northeast was particularly effective at resisting conquest. The Miskito Kingdom found support from northern European privateers, pirates and especially the British (formerly English) colony of Jamaica, which placed much of the area under its protection after 1740.
Honduras gained independence from Spain in 1821 and was a part of the First Mexican Empire until 1823, when it became part of the United Provinces of Central America. It has been an independent republic and has held regular elections since 1838. In the 1840s and 1850s Honduras participated in several failed attempts at Central American unity, such as the Confederation of Central America (1842–1845), the covenant of Guatemala (1842), the Diet of Sonsonate (1846), the Diet of Nacaome (1847) and National Representation in Central America (1849–1852). Although Honduras eventually adopted the name Republic of Honduras, the unionist ideal never waned, and Honduras was one of the Central American countries that pushed the hardest for a policy of regional unity.
Policies favoring international trade and investment began in the 1870s, and soon foreign interests became involved, first in shipping from the north coast, especially tropical fruit and most notably bananas, and then in building railroads. In 1888, a projected railroad line from the Caribbean coast to the capital, Tegucigalpa, ran out of money when it reached San Pedro Sula. As a result, San Pedro grew into the nation's primary industrial center and second-largest city. Comayagua was the capital of Honduras until 1880, when the capital moved to Tegucigalpa.
Since independence, nearly 300 small internal rebellions and civil wars have occurred in the country, including some changes of régime.
In the late nineteenth century, Honduras granted land and substantial exemptions to several US-based fruit and infrastructure companies in return for developing the country's northern regions. Thousands of workers came to the north coast as a result to work in banana plantations and other businesses that grew up around the export industry. Banana-exporting companies, dominated until 1930 by the Cuyamel Fruit Company, as well as the United Fruit Company, and Standard Fruit Company, built an enclave economy in northern Honduras, controlling infrastructure and creating self-sufficient, tax-exempt sectors that contributed relatively little to economic growth. American troops landed in Honduras in 1903, 1907, 1911, 1912, 1919, 1924 and 1925.
In 1904, the writer O. Henry coined the term "banana republic" to describe Honduras, publishing a book called "Cabbages and Kings", about a fictional country, Anchuria, inspired by his experiences in Honduras, where he had lived for six months. In "The Admiral", O. Henry refers to the nation as a "small maritime banana republic"; naturally, the fruit was the entire basis of its economy. According to a literary analyst writing for "The Economist", "his phrase neatly conjures up the image of a tropical, agrarian country. But its real meaning is sharper: it refers to the fruit companies from the United States that came to exert extraordinary influence over the politics of Honduras and its neighbors." In addition to drawing Central American workers north, the fruit companies encouraged immigration of workers from the English-speaking Caribbean, notably Jamaica and Belize, which introduced an African-descended, English-speaking and largely Protestant population into the country, although many of these workers left following changes to immigration law in 1939.
Honduras joined the Allied Nations after Pearl Harbor, on 8 December 1941, and signed the Declaration by United Nations on 1 January 1942, along with twenty-five other governments.
Constitutional crises in the 1940s led to reforms in the 1950s. One reform gave workers permission to organize, and a 1954 general strike that paralyzed the northern part of the country for more than two months won further reforms. In 1963 a military coup unseated democratically elected President Ramón Villeda Morales. In 1960, the northern part of what was the Mosquito Coast was transferred from Nicaragua to Honduras by the International Court of Justice.
In 1969, Honduras and El Salvador fought what became known as the Football War. Border tensions led to acrimony between the two countries after Oswaldo López Arellano, the president of Honduras, blamed the deteriorating Honduran economy on immigrants from El Salvador. The relationship reached a low when El Salvador met Honduras for a three-round football elimination match preliminary to the World Cup.
Tensions escalated and on 14 July 1969, the Salvadoran army invaded Honduras. The Organization of American States (OAS) negotiated a cease-fire which took effect on 20 July and brought about a withdrawal of Salvadoran troops in early August. Contributing factors to the conflict were a boundary dispute and the presence of thousands of Salvadorans living in Honduras illegally. After the week-long war, as many as 130,000 Salvadoran immigrants were expelled.
Hurricane Fifi caused severe damage when it skimmed the northern coast of Honduras on 18 and 19 September 1974. Melgar Castro (1975–78) and Paz Garcia (1978–82) largely built the current physical infrastructure and telecommunications system of Honduras.
In 1979, the country returned to civilian rule. A constituent assembly was popularly elected in April 1980 to write a new constitution, and general elections were held in November 1981. The constitution was approved in 1982 and the PLH government of Roberto Suazo won the election with a promise to carry out an ambitious program of economic and social development to tackle the recession in which Honduras found itself. He launched ambitious social and economic development projects sponsored by American development aid. Honduras became host to the largest Peace Corps mission in the world, and nongovernmental and international voluntary agencies proliferated. The Peace Corps withdrew its volunteers in 2012, citing safety concerns.
During the early 1980s, the United States established a continuing military presence in Honduras to support El Salvador, the Contra guerrillas fighting the Nicaraguan government, and also develop an airstrip and modern port in Honduras. Though spared the bloody civil wars wracking its neighbors, the Honduran army quietly waged campaigns against Marxist–Leninist militias such as the Cinchoneros Popular Liberation Movement, notorious for kidnappings and bombings, and against many non-militants as well. The operation included a CIA-backed campaign of extrajudicial killings by government-backed units, most notably Battalion 316.
In 1998, Hurricane Mitch caused massive and widespread destruction. Honduran President Carlos Roberto Flores said that fifty years of progress in the country had been reversed. Mitch destroyed about 70% of the country's crops and an estimated 70–80% of the transportation infrastructure, including nearly all bridges and secondary roads. Across Honduras 33,000 houses were destroyed, and an additional 50,000 damaged. Some 5,000 people were killed, and 12,000 more injured. Total losses were estimated at US$3 billion.
In 2007, President of Honduras Manuel Zelaya and President of the United States George W. Bush began talks on US assistance to Honduras to tackle the latter's growing drug cartels in La Mosquitia, eastern Honduras, using US Special Forces. This marked the beginning of a new foothold for the US military's continued presence in Central America.
Under Zelaya, Honduras joined ALBA in 2008, but withdrew in 2010 after the 2009 Honduran coup d'état. In 2009, a constitutional crisis resulted when power transferred in a coup from the president to the head of Congress. The OAS suspended Honduras because it did not regard its government as legitimate.
Countries around the world, the OAS, and the United Nations formally and unanimously condemned the action as a coup d'état, refusing to recognize the "de facto" government, even though the lawyers consulted by the Library of Congress submitted to the United States Congress an opinion that declared the coup legal. The Honduran Supreme Court also ruled that the proceedings had been legal. The government that followed the "de facto government" established a truth and reconciliation commission, "Comisión de la Verdad y Reconciliación", which after more than a year of research and debate concluded that the ousting had been a coup d'état, and illegal in the commission's opinion.
Honduras borders the Caribbean Sea on its north coast, and the Pacific Ocean lies to the south through the Gulf of Fonseca. Honduras consists mainly of mountains, with narrow plains along the coasts. A large undeveloped lowland jungle, La Mosquitia, lies in the northeast, and the heavily populated lowland Sula valley in the northwest. In La Mosquitia lies the UNESCO World Heritage Site Río Plátano Biosphere Reserve, with the Coco River, which divides Honduras from Nicaragua.
The Islas de la Bahía and the Swan Islands are off the north coast. Misteriosa Bank and Rosario Bank, north of the Swan Islands, fall within the Exclusive Economic Zone (EEZ) of Honduras.
Natural resources include timber, gold, silver, copper, lead, zinc, iron ore, antimony, coal, fish, shrimp, and hydropower.
The climate varies from tropical in the lowlands to temperate in the mountains. The central and southern regions are relatively hotter and less humid than the northern coast.
The region is considered a biodiversity hotspot because of the many plant and animal species found there. Like other countries in the region, it contains vast biological resources. Honduras hosts more than 6,000 species of vascular plants, of which 630 (described so far) are orchids; around 250 reptiles and amphibians, more than 700 bird species, and 110 mammalian species, of which half are bats.
In the northeastern region of La Mosquitia lies the Río Plátano Biosphere Reserve, a lowland rainforest which is home to a great diversity of life. The reserve was added to the UNESCO World Heritage Sites List in 1982.
Honduras has rain forests, cloud forests (which can occur at high elevations above sea level), mangroves, savannas and mountain ranges with pine and oak trees, and the Mesoamerican Barrier Reef System. In the Bay Islands there are bottlenose dolphins, manta rays, parrot fish, schools of blue tang and whale sharks.
Deforestation resulting from logging is rampant in Olancho Department. The clearing of land for agriculture is prevalent in the largely undeveloped La Mosquitia region, causing land degradation and soil erosion.
Lake Yojoa, which is Honduras' largest source of fresh water, is polluted by heavy metals produced from mining activities. Some rivers and streams are also polluted by mining.
Honduras is governed within a framework of a presidential representative democratic republic. The President of Honduras is both head of state and head of government. Executive power is exercised by the Honduran government. Legislative power is vested in the National Congress of Honduras. The judiciary is independent of both the executive branch and the legislature.
The National Congress of Honduras ("Congreso Nacional") has 128 members ("diputados"), elected for four-year terms by proportional representation. Congressional seats are assigned to the parties' candidates on a departmental basis in proportion to the number of votes each party receives.
In 1963, a military coup removed the democratically elected president, Ramón Villeda Morales. A string of authoritarian military governments held power uninterrupted until 1981, when Roberto Suazo Córdova was elected president.
The party system was dominated by the conservative National Party of Honduras (Partido Nacional de Honduras: PNH) and the liberal Liberal Party of Honduras (Partido Liberal de Honduras: PLH) until the 2009 Honduran coup d'état removed Manuel Zelaya from office and put Roberto Micheletti in his place.
In late 2012, 1540 persons were interviewed by ERIC in collaboration with the Jesuit university, as reported by Associated Press. This survey found that 60.3% believed the police were involved in crime, 44.9% had "no confidence" in the Supreme Court, and 72% thought there was electoral fraud in the primary elections of November 2012. Also, 56% expected the presidential, legislative and municipal elections of 2013 to be fraudulent.
Current Honduran president Juan Orlando Hernández took office on 27 January 2014. After managing to stand for a second term, a very close election in 2017 left uncertainty as to whether Hernandez or his main challenger, television personality Salvador Nasralla, had prevailed.
Honduras and Nicaragua had tense relations throughout 2000 and early 2001 due to a boundary dispute off the Atlantic coast. Nicaragua imposed a 35% tariff against Honduran goods due to the dispute.
In June 2009 a coup d'état ousted President Manuel Zelaya; he was taken in a military aircraft to neighboring Costa Rica. The General Assembly of the United Nations voted to denounce the coup and called for the restoration of Zelaya. Several Latin American nations, including Mexico, temporarily severed diplomatic relations with Honduras. In July 2010, full diplomatic relations with Mexico were re-established. The United States sent out mixed messages after the coup; President Barack Obama called the ouster a coup and expressed support for Zelaya's return to power. US Secretary of State Hillary Clinton, advised by John Negroponte, the former Reagan-era Ambassador to Honduras implicated in the Iran–Contra affair, refrained from expressing support. She has since explained that the US would have had to cut aid if it had called Zelaya's ouster a military coup, although the US has a record of ignoring such designations when it chooses. Zelaya had expressed an interest in Hugo Chávez' Bolivarian Alliance for the Peoples of Our America (ALBA), and had actually joined in 2008. After the 2009 coup, Honduras withdrew its membership.
This interest in regional agreements may have increased the alarm of establishment politicians. When Zelaya began calling for a "fourth ballot box" to determine whether Hondurans wished to convoke a special constitutional congress, this sounded to some like the constitutional amendments that had extended the terms of both Hugo Chávez and Evo Morales. "Chávez has served as a role model for like-minded leaders intent on cementing their power. These presidents are barely in office when they typically convene a constitutional convention to guarantee their reelection," said a 2009 Spiegel International analysis, which noted that one reason to join ALBA was discounted Venezuelan oil. In addition to Chávez and Morales, Carlos Menem of Argentina, Fernando Henrique Cardoso of Brazil and Colombian President Álvaro Uribe had all taken this step, and Washington and the EU were both accusing the Sandinista government in Nicaragua of tampering with election results. Politicians of all stripes expressed opposition to Zelaya's referendum proposal, and the Attorney-General accused him of violating the constitution. The Honduran Supreme Court agreed, saying that the constitution had put the Supreme Electoral Tribunal in charge of elections and referenda, not the National Statistics Institute, which Zelaya had proposed to have run the count. Whether or not Zelaya's removal from power had constitutional elements, the Honduran constitution explicitly protects all Hondurans from forced expulsion from Honduras.
The United States maintains a small military presence at one Honduran base. The two countries conduct joint peacekeeping, counter-narcotics, humanitarian, disaster relief, medical and civic action exercises. U.S. troops conduct and provide logistics support for a variety of bilateral and multilateral exercises. The United States is Honduras' chief trading partner.
Honduras has a military with the Honduran Army, Honduran Navy and Honduran Air Force.
In 2017, Honduras signed the UN treaty on the Prohibition of Nuclear Weapons.
Honduras is divided into 18 departments. The capital city is Tegucigalpa in the Central District within the department of Francisco Morazán.
A new administrative division called ZEDE ("Zonas de empleo y desarrollo económico") was created in 2013. ZEDEs have a high level of autonomy with their own political system at a judicial, economic and administrative level, and are based on free market capitalism.
The World Bank categorizes Honduras as a lower-middle-income nation. The nation's per capita income sits at around 600 US dollars, making it one of the lowest in North America.
In 2010, 50% of the population were living below the poverty line. By 2016 more than 66% were living below the poverty line.
Economic growth in the last few years has averaged 7% a year, one of the highest rates in Latin America (2010). Despite this, Honduras has seen the least development amongst all Central American countries. Honduras is ranked 130 of 188 countries with a Human Development Index of .625 that classifies the nation as having medium development (2015). The three factors that go into Honduras' HDI (an extended and healthy life, accessibility of knowledge and standard of living) have all improved since 1990 but still remain relatively low with life expectancy at birth being 73.3, expected years of schooling being 11.2 (mean of 6.2 years) and GNI per capita being $4,466 (2015). The HDI for Latin America and the Caribbean overall is 0.751 with life expectancy at birth being 68.6, expected years of schooling being 11.5 (mean of 6.6) and GNI per capita being $6,281 (2015).
The 2009 Honduran coup d'état led to a variety of economic trends in the nation. Overall growth slowed, averaging 5.7 percent from 2006 to 2008 but falling to 3.5 percent annually between 2010 and 2013. Following the coup, trends of decreasing poverty and extreme poverty were reversed. The nation saw an increase of 13.2 percent in poverty and 26.3 percent in extreme poverty in just 3 years. Furthermore, unemployment grew between 2008 and 2012 from 6.8 percent to 14.1 percent.
Because much of the Honduran economy is based on small-scale agriculture of only a few exports, natural disasters have a particularly devastating impact. Natural disasters, such as Hurricane Mitch in 1998, have contributed to this inequality as they particularly affect poor rural areas. Additionally, they are a large contributor to food insecurity in the country, as farmers are left unable to provide for their families. A study done by a Honduran NGO, World Neighbors, determined that the terms "increased workload, decreased basic grains, expensive food, and fear" were most associated with Hurricane Mitch.
The rural and urban poor were hit hardest by Hurricane Mitch. Those in southern and western regions specifically were considered most vulnerable, as they were both subject to environmental destruction and home to many subsistence farmers. Due to disasters such as Hurricane Mitch, the agricultural economic sector has declined a third in the past twenty years. This is mostly due to a decline in exports, such as bananas and coffee, that were affected by factors such as natural disasters. Indigenous communities along the Patuca River were hit extremely hard as well. The mid-Patuca region was almost completely destroyed. Over 80% of the rice harvest and all of the banana, plantain, and manioc harvests were lost. Relief and reconstruction efforts following the storm were partial and incomplete, reinforcing existing levels of poverty rather than reversing them, especially for indigenous communities. The period between the end of food donations and the following harvest led to extreme hunger, causing deaths amongst the Tawahka population. Those that were considered the most "land-rich" lost 36% of their total land on average. Those that were the most "land-poor" lost less total land but a greater share of their overall total. This meant that those hit hardest were single women, as they constitute the majority of this population.
Since the 1970s when Honduras was designated a "food priority country" by the UN, organizations such as The World Food Program (WFP) have worked to decrease malnutrition and food insecurity. A large majority of Honduran farmers live in extreme poverty, or below 180 US dollars per capita. Currently one fourth of children are affected by chronic malnutrition. WFP is currently working with the Honduran government on a School Feeding Program which provides meals for 21,000 Honduran schools, reaching 1.4 million school children. WFP also participates in disaster relief through reparations and emergency response in order to aid in quick recovery that tackles the effects of natural disasters on agricultural production.
Honduras' Poverty Reduction Strategy was implemented in 1999 and aimed to cut extreme poverty in half by 2015. While spending on poverty-reduction aid increased, there was only a 2.5% increase in GDP between 1999 and 2002. This improvement still left Honduras behind countries that lacked Poverty Reduction Strategy aid. The World Bank believes that this inefficiency stems from a lack of focus on infrastructure and rural development. Extreme poverty saw a low of 36.2 percent only two years after the implementation of the strategy but then increased to 66.5 percent by 2012.
Poverty Reduction Strategies were also intended to affect social policy through increased investment in the education and health sectors. This was expected to lift poor communities out of poverty while also increasing the workforce as a means of stimulating the Honduran economy. Conditional cash transfers were used to do this by the Family Assistance Program. This program was restructured in 1998 in an attempt to increase the effectiveness of cash transfers for health and education, specifically for those in extreme poverty. Overall, spending within Poverty Reduction Strategies has been focused on the education and health sectors, increasing social spending from 44% of Honduras' GDP in 2000 to 51% in 2004.
Critics of aid from International Finance Institutions believe that the World Bank's Poverty Reduction Strategy resulted in little substantive change to Honduran policy. According to Jose Cuesta of Cambridge University, the Poverty Reduction Strategies also lacked clear priorities, a specific intervention strategy, strong commitment to the strategy, and more effective macro-level economic reforms. Due to this, he believes that the strategy did not provide a pathway for economic development that could lift Honduras out of poverty, resulting in neither lasting economic growth nor poverty reduction.
Prior to the 2009 coup, Honduras widely expanded social spending and sharply increased the minimum wage. Efforts to decrease inequality were swiftly reversed following the coup. When Zelaya was removed from office, social spending as a percent of GDP decreased from 13.3 percent in 2009 to 10.9 percent in 2012. This decrease in social spending exacerbated the effects of the recession, which the nation had previously been relatively well equipped to deal with.
The World Bank Group Executive Board approved a plan known as the new Country Partnership Framework (CPF). This plan's objectives are to expand social program coverage, strengthen infrastructure, increase financing accessibility, strengthen the regulatory framework and institutional capacity, improve the productivity of rural areas, strengthen natural disaster and climate change resiliency, and build up local governments so that violence and crime rates will decrease. The overall aim of the initiative is to decrease inequality and the vulnerability of certain populations while increasing economic growth. Additionally, the signing of the U.S.–Central America Free Trade Agreement (CAFTA) was meant to diversify the economy in order to promote growth and expand the range of exports the country relies on.
Levels of income inequality in Honduras are higher than in any other Latin American country. Unlike other Latin American countries, inequality steadily increased in Honduras between 1991 and 2005. Between 2006 and 2010 inequality saw a decrease but increased again in 2010.
When Honduras' Human Development Index is adjusted for inequality (a measure known as the IHDI), its development index is reduced to .443. The levels of inequality in each aspect of development can also be assessed. In 2015, inequality of life expectancy at birth was 19.6%, inequality in education was 24.4%, and inequality in income was 41.5%. The overall loss in human development due to inequality was 29.2%.
The IHDI for Latin America and the Caribbean overall is 0.575 with an overall loss of 23.4%. In 2015 for the entire region, inequality of life expectancy at birth was 22.9%, inequality in education was 14.0% and inequality in income was 34.9%. While Honduras has a higher life expectancy than other countries in the region (before and after inequality adjustments), its quality of education and economic standard of living are lower. Income inequality and education inequality have a large impact on the overall development of the nation.
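The relationship between the quoted HDI values, the inequality-adjusted indices, and the stated overall percentage losses can be checked with simple arithmetic: the IHDI equals the HDI multiplied by one minus the overall loss. A minimal sketch (the loss relation is the standard UNDP definition; the rounding tolerance is an assumption to absorb rounding in the published figures):

```python
# Check: IHDI = HDI x (1 - overall percentage loss due to inequality)

def ihdi(hdi: float, loss_pct: float) -> float:
    """Inequality-adjusted HDI from the plain HDI and the quoted % loss."""
    return hdi * (1 - loss_pct / 100)

# Honduras (2015): HDI 0.625 with a 29.2% overall loss gives roughly 0.443
assert abs(ihdi(0.625, 29.2) - 0.443) < 0.001

# Latin America and the Caribbean: HDI 0.751 with a 23.4% loss gives roughly 0.575
assert abs(ihdi(0.751, 23.4) - 0.575) < 0.001
```

Both figures reproduce the inequality-adjusted indices quoted above to within rounding.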
Inequality also exists between rural and urban areas as it relates to the distribution of resources. Poverty is concentrated in southern, eastern, and western regions where rural and indigenous peoples live. North and central Honduras are home to the country's industries and infrastructure, resulting in low levels of poverty. Poverty is concentrated in rural Honduras, a pattern that is reflected throughout Latin America. The effects of poverty on rural communities are vast. Poor communities typically live in adobe homes, lack material resources, have limited access to medical resources, and live off of basics such as rice, maize and beans.
The lower class predominantly consists of rural subsistence farmers and landless peasants. Since 1965 there has been an increase in the number of landless peasants in Honduras which has led to a growing class of urban poor individuals. These individuals often migrate to urban centers in search of work in the service sector, manufacturing, or construction. Demographers believe that without social and economic reform, rural to urban migration will increase, resulting in the expansion of urban centers. Within the lower class, underemployment is a major issue. Individuals that are underemployed often only work as part-time laborers on seasonal farms meaning their annual income remains low. In the 1980s peasant organizations and labor unions such as the National Federation of Honduran Peasants, The National Association of Honduran Peasants and the National Union of Peasants formed.
It is not uncommon for rural individuals to voluntarily enlist in the military; however, this often does not offer stable or promising career opportunities. The majority of high-ranking officials in the Honduran army are recruited from elite military academies. Additionally, the majority of enlistment in the military is forced. Forced recruitment largely relies on an alliance between the Honduran government, the military, and upper-class Honduran society. In urban areas males were often sought out from secondary schools, while in rural areas roadblocks aided the military in handpicking recruits. Higher socio-economic status enables individuals to more easily evade the draft.
The Honduran middle class is a small group defined by relatively low membership and income levels. Movement from the lower to the middle class is typically facilitated by higher education. Professionals, students, farmers, merchants, business employees, and civil servants are all considered part of the Honduran middle class. Employment opportunities and the industrial and commercial sectors are slow-growing, limiting middle-class membership.
The Honduran upper class has much higher income levels than the rest of the Honduran population, reflecting large amounts of income inequality. Much of the upper class attributes its success to the growth of cotton and livestock exports after World War II. The wealthy are not politically unified and differ in political and economic views.
The currency is the Honduran lempira.
The government operates both the electrical grid, Empresa Nacional de Energía Eléctrica (ENEE) and the land-line telephone service, Hondutel. ENEE receives heavy subsidies to counter its chronic financial problems, but Hondutel is no longer a monopoly. The telecommunication sector was opened to private investment on 25 December 2005, as required under CAFTA. The price of petroleum is regulated, and the Congress often ratifies temporary price regulation for basic commodities.
Gold, silver, lead and zinc are mined.
In 2005 Honduras signed CAFTA, a free trade agreement with the United States. In December 2005, Puerto Cortés, the primary seaport of Honduras, was included in the U.S. Container Security Initiative.
In 2006 the U.S. Department of Homeland Security and the Department of Energy announced the first phase of the Secure Freight Initiative (SFI), which built upon existing port security measures. SFI gave the U.S. government enhanced authority, allowing it to scan containers from overseas for nuclear and radiological materials in order to improve the risk assessment of individual US-bound containers. The initial phase of Secure Freight involved deploying of nuclear detection and other devices to six foreign ports:
Containers in these ports have been scanned since 2007 for radiation and other risk factors before they are allowed to depart for the United States.
For economic development a 2012 memorandum of understanding with a group of international investors obtained Honduran government approval to build a zone (city) with its own laws, tax system, judiciary and police, but opponents brought a suit against it in the Supreme Court, calling it a "state within a state". In 2013, Honduras' Congress ratified Decree 120, which led to the establishment of ZEDEs. The government began construction of the first zones in June 2015.
About half of the electricity sector in Honduras is privately owned. The remaining generation capacity is run by ENEE ("Empresa Nacional de Energía Eléctrica").
Key challenges in the sector are:
Infrastructure for transportation in Honduras consists of railways, roadways, seven ports and harbors, and 112 airports altogether (12 paved, 100 unpaved). The Ministry of Public Works, Transport and Housing (SOPTRAVI, by its Spanish acronym) is responsible for transport sector policy.
Water supply and sanitation in Honduras differ greatly from urban centers to rural villages. Larger population centers generally have modernized water treatment and distribution systems, but water quality is often poor because of lack of proper maintenance and treatment. Rural areas generally have basic drinking water systems with limited capacity for water treatment. Many urban areas have sewer systems in place to collect wastewater, but proper treatment of wastewater is rare. In rural areas sanitary facilities are generally limited to latrines and basic septic pits.
Water and sanitation services were historically provided by the Servicio Autónomo Nacional de Acueductos y Alcantarillados (SANAA). In 2003, the government enacted a new "water law" which called for the decentralization of water services. Under the 2003 law, local communities have both the right and the responsibility to own, operate, and control their own drinking water and wastewater systems. Since this law passed, many communities have joined together to address water and sanitation issues on a regional basis.
Many national and international non-government organizations have a history of working on water and sanitation projects in Honduras. International groups include the Red Cross, Water 1st, Rotary Club, Catholic Relief Services, Water for People, EcoLogic Development Fund, CARE, the Canadian Executive Service Organization (CESO-SACO), Engineers Without Borders – USA, Flood The Nations, Students Helping Honduras (SHH), Global Brigades, and Agua para el Pueblo in partnership with AguaClara at Cornell University.
In addition, many government organizations work on projects in Honduras, including the European Union, USAID, the Army Corps of Engineers, Cooperacion Andalucia, the government of Japan, and others.
In recent years Honduras has experienced very high levels of violence and criminality. Homicide violence reached a peak in 2012, with an average of 20 homicides a day. Cities such as San Pedro Sula and Tegucigalpa have registered homicide rates among the highest in the world. The violence is associated with drug trafficking, as Honduras is often a transit point, and with a number of urban gangs, mainly MS-13 and the 18th Street gang. But as recently as 2017, organizations such as InSight Crime reported figures of 42 homicides per 100,000 inhabitants, a 26% drop from 2016 figures.
Violence in Honduras increased after Plan Colombia was implemented and after Mexican President Felipe Calderón declared the war against drug trafficking in Mexico. Along with neighboring El Salvador and Guatemala, Honduras forms part of the Northern Triangle of Central America, which has been characterized as one of the most violent regions in the world. As a result of crime and increasing murder rates, the flow of migrants from Honduras to the U.S. also went up. The rise in violence in the region has received international attention.
Honduras had a population of in . The proportion of the population below the age of 15 in 2010 was 36.8%, 58.9% were between 15 and 65 years old, and 4.3% were 65 years old or older.
Since 1975, emigration from Honduras has accelerated as economic migrants and political refugees sought a better life elsewhere. A majority of expatriate Hondurans live in the United States. A 2012 US State Department estimate suggested that between 800,000 and one million Hondurans lived in the United States at that time, nearly 15% of the Honduran population. The large uncertainty about numbers is because numerous Hondurans live in the United States without a visa. In the 2010 census in the United States, 617,392 residents identified as Hondurans, up from 217,569 in 2000.
The ethnic breakdown of Honduran society was 90% Mestizo, 7% American Indian, 2% Black and 1% White (2017). The 1927 Honduran census provides no racial data, but in 1930 five classifications were created: white, Indian, Negro, yellow, and mestizo. This system was used in the 1935 and 1940 censuses. Mestizo was used to describe individuals who did not fit neatly into the categories of white, Indian, Negro or yellow, or who were of mixed white–Indian descent.
John Gillin considers Honduras to be one of thirteen "Mestizo countries" (along with Mexico, Guatemala, El Salvador, Nicaragua, Panama, Colombia, Venezuela, Cuba, Ecuador, Peru, Bolivia, and Paraguay). He claims that in much of Spanish America little attention is paid to race and race mixture, so that social status relies little on one's physical features. However, in "Mestizo countries" such as Honduras, this is not the case; social stratification from Spain was able to develop in these countries through colonization.
During colonization the majority of Honduras' indigenous population died of diseases like smallpox and measles resulting in a more homogenous indigenous population compared to other colonies. Nine indigenous and African American groups are recognized by the government in Honduras. The majority of Amerindians in Honduras are Lenca, followed by the Miskito, Cho'rti', Tolupan, Pech and Sumo. Around 50,000 Lenca individuals live in the west and western interior of Honduras while the other small native groups are located throughout the country.
The majority of blacks in Honduras are culturally ladino, meaning they are culturally Hispanic. Non-ladino groups in Honduras include the Black Carib, the Miskito, Arab immigrants and the black population of the Islas de la Bahía. The Black Carib population descended from freed slaves from Saint Vincent. The Miskito population (about 10,000 individuals) are the descendants of African and British immigrants and are extremely racially diverse. While the Black Carib and Miskito populations have similar origins, Black Caribs are considered black while Miskitos are considered indigenous. This is largely a reflection of cultural differences, as Black Caribs have retained much of their original African culture. The majority of Arab Hondurans are of Palestinian and Lebanese descent. They are known as "turcos" in Honduras because they migrated during the rule of the Ottoman Empire. They have maintained cultural distinctiveness and prospered economically.
The male to female ratio of the Honduran population is 1.01. This ratio stands at 1.05 at birth, 1.04 from 15–24 years old, 1.02 from 25–54 years old, .88 from 55–64 years old, and .77 for those 65 years or older.
The Gender Development Index (GDI) was .942 in 2015, with an HDI of .600 for females and .637 for males. Life expectancy at birth is 70.9 years for males and 75.9 for females. Expected years of schooling in Honduras is 10.9 years for males (mean of 6.1) and 11.6 for females (mean of 6.2). These measures do not reveal a large disparity between male and female development levels; however, GNI per capita differs vastly by gender: males have a GNI per capita of $6,254 while that of females is only $2,680. Honduras' overall GDI is higher than that of other medium-HDI nations (.871) but lower than the overall GDI for Latin America and the Caribbean (.981).
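The GDI figure cited above can be checked directly: the UNDP defines the GDI as the ratio of the female HDI to the male HDI. A minimal sketch, using the 2015 values quoted in this paragraph:

```python
# The UNDP Gender Development Index (GDI) is the ratio of female HDI to male HDI.
# Values are the 2015 Honduras figures cited above.
hdi_female = 0.600
hdi_male = 0.637

gdi = hdi_female / hdi_male
print(round(gdi, 3))  # 0.942, matching the reported GDI
```

Rounded to three decimal places this reproduces the .942 reported for Honduras.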
The United Nations Development Programme (UNDP) ranks Honduras 116th for measures including women's political power and female access to resources. The Gender Inequality Index (GII) depicts gender-based inequalities in Honduras in reproductive health, empowerment, and economic activity. Honduras had a GII of .461 and ranked 101 of 159 countries in 2015. 25.8% of Honduras' parliament is female, and 33.4% of adult females have a secondary education or higher while only 31.1% of adult males do. Despite this, male participation in the labor market is 84.4% while female participation is only 47.2%. Honduras' maternal mortality ratio is 129 per 100,000 live births, and the adolescent birth rate is 65.0 births per 1,000 women ages 15–19.
Familialism and machismo carry a lot of weight within Honduran society. Familialism refers to the idea of individual interests being second to that of the family, most often in relation to dating and marriage, abstinence, and parental approval and supervision of dating. Aggression and proof of masculinity through physical dominance are characteristic of machismo.
Honduras has historically functioned with a patriarchal system, like many other Latin American countries. Honduran men claim responsibility for family decisions, including reproductive health decisions. Recently Honduras has seen an increase in challenges to this notion as feminist movements grow and access to global media increases. There has been an increase in educational attainment, labor force participation, urban migration, late-age marriage, and contraceptive use amongst Honduran women.
Between 1971 and 2001 the Honduran total fertility rate decreased from 7.4 births per woman to 4.4. This is largely attributable to increases in educational attainment and workforce participation by women, as well as more widespread use of contraceptives. In 1996, 50% of women were using at least one type of contraceptive; by 2001, 62% were, largely owing to female sterilization, the contraceptive pill, injectable birth control, and IUDs. A study of Honduran men and women done in 2001 reflects how reproductive health and decision making are conceptualized in Honduras: 28% of men and 25% of women surveyed believed men were responsible for decisions regarding family size and family planning use; 21% of men believed men were responsible for both.
Sexual violence against women is a serious problem in Honduras and has caused many to migrate to the U.S. The prevalence of child sexual abuse was 7.8% in Honduras, with the majority of reports concerning children under the age of 11. Women who experienced sexual abuse as children were found to be twice as likely to be in violent relationships. Femicide is widespread in Honduras; in 2014, 40% of unaccompanied refugee minors were female. Gangs are responsible for much of the sexual violence against women. According to the UN Special Rapporteur on Violence Against Women, violent deaths of women increased 263.4 percent between 2005 and 2013. Impunity for sexual violence and femicide crimes stood at 95 percent in 2014. Additionally, many girls are forced into human trafficking and prostitution.
Between 1995 and 1997 Honduras recognized domestic violence as both a public health issue and a punishable offense, due to efforts by the Pan American Health Organization (PAHO). PAHO's subcommittee on Women, Health and Development was used as a guide to develop programs that aid in domestic violence prevention and victim assistance. However, a study done in 2009 showed that while the policy requires health care providers to report cases of sexual violence, provide emergency contraception, and refer victims to legal institutions and support groups, very few other regulations exist within the realm of registry, examination and follow-up. Unlike other Central American countries such as El Salvador, Guatemala and Nicaragua, Honduras does not have detailed guidelines requiring service providers to be extensively trained and to respect the rights of sexual violence victims. Since the study was done, the UNFPA and the Health Secretariat of Honduras have worked to develop and implement improved guidelines for handling cases of sexual violence.
An educational program in Honduras known as "Sistema de Aprendizaje Tutorial" (SAT) has attempted to "undo gender" through focusing on gender equality in everyday interactions. Honduras' SAT program is one of the largest in the world, second only to Colombia's with 6,000 students. It is currently sponsored by "Asociacion Bayan", a Honduran NGO, and the Honduran Ministry of Education. It functions by integrating gender into curriculum topics, linking gender to the ideas of justice and equality, encouraging reflection, dialogue and debate and emphasizing the need for individual and social change. This program was found to increase gender consciousness and a desire for gender equality amongst Honduran women through encouraging discourse surrounding existing gender inequality in the Honduran communities.
Spanish is the official, national language, spoken by virtually all Hondurans. In addition to Spanish, a number of indigenous languages are spoken in some small communities. Other languages spoken by some include Honduran sign language and Bay Islands Creole English.
The main indigenous languages are those of the Lenca, Miskito, Ch'orti', Tolupan, Pech and Sumo peoples.
The Lenca isolate lost all its fluent native speakers in the 20th century but is currently undergoing revival efforts among members of the ethnic population of about 100,000. The largest immigrant languages are Arabic (42,000), Armenian (1,300), Turkish (900), and Yue Chinese (1,000).
Although most Hondurans are nominally Roman Catholic (which would be considered the main religion), membership in the Roman Catholic Church is declining while membership in Protestant churches is increasing. The International Religious Freedom Report, 2008, notes that a CID Gallup poll reported that 51.4% of the population identified themselves as Catholic, 36.2% as evangelical Protestant, 1.3% as belonging to other religions (including Muslims, Buddhists, Jews and Rastafarians), and 11.1% as belonging to no religion or unresponsive; 8% reported being either atheist or agnostic. Customary Catholic Church tallies estimate membership at 81% Catholic; the priest in each of the more than 185 parishes is required to fill out a pastoral account of the parish each year.
The CIA Factbook lists Honduras as 97% Catholic and 3% Protestant. Commenting on statistical variations everywhere, John Green of Pew Forum on Religion and Public Life notes that: "It isn't that ... numbers are more right than [someone else's] numbers ... but how one conceptualizes the group." Often people attend one church without giving up their "home" church. Many who attend evangelical megachurches in the US, for example, attend more than one church. This shifting and fluidity is common in Brazil where two-fifths of those who were raised evangelical are no longer evangelical and Catholics seem to shift in and out of various churches, often while still remaining Catholic.
Most pollsters suggest an annual poll taken over a number of years would provide the best method of knowing religious demographics and variations in any single country. Still, Anglican, Presbyterian, Methodist, Seventh-day Adventist, Lutheran, Latter-day Saint (Mormon) and Pentecostal churches are thriving in Honduras, and there are Protestant seminaries. The Catholic Church, still the only "church" that is officially recognized, is also thriving in the number of schools, hospitals, and pastoral institutions (including its own medical school) that it operates. Its archbishop, Óscar Andrés Rodríguez Maradiaga, is very popular with the government, with other churches, and within his own church. Practitioners of Buddhism, Judaism, Islam, the Bahá'í Faith, Rastafari and indigenous religions also exist.
See Health in Honduras
About 83.6% of the population are literate and the net primary enrollment rate was 94% in 2004. In 2014, the primary school "completion" rate was 90.7%. Honduras has bilingual (Spanish and English) and even trilingual (Spanish with English, Arabic, or German) schools and numerous universities.
Higher education is governed by the National Autonomous University of Honduras, which has centers in the most important cities of Honduras.
Crime in Honduras is rampant and criminals operate with a high degree of impunity. Honduras has one of the highest murder rates in the world. Official statistics from the Honduran Observatory on National Violence show Honduras' homicide rate was 60 per 100,000 in 2015 with the majority of homicide cases unprosecuted.
Highway assaults and carjackings at roadblocks or checkpoints set up by criminals with police uniforms and equipment occur frequently. Although reports of kidnappings of foreigners are not common, families of kidnapping victims often pay ransoms without reporting the crime to police out of fear of retribution, so kidnapping figures may be underreported.
Owing to measures taken by government and business in 2014 to improve tourist safety, Roatan and the Bay Islands have lower crime rates than the Honduran mainland.
In the less populated region of Gracias a Dios, narcotics-trafficking is rampant and police presence is scarce. Threats against U.S. citizens by drug traffickers and other criminal organizations have resulted in the U.S. Embassy placing restrictions on the travel of U.S. officials through the region.
The most renowned Honduran painter is José Antonio Velásquez. Other important painters include Carlos Garay, and Roque Zelaya. Some of Honduras' most notable writers are Lucila Gamero de Medina, Froylán Turcios, Ramón Amaya Amador and Juan Pablo Suazo Euceda, Marco Antonio Rosa, Roberto Sosa, Eduardo Bähr, Amanda Castro, Javier Abril Espinoza, Teófilo Trejo, and Roberto Quesada.
The José Francisco Saybe theater in San Pedro Sula is home to the Círculo Teatral Sampedrano (Theatrical Circle of San Pedro Sula).
Honduras has experienced a boom in its film industry over the past two decades. Since the premiere of the movie "Anita la cazadora de insectos" in 2001, the level of Honduran productions has increased, with many made in collaboration with countries such as Mexico, Colombia, and the U.S. The best-known Honduran films are "El Xendra", "Amor y Frijoles", and "Cafe con aroma a mi tierra".
Honduran cuisine is a fusion of indigenous Lenca cuisine, Spanish cuisine, Caribbean cuisine and African cuisine. There are also dishes from the Garifuna people. Coconut and coconut milk are featured in both sweet and savory dishes. Regional specialties include fried fish, tamales, carne asada and baleadas.
Other popular dishes include: meat roasted with chismol and carne asada, chicken with rice and corn, and fried fish with pickled onions and jalapeños. Some of the ways seafood and some meats are prepared in coastal areas and in the Bay Islands involve coconut milk.
The soups Hondurans enjoy include bean soup, mondongo soup (tripe soup), seafood soups and beef soups. Generally these soups are served mixed with plantains, yuca, and cabbage, and served with corn tortillas.
Other typical dishes are the montucas or corn tamales, stuffed tortillas, and tamales wrapped in plantain leaves. Honduran typical dishes also include an abundant selection of tropical fruits such as papaya, pineapple, plum, sapote, passion fruit and bananas which are prepared in many ways while they are still green.
At least half of Honduran households have at least one television. Public television has a far smaller role than in most other countries. Honduras' main newspapers are La Prensa, El Heraldo, La Tribuna and Diario Tiempo. The official newspaper is .
Punta is the main music of Honduras, with other sounds such as Caribbean salsa, merengue, reggae, and reggaeton all widely heard, especially in the north, and Mexican rancheras heard in the rural interior of the country. The most well known musicians are Guillermo Anderson and Polache.
Some of Honduras' national holidays include Honduras Independence Day on 15 September and Children's Day or Día del Niño, which is celebrated in homes, schools and churches on 10 September; on this day, children receive presents and have parties similar to Christmas or birthday celebrations. Some neighborhoods have piñatas on the street. Other holidays are Easter, Maundy Thursday, Good Friday, Day of the Soldier (3 October to celebrate the birth of Francisco Morazán), Christmas, El Dia de Lempira on 20 July, and New Year's Eve.
Honduras Independence Day festivities start early in the morning with marching bands. Each band wears different colors and features cheerleaders. Fiesta Catracha takes place this same day: typical Honduran foods such as beans, tamales, baleadas, cassava with chicharrón, and tortillas are offered.
On Christmas Eve people reunite with their families and close friends to have dinner, then give out presents at midnight. In some cities fireworks are seen and heard at midnight. On New Year's Eve there is food and "cohetes", fireworks and festivities. Birthdays are also great events, and include piñatas filled with candies and surprises for the children.
La Ceiba Carnival is celebrated in La Ceiba, a city located in the north coast, in the second half of May to celebrate the day of the city's patron saint Saint Isidore. People from all over the world come for one week of festivities. Every night there is a little carnaval (carnavalito) in a neighborhood. On Saturday there is a big parade with floats and displays with people from many countries. This celebration is also accompanied by the Milk Fair, where many Hondurans come to show off their farm products and animals.
The flag of Honduras is composed of three equal horizontal stripes. The blue upper and lower stripes represent the Pacific Ocean and the Caribbean Sea. The central stripe is white. It contains five blue stars representing the five states of the Central American Union. The middle star represents Honduras, located in the center of the Central American Union.
The coat of arms was established in 1945. It is an equilateral triangle; at the base is a volcano between three castles, over which a rainbow arches with the sun shining. The triangle is placed on an area symbolizing being bathed by both seas. Around all of this is an oval containing, in golden lettering: "Republic of Honduras, Free, Sovereign and Independent".
The "National Anthem of Honduras" is the result of a contest carried out in 1914 during the presidency of Manuel Bonilla. In the end, it was the poet Augusto Coello who ended up writing the anthem, with German-born Honduran composer Carlos Hartling writing the music. The anthem was officially adopted on 15 November 1915, during the presidency of . The anthem is composed of a chorus and seven stanzas.
The national flower is the famous orchid, "Rhyncholaelia digbyana" (formerly known as "Brassavola digbyana"), which replaced the rose in 1969. The change of the national flower was carried out during the administration of general Oswaldo López Arellano, thinking that "Brassavola digbyana" "is an indigenous plant of Honduras; having this flower exceptional characteristics of beauty, vigor and distinction", as the decree dictates it.
The national tree of Honduras was declared in 1928 to be simply "the Pine that appears symbolically in our Coat of Arms" ("el Pino que figura simbólicamente en nuestro Escudo"), even though pines comprise a genus and not a species, and even though legally there is no specification of what kind of pine should appear in the coat of arms either. Because of its commonality in the country, the "Pinus oocarpa" species has since become the one most strongly associated with the national tree, though legally it is not so. Another species associated with the national tree is the "Pinus caribaea".
The national mammal is the white-tailed deer ("Odocoileus virginianus"), which was adopted as a measure to avoid excessive depredation. It is one of two species of deer that live in Honduras.
The national bird of Honduras is the scarlet macaw ("Ara macao"). This bird was much valued by the pre-Columbian civilizations of Honduras.
Legends and fairy tales are paramount in Honduran culture. Lluvia de Peces (Rain of Fish) is an example of this. The legends of El Cadejo and La Llorona are also popular.
The major sports in Honduras are football, basketball, rugby, volleyball and cycling, with smaller followings for athletics, softball and handball. Information about some of the sports organisations in Honduras is listed below:
History of Honduras
Honduras was already occupied by many indigenous peoples when the Spanish arrived in the 16th century. The western-central part of Honduras was inhabited by the Lencas, the central north coast by the Tol, the area east and west of Trujillo by the Pech (or Paya), the Maya and Sumo. These autonomous groups maintained commercial relationships with each other and with other populations as distant as Panama and Mexico.
Archaeologists have demonstrated that Honduras has a multi-ethnic prehistory. An important part of that prehistory was the Mayan presence around the city of Copán in western Honduras, near the Guatemalan border. Copán was a major Maya city that began to flourish around 150 A.D. and reached its height in the Late Classic (700–850 A.D.). It has left behind many carved inscriptions and stelae. The ancient kingdom, named "Xukpi", existed from the 5th century to the early 9th century, with antecedents going back to at least the 2nd century. Another Mayan city was El Puente, whose ruins lie a few kilometers from Copán; it was conquered by Copán during the Classic period.
The Mayan civilization began a marked decline in population in the 9th century, but there is evidence of people still living in and around the city until at least 1200. By the time the Spanish came to Honduras, the once great city-state of Copán was overrun by the jungle, and the surviving Ch’orti' were isolated from their Choltian linguistic peers to the west. The non-Maya Lencas were then dominant in western Honduras.
Many other regions were host to large societies. Archaeological sites include , La Sierra, and Playa de los Muertos in the northwest (thought to have been populated by Western Jicaque speakers), Los Naranjos north of Lake Yojoa, and Yarumela in the Comayagua valley (thought to have been Lenca).
Honduras was mainly part of Mesoamerica, and was home to complex settled societies for several thousand consecutive years, just as the other neighboring regions were. It is clear that neighboring Maya societies and more distant Central Mexican societies were a major influence on Honduran communities, both through trade (especially with the Maya civilization, and, during the Formative Period, the Olmec civilization) and occasionally migration. For example, during internal conflict in the late Toltec Empire, around 1000 to 1100 AD, Nahuatl-speakers migrated from Central Mexico and dispersed into different parts of Central America, including Honduras, especially Chapagua. In present-day El Salvador, they became the Pipil and founded Kuskatan, and in Nicaragua, they became the Nicarao.
La Ciudad Blanca is the major exception; it lies on the very fringe of Mesoamerica and is better described in relation to the Isthmo-Colombian area. This civilization thrived from 500 AD to 1000 AD and included sophisticated management of the environment alongside large urban centers.
Honduras was first sighted by Europeans when Christopher Columbus arrived at the Bay Islands on 30 July 1502 on his fourth voyage. On 14 August 1502 Columbus landed on the mainland near modern Trujillo. Columbus named the country Honduras ("depths") for the deep waters off its coast.
In January 1524, Hernán Cortés directed captain Cristóbal de Olid to establish a colony in Honduras. Olid sailed with several ships and over 400 soldiers and colonists to Cuba to pick up supplies Cortés had arranged for him. There Governor Diego Velázquez de Cuéllar convinced him to claim the colony he was to found as his own. Olid sailed to the coast of Honduras and came ashore east of Puerto Caballos at Triunfo de la Cruz where he settled and declared himself governor. Cortés got word of Olid's insurrection however, and sent his cousin Francisco de las Casas with several ships to Honduras to remove Olid and claim the area for Cortés. Las Casas, however, lost most of his fleet in a series of storms along the coast of Belize and Honduras. His ships limped into the bay at Triunfo, where Olid had established his headquarters.
When Las Casas arrived at Olid's headquarters, a large part of Olid's army was inland, dealing with another threat from a party of Spaniards under Gil González Dávila. Nevertheless, Olid decided to launch an attack with two caravels. Las Casas returned fire and sent boarding parties to capture Olid's ships. Under the circumstances, Olid proposed a truce. Las Casas agreed, and did not land his forces. During the night, a fierce storm destroyed his fleet and about a third of his men were lost. The remainder were taken prisoner after two days of exposure and no food. After being forced to swear loyalty to Olid, they were released. But Las Casas was kept prisoner, and soon joined by González, who had been captured by Olid's inland force.
The Spanish record two different stories about what happened next. Antonio de Herrera y Tordesillas, writing in the 17th century, said that Olid's soldiers rose up and murdered him. Bernal Díaz del Castillo, in his "Verdadera Historia de la Conquista de Nueva España", says that Las Casas captured Olid and beheaded him at Naco. In the meantime Cortés marched overland from Mexico to Honduras, arriving in 1525. Cortés ordered the founding of two cities, Nuestra Señora de la Navidad near modern Puerto Cortés, and Trujillo, and named Las Casas governor. However, both Las Casas and Cortés sailed back to Mexico before the end of 1525, where Las Casas was arrested and returned to Spain as a prisoner by Estrada and Albornoz. Las Casas returned to Mexico in 1527, and went back to Spain with Cortés in 1528.
On 25 April 1526, before going back to Mexico, Cortés appointed Hernando de Saavedra governor of Honduras with instructions to treat the indigenous people well. On 26 October 1526, Diego López de Salcedo was appointed by the emperor as governor of Honduras, replacing Saavedra. The next decade was marked by clashes between the personal ambitions of the rulers and conquerors, which hindered the installation of good government. The Spanish colonists rebelled against their leaders, and the indigenous people rebelled against the Spanish and against the abuses they imposed.
Salcedo, seeking to enrich himself, seriously clashed with Pedro Arias Dávila, governor of Castilla del Oro, who wanted Honduras as part of his domains. In 1528, Salcedo arrested Pedrarias and forced him to cede part of his Honduran domain, but Charles V, Holy Roman Emperor, rejected that outcome. After the death of Salcedo in 1530, settlers became the arbiters of power, naming and removing governors. In this situation, the settlers asked Pedro de Alvarado to end the anarchy. With the arrival of Alvarado in 1536, chaos decreased and the region was brought back under authority.
In 1537 Francisco de Montejo was appointed governor. On arriving in Honduras he set aside the division of territory made by Alvarado. One of his principal captains, Alonso de Cáceres, quelled the indigenous revolt led by the cacique Lempira in 1537 and 1538. In 1539 Alvarado and Montejo disagreed over who was governor, which caught the attention of the Council of the Indies. Montejo went to Chiapas, and Alvarado became governor of Honduras.
During the period leading up to the conquest of Honduras by Pedro de Alvarado, many indigenous people along the north coast of Honduras were captured and taken as slaves to work on Spain's Caribbean plantations. It wasn't until Alvarado defeated the indigenous resistance headed by Çocamba near Ticamaya that the Spanish began to conquer the country in 1536. Alvarado divided the native towns and gave their labor to the Spanish conquistadors as "repartimiento". Further indigenous uprisings near Gracias a Dios, Comayagua, and Olancho occurred in 1537–38. The uprising near Gracias a Dios was led by Lempira, who is honored today by the name of the Honduran currency.
The defeat of Lempira's revolt and the decline in fighting among rival Spanish factions both contributed to expanded settlement and increased economic activity in Honduras. In the late 1540s, Honduras looked to be heading toward development and prosperity, thanks to the establishment of Gracias as the regional capital of the Audiencia of Guatemala (1544). However, this decision created resentment in the more populated areas of Guatemala and El Salvador. In 1549 the capital was moved to Antigua, Guatemala, and Honduras remained a province within the Captaincy General of Guatemala until 1821.
The first mining centers were located near the Guatemalan border, around Gracias. In 1538 these mines produced significant quantities of gold. In the early 1540s, the center for mining shifted eastward to the Río Guayape Valley, and silver joined gold as a major product. This change contributed to the rapid decline of Gracias and the rise of Comayagua as the center of colonial Honduras. The demand for labor also led to further revolts and accelerated the decimation of the native population. As a result, African slavery was introduced into Honduras, and by 1545 the province may have had as many as 2,000 slaves. Other gold deposits were found near San Pedro Sula and the port of Trujillo.
Mining production began to decline in 1560, and with it the importance of Honduras. In early 1569, new silver discoveries briefly revived the economy and led to the founding of Tegucigalpa, which soon began to rival Comayagua as the most important city of the province. The silver boom peaked in 1584, and economic depression returned shortly thereafter. Honduran mining efforts were hampered by a lack of capital and labor and by difficult terrain; mercury, vital for the production of silver, was scarce, and official neglect compounded the problem.
While the Spanish made significant conquests in the south, they had less success on the Caribbean coast, to the north. They founded a number of towns on the coast such as Puerto Caballos in the east, and sent minerals and other exports across the country from the Pacific coast to be shipped to Spain from the Atlantic ports. They founded a number of inland towns on the northwestern side of the province, notably Naco and San Pedro Sula.
In the northeast, the province of Taguzgalpa resisted all attempts to conquer it, militarily in the sixteenth century and spiritually by missionaries in the 17th and 18th centuries. Among the groups found along the northern coast and in neighboring Nicaragua were the Miskito, who, although organized in a democratic and egalitarian way, had a king, and hence were known as the Mosquito Kingdom.
One of the major problems for the Spanish rulers of Honduras was the activity of the British in northern Honduras, a region over which the Spanish had only tenuous control. These activities began in the sixteenth century and continued until the nineteenth century. In the early years, European pirates frequently attacked the villages on the Honduran Caribbean. The Providence Island Company, which occupied Providence Island not far off the coast, raided the shore occasionally and probably also had settlements there, possibly around Cape Gracias a Dios. Around 1638, the king of the Miskito visited England and made an alliance with the English crown. In 1643 an English expedition destroyed the city of Trujillo, Honduras's main port.
The Spanish sent a fleet from Cartagena which destroyed the English colony at Providence island in 1641, and for a time the presence of an English base so close to the shore was eliminated. At about the same time, however, a group of slaves revolted and captured the ship on which they were traveling, then wrecked it at Cape Gracias a Dios. Managing to get ashore, they were received by the Miskito, which led within a generation to the Miskito Zambo, a mixed-race group that by 1715 had become the leaders of the kingdom.
Meanwhile, the English captured Jamaica in 1655 and soon sought allies on the coast, and hit upon the Miskito, whose king Jeremy I visited Jamaica in 1687.
A variety of other Europeans settled in the area during this time. An account from 1699 reveals a patchwork of private individuals, large Miskito family groups, Spanish settlements and pirate hideouts along the coast. Britain declared much of the area a protectorate in 1740, though they exercised little authority there as a result of the decision. British colonization was particularly strong in the Bay Islands, and alliances between the British and Miskito as well as more local supporters made this an area the Spanish could not easily control, and a haven for pirates.
In the early eighteenth century, the House of Bourbon, linked to the rulers of France, replaced the Habsburgs on the throne of Spain. The new dynasty began a series of reforms throughout the empire (the Bourbon Reforms), designed to make administration more efficient and profitable, and to facilitate defense of the colonies. Among these reforms was a reduction in tax on precious metals and of the price of mercury, a royal monopoly. In Honduras, these reforms contributed to the resurgence of the mining industry in the 1730s.
Under the Bourbons, the Spanish government made several efforts to regain control of the Caribbean coast. In 1752, the Spaniards built the fort of San Fernando de Omoa. In 1780, the Spanish returned to Trujillo, which served as a base of operations against British settlements to the east. During the 1780s, the Spanish regained control of the Bay Islands and captured most of the British and their allies in the Black River area. They were not, however, able to expand their control beyond Puerto Caballos and Trujillo, thanks to determined Miskito resistance. The Anglo-Spanish Convention of 1786 contained the final recognition of Spanish sovereignty over the Caribbean coast.
In the early 19th century, Napoleon's occupation of Spain led to the outbreak of revolts all across Spanish America. In New Spain, all of the fighting by those seeking independence took place in the center of that area from 1810 to 1821, in what today is central Mexico. Once the Viceroy was defeated in the capital, Mexico City, in 1821, news of independence was sent to all the territories of New Spain, including the Intendencies of the former Captaincy of Guatemala. Accepting this as a fact, Honduras joined the other Central American Intendencies in a joint declaration of independence from Spain. The public proclamation was made through the Act of Independence in 1821.
After the declaration of independence it was the intention of the New Spain parliament to establish a commonwealth whereby the King of Spain, Ferdinand VII, would also be Emperor of New Spain, and in which both countries were to be governed by separate laws and with their own legislative offices. Should the king refuse the position, the law provided for a member of the House of Bourbon to accede to the New Spain throne. Ferdinand VII did not recognize the independence and said that Spain would not allow any other European prince to take the throne of New Spain.
At the request of Parliament, the president of the regency, Agustín de Iturbide, was proclaimed emperor of New Spain, and Parliament also decided to rename New Spain as Mexico. The Mexican Empire was the official name given to this monarchical regime from 1821 to 1823. The territory of the Mexican Empire included the continental intendencies and provinces of New Spain proper, including those of the former Captaincy General of Guatemala (see: History of Central America).
In 1823, a revolution in Mexico ousted Emperor Agustín de Iturbide, and a new Mexican congress voted to allow the Central American Intendencies to decide their own fate. That year, the United Provinces of Central America was formed of the five Central American Intendencies under General Manuel José Arce. The intendencies took the new name of "states".
Among the most important figures of the federal era were Dionisio de Herrera, a lawyer and the first democratically elected president of Honduras, whose government, begun in 1824, established the first constitution; General Francisco Morazán, federal president in 1830–1834 and 1835–1839, who embodied the Central American unionist ideal; and José Cecilio del Valle, editor of the Declaration of Independence signed in Guatemala on 15 September 1821 and foreign minister of Mexico in 1823.
Soon, social and economic differences between Honduras and its regional neighbors exacerbated harsh partisan strife among Central American leaders and brought about the collapse of the Federation in 1838–1839. General Morazán led unsuccessful efforts to maintain the federation. Restoring Central American unity remained the officially stated chief aim of Honduran foreign policy until after World War I.
Honduras broke away from the Central American Federation in October 1838 and became an independent and sovereign state.
Comayagua was the capital of Honduras until 1880, when the capital was transferred to Tegucigalpa.
During the 1840s and 1850s, Honduras participated in several failed attempts to restore Central American unity, such as the Confederation of Central America (1842–1845), the covenant of Guatemala (1842), the Diet of Sonsonate (1846), the Diet of Nacaome (1847) and the National Representation in Central America (1849–1852).
Although Honduras eventually adopted the name Republic of Honduras, the unionist ideal never waned, and Honduras was one of the Central American countries that pushed hardest for the policy of regional unity.
In 1850, Honduras attempted to build, with foreign assistance, an Inter-Oceanic Railroad from Trujillo to Tegucigalpa and then on to the Pacific Coast. The project stalled due to difficulties in the work, corruption and other issues, and ran out of money in 1888 when the line reached San Pedro Sula; the railhead nonetheless spurred that city's growth into the nation's main industrial center and second largest city. Since independence, nearly 300 small internal rebellions and civil wars have occurred in the country, including some changes of government.
Political stability and instability both aided and distracted the economic revolution which transformed Honduras through the development of a plantation economy on the north coast. As American corporations consolidated increasingly large landholdings in Honduras, they lobbied the US government to protect their investments. Conflicts over land ownership, peasant rights, and a US-aligned comprador class of elites led to armed conflicts and multiple invasions by US armed forces. In the first decades of the century, US military incursions took place in 1903, 1907, 1911, 1912, 1919, 1924, and 1925. Because the country was effectively controlled by American fruit corporations, it was the original inspiration for the term "banana republic".
In 1899, the banana industry in Honduras was growing rapidly. A peaceful transfer of power from Policarpo Bonilla to General Terencio Sierra marked the first time in decades that a constitutional transition had taken place. By 1902, railroads had been built along the country's Caribbean coast to accommodate the growing banana industry. However, Sierra refused to step down when a new president was elected in 1902 and was overthrown by Manuel Bonilla in 1903.
After toppling Sierra, Bonilla, a conservative, imprisoned ex-president Policarpo Bonilla, a liberal rival, for two years and made other attempts to suppress liberals throughout the country, as they were the only other organized political party. The conservatives were divided into a host of personalist factions and lacked coherent leadership, but Bonilla reorganized the conservatives into a "national party." The present-day National Party of Honduras (Partido Nacional de Honduras—PNH) traces its origins to his administration.
Bonilla proved to be an even better friend to the banana companies than Sierra. Under Bonilla's rule, companies gained exemptions from taxes and permission to construct wharves and roads, as well as permission to improve interior waterways and to obtain charters for new railroad construction. He also successfully established the border with Nicaragua and resisted an invasion from Guatemala in 1906. After fending off Guatemalan military forces, Bonilla sought peace and signed a friendship pact with both Guatemala and El Salvador.
Nicaragua's president José Santos Zelaya saw this friendship pact as an alliance to counter Nicaragua and began to undermine Bonilla. Zelaya supported liberal Honduran exiles in Nicaragua in their efforts to topple Bonilla, who had established himself as a dictator. Supported by elements of the Nicaraguan army, the exiles invaded Honduras in February 1907. With the assistance of Salvadoran troops, Manuel Bonilla tried to resist, but in March his forces were decisively beaten in a battle notable for the introduction of machine guns into Central America. After toppling Bonilla, the exiles established a provisional junta, but this junta did not last.
American elites took notice: it was in their interests to contain Zelaya, protect the region of the new Panama Canal, and defend the increasingly important banana trade. The Nicaragua-assisted invasion by Honduran exiles strongly displeased the United States government, which concluded that Zelaya wanted to dominate the entire Central American region and sent marines to Puerto Cortes to protect the banana trade. US naval units were also sent to Honduras and successfully defended Bonilla's last defensive position at Amapala in the Gulf of Fonseca. Through a peace settlement arranged by the US chargé d'affaires in Tegucigalpa, Bonilla stepped down and the war with Nicaragua came to an end.
The settlement also provided for a compromise régime headed by General Miguel R. Davila in Tegucigalpa. Zelaya however was not pleased by the settlement, as he strongly distrusted Davila. Zelaya made a secret arrangement with El Salvador to oust Davila from office. The plan failed to reach fruition, but alarmed American stakeholders in Honduras. Mexico and the U.S. called the five Central American countries into diplomatic talks at the Central American Peace Conference to increase stability in the area. At the conference, the five countries signed the General Treaty of Peace and Amity of 1907, which established the Central American Court of Justice to resolve future disputes among the five nations. Honduras also agreed to become permanently neutral in any future conflicts among the other nations.
In 1908, opponents of Davila made an unsuccessful attempt to overthrow him. Despite the failure of this coup, American elites became concerned over Honduran instability. The Taft Administration saw the huge Honduran debt, over $120 million, as a contributing factor to the instability and began efforts to refinance the largely British debt with provisions for a United States customs receivership or some similar arrangement. Negotiations were arranged between Honduran representatives and New York bankers, headed by J.P. Morgan. By the end of 1909, an agreement had been reached providing for a reduction in the debt and the issuance of new 5% bonds: the bankers would control the Honduran railroad, and the United States government would guarantee continued Honduran independence and would take control of customs revenue.
The terms proposed by the bankers met with considerable opposition in Honduras, further weakening the Dávila government. A treaty incorporating the key provisions of this agreement with J.P. Morgan was finally signed in January 1911 and submitted to the Honduran legislature by Dávila. However, that body, in a rare display of independence, rejected it by a vote of thirty-three to five.
An uprising in 1911 against Dávila interrupted efforts to deal with the debt problem. The United States Marines landed, which forced both sides to meet on a US warship. The revolutionaries, headed by former president Manuel Bonilla, and the government agreed to a cease-fire and the installation of a provisional president who would be selected by the United States mediator, Thomas Dawson. Dawson selected Francisco Bertrand, who promised to hold early, free elections, and Dávila resigned.
The 1912 elections were won by Manuel Bonilla, but he died after just over a year in office. Bertrand, who had been his vice president, returned to the presidency and in 1916 won election for a term that lasted until 1920. Between 1911 and 1920, Honduras saw relative stability. Railroads expanded throughout Honduras and the banana trade grew rapidly. This stability however would prove to be difficult to maintain in the years following 1920. Revolutionary intrigues also continued throughout the period, accompanied by constant rumors that one faction or another was being supported by one of the banana companies.
The development of the banana industry contributed to the beginnings of organized labor movements in Honduras and to the first major strikes in the nation's history. The first of these occurred in 1917 against the Cuyamel Fruit Company. The strike was suppressed by the Honduran military, but the following year additional labor disturbances occurred at the Standard Fruit Company's holding in La Ceiba. In 1920, a general strike hit the Caribbean coast. In response, a United States warship was sent to the area, and the Honduran government began arresting leaders. When Standard Fruit offered a new wage—equivalent to US$1.75 per day—the strike ultimately collapsed. Labor troubles in the banana trade however were far from over.
The Liberal government opted to expand production in mining and agriculture, and in 1876 began granting substantial tracts of land and tax exemptions to foreign concerns as well as to local businesses. Mining was particularly important, and the new policies coincided with the growth of banana exports, which began in the Bay Islands in the 1870s and was pursued on the mainland by small and middling farmers in the 1880s. Liberal concessions allowed U.S.-based concerns to enter the Honduran market, first as shipping companies, then as railroad and banana producing enterprises. The U.S. companies created very large plantations worked by labor that flooded into the region from the densely settled Pacific coast, other Central American countries, and, thanks to the companies' policies favoring English-speaking workers, from the English-speaking Caribbean. The result was an enclave economy centered on the settlements and activities of the three major companies: Cuyamel Fruit Company, Standard Fruit and particularly United Fruit after it absorbed Cuyamel in 1930.
In 1899, Vaccaro Brothers and Company (later known as Standard Fruit), a New Orleans-based fruit corporation, came to Honduras to buy coconuts, oranges and bananas on Roatán. After successfully selling the fruit in New Orleans, the company moved to the mainland of Honduras. In 1901, Vaccaro Brothers established offices in La Ceiba and Salado and eventually controlled the banana industry between Boca Cerrada and Balfate (an area of about 80 kilometers of coastline). In 1900, American businessman Samuel Zemurray and United Fruit came to Honduras to purchase banana plantations. In 1905, Zemurray started buying his own plantations, and in 1910, after purchasing plantation land in Honduras, he formed his own company, the Cuyamel Fruit Company. The two companies' wealth and powerful connections allowed them to gain extraordinary influence in the Honduran government.
Rivalries between the companies, however, escalated in 1910, when United Fruit came to Honduras to set up operations; the company had already been a local producer of bananas in Honduras. By 1912, United Fruit had two concessions it had purchased with government approval. One was to build a railroad from Tela to Progreso in the Sula Valley, and the other was to build a railroad from Trujillo to the city of Juticalpa in Olancho. In 1913, United Fruit established the Tela Railroad Company and shortly thereafter a similar subsidiary, the Trujillo Railroad Company; these two railroads managed the concessions which the Honduran government granted them. Through these two railroad companies, United Fruit dominated the banana trade in Honduras.
An 1899 census showed that northern Honduras had been exporting bananas for several years and that over 1,000 people in the region between Puerto Cortes and La Ceiba (and inland as far as San Pedro Sula) were tending bananas, most of them small holders. The fruit companies received very large concessions of land, often forcing small holders who had been growing and exporting bananas on their land out of business. In addition, they brought in many workers from Jamaica and Belize, both to work on the plantations and to serve as lower managers and skilled workers. The companies often favored the West Indian workers because they spoke English and were sometimes better educated than their Honduran counterparts. This perception of foreign occupation, coupled with a growing race prejudice against the African-descended West Indians, led to considerable tension, as the arrival of the West Indians drove demographic change in the region.
The connection between the wealth of the banana trade and the influence of outsiders, particularly North Americans, led O. Henry, the American writer who took temporary refuge in Honduras in 1896–97, to coin the term "banana republic" to describe a fictional nation he modeled on Honduras. By 1912, three companies dominated the banana trade in Honduras: Samuel Zemurray's Cuyamel Fruit Company, Vaccaro Brothers and Company and the United Fruit Company; all of which tended to be vertically integrated, owning their own lands and railroad companies and ship lines such as United's "Great White Fleet". Through land subsidies granted to the railroads, they soon came to control vast tracts of the best land along the Caribbean coast. Coastal cities such as La Ceiba, Tela, and Trujillo and towns further inland such as El Progreso and La Lima became virtual company towns.
For the next twenty years, the U.S. government was involved in quelling Central American disputes, insurrections, and revolutions, whether supported by neighboring governments or by United States companies. As part of the so-called Banana Wars all around the Caribbean, Honduras saw the landing of American troops in 1903, 1907, 1911, 1912, 1919, 1924 and 1925. For instance, in 1917 the Cuyamel Fruit Company extended its rail lines into disputed Guatemalan territory.
In 1919, it became obvious that Francisco Bertrand would refuse to allow an open election to choose his successor. This course of action was opposed by the United States and had little popular support in Honduras. The local military commander and governor of Tegucigalpa, General Rafael López Gutiérrez, took the lead in organizing PLH opposition to Bertrand. López Gutiérrez also solicited support from the liberal government of Guatemala and even from the conservative regime in Nicaragua. Bertrand, in turn, sought support from El Salvador.
Determined to avoid an international conflict, the United States government, after some hesitation, offered to mediate the dispute, hinting to the Honduran president that if he refused the offer, open intervention might follow. The United States landed US Marines on 11 September 1919. Bertrand promptly resigned and left the country. The United States ambassador helped install an interim government headed by Francisco Bográn, who promised to hold free elections. General López Gutiérrez, who now controlled the military, made it clear that he was determined to be the next president. After considerable negotiation and some confusion, a formula was worked out under which elections were held. López Gutiérrez won easily in a manipulated election, and in October 1920 he assumed the presidency.
During Bográn's brief time in office, he had agreed to a United States proposal to invite a United States financial adviser to Honduras. Arthur N. Young of the Department of State was selected for this task and began work in Honduras in August 1920, continuing to August 1921. While there, Young compiled extensive data and made numerous recommendations, even persuading the Hondurans to hire a New York police lieutenant to reorganize their police forces. Young's investigations clearly demonstrated the desperate need for major financial reforms in Honduras, whose always precarious budgetary situation was considerably worsened by the renewal of revolutionary activities.
In 1919, for example, the military had spent more than double the amount budgeted for them, accounting for over 57 percent of all federal expenditures. Young's recommendations for reducing the military budget, however, found little favor with the new López Gutiérrez administration, and the government's financial condition remained a major problem. If anything, continued uprisings against the government and the threat of a renewed Central America conflict made the situation even worse. From 1919 to 1924, the Honduran government expended US$7.2 million beyond the amount covered by the regular budgets for military operations.
From 1920 through 1923, seventeen uprisings or attempted coups in Honduras contributed to growing United States concern over political instability in Central America. In August 1922, the presidents of Honduras, Nicaragua, and El Salvador met on the USS Tacoma in the Gulf of Fonseca. Under the watchful eye of the United States ambassadors to their nations, the presidents pledged to prevent their territories from being used to promote revolutions against their neighbors and issued a call for a general meeting of Central American states in Washington at the end of the year.
The Washington conference concluded in February with the adoption of the General Treaty of Peace and Amity of 1923, which had eleven supplemental conventions. The treaty in many ways followed the provisions of the 1907 treaty. The Central American court was reorganized, reducing the influence of the various governments over its membership. The clause providing for withholding recognition of revolutionary governments was expanded to preclude recognition of any revolutionary leader, his relatives, or anyone who had been in power six months before or after such an uprising unless the individual's claim to power had been ratified by free elections. The governments renewed their pledges to refrain from aiding revolutionary movements against their neighbors and to seek peaceful resolution for all outstanding disputes.
The supplemental conventions covered everything from the promotion of agriculture to armament limitation. One, which remained unratified, provided for free trade among all of the states except Costa Rica. The arms limitation agreement set a ceiling on the size of each nation's military forces (2,500 men in the case of Honduras) and included a United States-sponsored pledge to seek foreign assistance in establishing more professional armed forces.
The October 1923 Honduran presidential elections and subsequent political and military conflicts provided the first real tests of these new treaty arrangements. Under heavy pressure from Washington, López Gutiérrez allowed an unusually open campaign and election. The long-fragmented conservatives reunited as the National Party of Honduras (Partido Nacional de Honduras—PNH), which ran as its candidate General Tiburcio Carías Andino, the governor of the department of Cortés.
The liberal PLH was unable to unite around a single candidate and split into two dissident groups, one supporting former president Policarpo Bonilla, the other advancing the candidacy of Juan Angel Arias. As a result, no candidate secured a majority. Carías received the greatest number of votes, with Bonilla second and Arias a distant third. Under the terms of the Honduran constitution, this stalemate left the final choice of president up to the legislature, but that body was unable to obtain a quorum and reach a decision.
In January 1924, López Gutiérrez announced his intention to remain in office until new elections could be held, but he repeatedly refused to specify a date for the elections. Carías, reportedly with the support of United Fruit, declared himself president, and an armed conflict broke out. In February the United States, warning that recognition would be withheld from anyone coming to power by revolutionary means, suspended relations with the López Gutiérrez government for its failure to hold elections.
Conditions rapidly deteriorated in the early months of 1924. On 28 February, a pitched battle took place in La Ceiba between government troops and rebels. Even the presence of the USS Denver and the landing of a force of United States Marines were unable to prevent widespread looting and arson resulting in over US$2 million in property damage. Fifty people, including a United States citizen, were killed in the fighting. In the weeks that followed, additional vessels from the United States Navy Special Service Squadron were concentrated in Honduran waters, and landing parties put ashore to protect United States interests. One force of marines and sailors was dispatched inland to Tegucigalpa to provide additional protection for the United States legation. Shortly before the arrival of the force, López Gutiérrez died, and what authority remained with the central government was being exercised by his cabinet. General Carías and a variety of other rebel leaders controlled most of the countryside but failed to coordinate their activities effectively enough to seize the capital.
In an effort to end the fighting, the United States government dispatched Sumner Welles to the port of Amapala; he had instructions to try to produce a settlement that would bring to power a government eligible for recognition under the terms of the 1923 treaty. Negotiations, which were once again held on board a United States cruiser, lasted from 23 to 28 April. An agreement was worked out that provided for an interim presidency headed by General Vicente Tosta, who agreed to appoint a cabinet representing all political factions and to convene a Constituent Assembly within ninety days to restore constitutional order. Presidential elections were to be held as soon as possible, and Tosta promised to refrain from running himself. Once in office, the new president showed signs of reneging on some of his pledges, especially those related to a bipartisan cabinet. Under heavy pressure from the United States delegation, however, he ultimately complied with the provisions of the peace agreement.
Keeping the 1924 elections on track proved difficult. To put pressure on Tosta to conduct a fair election, the United States continued an embargo on arms to Honduras and barred the government from access to loans—including a requested US$75,000 from the Banco Atlántida. Furthermore, the United States persuaded El Salvador, Guatemala, and Nicaragua to join in declaring that under the 1923 treaty provision, no leader of the recent revolution would be recognized as president for the coming term. These pressures ultimately helped persuade Carías to withdraw his candidacy and also helped ensure the defeat of an uprising led by General Gregorio Ferrera (great-grandfather of American actress America Ferrera) of the PNH. The PNH nominated Miguel Paz Barahona (1925–29), a civilian, as president. The PLH, after some debate, refused to nominate a candidate, and on 28 December Paz Barahona won a virtually unanimous election.
Despite another minor uprising led by General Ferrera in 1925, Paz Barahona's administration was, by Honduran standards, rather tranquil. The banana companies continued to expand, the government's budgetary situation improved, and there was even an increase in labor organizing. On the international front, the Honduran government, after years of negotiations, finally concluded an agreement with the British bondholders to liquidate most of the immense national debt. The bonds were to be redeemed at 20 percent of face value over a thirty-year period. Back interest was forgiven, and new interest accrued only over the last fifteen years of this arrangement. Under the terms of this agreement, Honduras, at last, seemed on the road to fiscal solvency.
Fears of disturbances increased again in 1928 as the scheduled presidential elections approached. The ruling PNH nominated General Carías while the PLH, united again following the death of Policarpo Bonilla in 1926, nominated Vicente Mejía Colindres. To the surprise of most observers, both the campaign and the election were conducted with a minimum of violence and intimidation. Mejía Colindres won a decisive victory—obtaining 62,000 votes to 47,000 for Carías. Even more surprising was Carías's public acceptance of defeat and his urging of his supporters to accept the new government.
Mejía Colindres took office in 1929 with high hopes for his administration and his nation. Honduras seemed on the road to political and economic progress. Banana exports, then accounting for 80 percent of all exports, continued to expand. By 1930 Honduras had become the world's leading producer of the fruit, accounting for one-third of the world's supply of bananas. United Fruit had come increasingly to dominate the trade, and in 1929 it bought out the Cuyamel Fruit Company, one of its two principal remaining rivals. Because conflicts between these companies had frequently led to support for rival groups in Honduran politics, had produced a border controversy with Guatemala, and may have even contributed to revolutionary disturbances, this merger seemed to promise greater domestic tranquility. The prospect for tranquility was further advanced in 1931 when Ferrera and his insurgents, leading one last unsuccessful effort to overthrow the government, were killed after government troops discovered their hiding place in Chamelecon.
Many of Mejía Colindres's hopes, however, were dashed with the onset of the Great Depression. Banana exports peaked in 1930, then declined rapidly. Thousands of workers were laid off, and the wages of those remaining on the job were reduced, as were the prices paid to independent banana producers by the giant fruit companies. Strikes and other labor disturbances began to break out in response to these conditions, but most were quickly suppressed with the aid of government troops. As the depression deepened, the government's financial situation deteriorated; in 1931 Mejía Colindres was forced to borrow US$250,000 from the fruit companies to ensure that the army would continue to be paid.
Despite growing unrest and severe economic strains, the 1932 presidential elections in Honduras were relatively peaceful and fair. The peaceful transition of power was surprising because the onset of the depression had led to the overthrow of governments elsewhere throughout Latin America, in nations with much stronger democratic traditions than those of Honduras. After United Fruit bought out Cuyamel, Sam Zemurray, a strong supporter of the Liberal Party, left the country, and the Liberals were short on cash by the 1932 general election. Mejía Colindres, however, resisted pressure from his own party to manipulate the results to favor the PLH candidate. As a result, the PNH candidate, Carías, won the election by a margin of some 20,000 votes. On 16 November 1932, Carías took office, beginning what was to be the longest continuous period in power by any individual in Honduran history.
Shortly before Carías's inauguration, dissident liberals, despite the opposition of Mejía Colindres, had risen in revolt. Carías had taken command of the government forces, obtained arms from El Salvador, and crushed the uprising in short order. Most of Carías's first term in office was devoted to efforts to avoid financial collapse, improve the military, engage in a limited program of road building, and lay the foundations for prolonging his own hold on power.
The economy remained extremely bad throughout the 1930s. In addition to the dramatic drop in banana exports caused by the depression, the fruit industry was further threatened by the outbreak in 1935 of epidemics of Panama disease (a debilitating fungus) and sigatoka (leaf blight) in the banana-producing areas. Within a year, most of the country's production was threatened. Large areas, including most of those around Trujillo, were abandoned, and thousands of Hondurans were thrown out of work. By 1937 a means of controlling the disease had been found, but many of the affected areas remained out of production because a significant share of the market formerly held by Honduras had shifted to other nations.
Carías had made efforts to improve the military even before he became president. Once in office, both his capacity and his motivation to continue and to expand such improvements increased. He gave special attention to the fledgling air force, founding the Military Aviation School in 1934 and arranging for a United States colonel to serve as its commandant.
As months passed, Carías moved slowly but steadily to strengthen his hold on power. He gained the support of the banana companies through opposition to strikes and other labor disturbances. He strengthened his position with domestic and foreign financial circles through conservative economic policies. Even at the height of the depression, he continued to make regular payments on the Honduran debt, adhering strictly to the terms of the arrangement with the British bondholders and also satisfying other creditors. Two small loans were paid off completely in 1935.
Political controls were instituted slowly under Carías. The Communist Party of Honduras (Partido Comunista de Honduras—PCH) was outlawed, but the PLH continued to function, and even the leaders of a small uprising in 1935 were later offered free air transportation should they wish to return to Honduras from their exile abroad. At the end of 1935, however, stressing the need for peace and internal order, Carías began to crack down on the opposition press and political activities. Meanwhile, the PNH, at the president's direction, began a propaganda campaign stressing that only keeping Carías in office could give the nation continued peace and order. The constitution, however, prohibited immediate reelection of presidents.
To extend his term of office Carías called a constituent assembly to write a new constitution and select the individual to serve for the first presidential term under that document. Except for the president's desire to perpetuate himself in office, there seemed little reason to alter the nation's basic charter. Earlier constituent assemblies had written thirteen constitutions (only ten of which had entered into force), and the latest had been adopted in 1924. The handpicked Constituent Assembly of 1936 incorporated thirty of the articles of the 1924 document into the 1936 constitution.
The major changes were the elimination of the prohibition on immediate reelection of a president and vice president and lengthening the presidential term from four years to six. Other changes included restoration of the death penalty, reductions in the powers of the legislature, and denial of citizenship to women, and therefore also of the right to vote. Finally, the new constitution included an article specifying that the incumbent president and vice president would remain in office until 1943. But Carías, by then a virtual dictator, wanted even more, so in 1939 the legislature, now completely controlled by the PNH, extended his term in office by another six years (to 1949).
The PLH and other opponents of the government reacted to these changes by attempting to overthrow Carías. Numerous coup attempts in 1936 and 1937 succeeded only in further weakening the PNH's opponents. By the end of the 1930s, the PNH was the only organized political party still functioning in the nation. Numerous opposition leaders had been imprisoned, and some had reportedly been chained and put to work in the streets of Tegucigalpa. Others, including the leader of the PLH, Zúñiga Huete, had fled into exile.
During his presidency, Carías cultivated close relations with his fellow Central American dictators, generals Jorge Ubico in Guatemala, Maximiliano Hernández Martínez in El Salvador, and Anastasio Somoza García in Nicaragua. Relations were particularly close with Ubico, who helped Carías reorganize his secret police and also captured and shot the leader of a Honduran uprising who had made the mistake of crossing into Guatemalan territory. Relations with Nicaragua were somewhat more strained as a result of the continuing border dispute, but Carías and Somoza managed to keep this dispute under control throughout the 1930s and 1940s.
The value of these ties became somewhat questionable in 1944 when popular revolts in Guatemala and El Salvador deposed Ubico and Hernández Martínez. For a time, it seemed as if revolutionary contagion might spread to Honduras as well. A plot, involving some military officers as well as opposition civilians, had already been discovered and crushed in late 1943. In May 1944, a group of women began demonstrating outside of the Presidential Palace in Tegucigalpa, demanding the release of political prisoners.
Despite strong government measures, tension continued to grow, and Carías was ultimately forced to release some prisoners. This gesture failed to satisfy the opposition, and antigovernment demonstrations continued to spread. In July several demonstrators were killed by troops in San Pedro Sula. In October a group of exiles invaded Honduras from El Salvador but were unsuccessful in their efforts to topple the government. The military remained loyal, and Carías continued in office.
Anxious to curb further disorder in the region, the United States began to urge Carías to step aside and allow free elections when his term of office expired. Carías, by then in his early seventies, ultimately yielded and announced October 1948 elections, in which he would not run. He continued, however, to find ways to use his power. The PNH nominated Carías's choice for president – Juan Manuel Gálvez, who had been minister of war since 1933. Exiled opposition figures were allowed to return to Honduras, and the PLH, trying to overcome years of inactivity and division, nominated Zúñiga Huete, the same individual whom Carías had defeated in 1932. The PLH rapidly became convinced that it had no chance to win and, charging the government with manipulation of the electoral process, boycotted the elections. This act gave Gálvez a virtually unopposed victory, and in January 1949, he assumed the presidency.
Evaluating the Carías presidency is a difficult task. His time in office provided the nation with a badly needed period of relative peace and order. The country's fiscal situation improved steadily, education improved slightly, the road network expanded, and the armed forces were modernized. At the same time, nascent democratic institutions withered, opposition and labor activities were suppressed, and national interests at times were sacrificed to benefit supporters and relatives of Carías or major foreign interests.
Once in office Gálvez showed more independence than expected. He continued and expanded some policies of the Carías administration, such as road building and development of coffee exports. By 1953 nearly one-quarter of the government budget was allocated to road construction. Gálvez also continued most of the prior administration's fiscal policies, reducing external debt and paying off the last of the British bonds. The fruit companies continued to receive favorable treatment at the hands of the Gálvez administration; for example, United Fruit received a highly favorable twenty-five-year contract in 1949.
Gálvez, however, did institute some notable innovations. Education received more attention and a larger share of the national budget. Congress passed an income tax law, although enforcement was sporadic at best. A considerable degree of press freedom was restored, the PLH and other groups were allowed to organize, and some worker organization was permitted. Labor also benefited from legislation during this period. Congress passed, and the president signed, legislation establishing the eight-hour workday, paid holidays for workers, limited employer responsibility for work-related injuries, and regulations over the employment of women and children.
After the general strike in 1954, young military reformists staged a coup in October 1955 that installed a provisional junta. Capital punishment was abolished in 1956, though Honduras had not carried out an execution since 1940. A constituent assembly elected in 1957 appointed Ramón Villeda Morales president, and the assembly itself became a national Congress with a six-year term. The Liberal Party of Honduras (PLH) held power from 1957 to 1963. The military began to become a professional institution independent of politics, with the newly created military academy graduating its first class in 1960. In October 1963, conservative military officers preempted constitutional elections and deposed Villeda Morales in a bloody coup. These officers exiled PLH members and governed under General Oswaldo López until 1970.
In July 1969, El Salvador invaded Honduras in the short Football War. Tensions between the two countries remained in the aftermath of the conflict.
A civilian president from the PNH, Ramón Ernesto Cruz, took power briefly in 1970 until, in December 1972, López staged another coup. This time he adopted more progressive policies, including land reform.
López's successors continued armed forces modernization, building up the army and security forces and concentrating on Honduran air force superiority over its neighbors. During the governments of General Juan Alberto Melgar Castro (1975–78) and General Policarpo Paz García (1978–82), Honduras built most of its physical infrastructure and its electricity and terrestrial telecommunications systems, both state monopolies. The country experienced economic growth during this period, with greater international demand for its products and increased availability of foreign commercial capital.
In 1982, the country returned to civilian rule. A constituent assembly was popularly elected in April 1980 and general elections were held in November 1981. A new constitution was approved in 1982 and the PLH government of Roberto Suazo assumed power.
Roberto Suazo Córdova won the elections on an ambitious program of economic and social development to tackle the country's recession. During this time, Honduras also assisted the contra guerillas.
President Suazo launched ambitious social and economic development projects sponsored by American development aid. Honduras became host to the largest Peace Corps mission in the world, and nongovernmental and international voluntary agencies proliferated.
From 1972 to 1983, Honduras was governed by soldiers. The influence of the United States was so strong that the term "proconsul" was used to designate its ambassador. In the 1980s, the Reagan administration used the country as a platform in its war against the Sandinista government of Nicaragua and the leftist guerrillas of El Salvador and Guatemala. U.S. military assistance to Honduras increased from $4 million in 1981 to $77.4 million in 1984. While acknowledging internally that Honduran government forces committed "hundreds of human rights violations (...), most of them for political reasons", the CIA supported death squads, in particular Battalion 3–16, which tortured, murdered, or disappeared dozens of trade unionists, academics, farmers, and students. Subsequently declassified documents indicate that Ambassador John Negroponte personally intervened to prevent possible disclosures of these state crimes, in order to avoid "creating human rights problems in Honduras".
The United States established a continuing military presence in Honduras with the purpose of supporting the Contra guerrillas fighting the Nicaraguan government, and also developed an air strip and a modern port in Honduras. Though spared the bloody civil wars wracking its neighbors, the Honduran army quietly waged a campaign against Marxist–Leninist militias such as the Cinchoneros Popular Liberation Movement, notorious for kidnappings and bombings, as well as against many non-militants. The operation included a CIA-backed campaign of extrajudicial killings by government-backed units, most notably Battalion 316.
As the November 1985 election approached, the PLH could not settle on a presidential candidate and interpreted election law as permitting multiple candidates from any one party. The PLH claimed victory when its presidential candidates collectively outpolled the PNH candidate, Rafael Leonardo Callejas, who received 42% of the total vote. José Azcona, the candidate receiving the most votes (27%) among the PLH, assumed the presidency in January 1986. With strong endorsement and support from the Honduran military, the Suazo administration had thus ushered in the first peaceful transfer of power between civilian presidents in more than 30 years. In 1989 Azcona oversaw the dismantling of the Contra forces based in Honduras.
In 1988, in Operation Golden Pheasant, US forces were deployed to Honduras in response to Nicaraguan attacks on Contra supply caches in Honduras.
In January 1990, Rafael Leonardo Callejas won the presidential election and took office, concentrating on economic reform and reducing the deficit. He began a movement to place the military under civilian control and laid the groundwork for the creation of the public prosecution service.
In 1993, PLH candidate Carlos Roberto Reina was elected with 56% of the vote against PNH contender Oswaldo Ramos Soto. He won on a platform calling for "moral revolution" and made active efforts to prosecute corruption and pursue those responsible for alleged human rights abuses in the 1980s. The Reina administration successfully increased civilian control over the armed forces and transferred the national police from military to civilian authority. In 1996, Reina named his own defense minister, breaking the precedent of accepting the nominee of the armed forces leadership.
His administration substantially increased Central Bank net international reserves, reduced inflation to 12.8% a year, restored a better pace of economic growth (about 5% in 1997), and held down spending to achieve a 1.1% non-financial public sector deficit in 1997.
The PLH's Carlos Roberto Flores took office on 27 January 1998 as Honduras's fifth democratically elected president since free elections were restored in 1981, winning with a 10% margin over his main opponent, PNH nominee Nora Gúnera de Melgar (widow of former leader Juan Alberto Melgar). Flores inaugurated International Monetary Fund (IMF) programs of reform and modernization of the Honduran government and economy, with emphasis on maintaining the country's fiscal health and improving international competitiveness.
In October 1998, Hurricane Mitch devastated Honduras, leaving more than 5,000 people dead and 1.5 million displaced. Damages totaled nearly $3 billion. International donors came forward to assist in rebuilding infrastructure, donating US$1.4 billion in 2000.
In November 2001, the National Party won the presidential and parliamentary elections. The PNH gained 61 seats in Congress and the PLH won 55. The PLH candidate Rafael Pineda was defeated by the PNH candidate Ricardo Maduro, who took office in January 2002. The Maduro administration emphasized stopping the growth of the "maras" (street gangs), especially Mara 18 and Mara Salvatrucha.
On 27 November 2005, the PLH candidate Manuel Zelaya beat the PNH candidate and then Head of Congress Porfirio "Pepe" Lobo, and became the new president on 27 January 2006.
Jose Manuel Zelaya Rosales of the Liberal Party of Honduras won 27 November 2005 presidential elections with less than a 4% margin of victory, the smallest margin ever in Honduran electoral history. Zelaya's campaign theme was "citizen power," and he vowed to increase transparency and combat narcotrafficking, while maintaining macroeconomic stability. The Liberal Party won 62 of the 128 congressional seats, just short of an absolute majority.
In 2009 Zelaya caused controversy with his call to have a constitutional referendum in June to decide about convening a Constitutional National Assembly to formulate a new constitution. The constitution explicitly bars changes to some of its clauses, including the term limit, and the move precipitated a Constitutional Crisis.
An injunction against holding the referendum was issued by the Honduran Supreme Court.
Zelaya rejected the ruling and sacked Romeo Vásquez Velásquez, the head of Honduras's armed forces, who had refused to help with the referendum because he did not want to violate the law. The sacking was deemed unlawful by the Supreme Court as well as by the Congress, and Vásquez was reinstated. The president then further defied the Supreme Court by pressing ahead with the vote, which the Court had deemed "illegal". The military had confiscated the ballots and polling materials and stored them at a military base in Tegucigalpa. On 27 June, a day before the vote, Zelaya, followed by a large group of supporters, entered the base and, as commanding officer of the armed forces, ordered that the ballots and polling materials be returned to him. The Congress saw this as an abuse of power and ordered his capture.
On 28 June 2009, the military removed Zelaya from office and deported him to Costa Rica, a neutral country. Elvin Santos, the vice-president at the start of Zelaya's term, had resigned in order to run for president in the coming elections, so by presidential line of succession the head of Congress, Roberto Micheletti, was appointed president. However, given the stance taken by the United Nations and the Organization of American States against the use of military force to depose a president, most countries in the region and in the world continued to recognize Zelaya as the President of Honduras and denounced the actions as an assault on democracy.
Honduras continued to be ruled by Micheletti's administration under strong foreign pressure. On 29 November, general elections were held, with former congressional president and 2005 nominee Porfirio "Pepe" Lobo emerging as the victor.
Inaugurated on 27 January 2010, Pepe Lobo and his administration focused throughout the first year on gaining foreign recognition of the presidency's legitimacy and on Honduras's reinstatement in the OAS.
Geography of Honduras
Honduras is a country in Central America. Honduras borders the Caribbean Sea and the North Pacific Ocean. Guatemala lies to the west, Nicaragua to the southeast, and El Salvador to the southwest. Honduras is the second largest Central American republic, with a total area of .
Honduras has a Caribbean coastline extending from the mouth of the Río Motagua in the west to the mouth of the Río Coco in the east, at Cape Gracias a Dios. The southeastern side of the triangle is a land border with Nicaragua. It follows the Río Coco near the Caribbean Sea and then extends southwestward through mountainous terrain to the Gulf of Fonseca on the Pacific Ocean. The southern apex of the triangle is a coastline on the Gulf of Fonseca, which opens onto the Pacific Ocean. In the west there are two land borders, one with El Salvador and one with Guatemala.
Honduras has three distinct topographical regions: an extensive interior highland area and two narrow coastal lowlands. The interior, which constitutes approximately 80 percent of the country's terrain, is mountainous. The larger Caribbean lowlands in the north and the Pacific lowlands bordering the Gulf of Fonseca are characterized by alluvial plains.
The interior highlands are the most prominent feature of Honduran topography. This mountain area makes up about 80% of the country's area, and is home to the majority of the population. Because the rugged terrain has made the land difficult to traverse and equally difficult to cultivate, this area has not been highly developed. The soil here is poor: Honduras lacks the rich volcanic ash found in other Central American countries. Until the early 20th century, the highland economy consisted primarily of mining and livestock.
In the west, Honduras's mountains blend into the mountain ranges of Guatemala. The western mountains have the highest peaks, with the Pico Congolón at an elevation of and the Cerro Las Minas at . The Honduran border with El Salvador crosses the peak of Cerro El Pital, the highest point in El Salvador at over . These mountains are wooded, covered mainly with pine forests.
In the east, the mountains merge with those in Nicaragua. Although generally not as high as the mountains near the Guatemalan border, the eastern ranges possess some high peaks, such as the Montaña de la Flor at , El Boquerón (Monte El Boquerón) at , and Pepe Bonito at .
One of the most prominent features of the interior highlands is a depression that runs from the Caribbean Sea to the Gulf of Fonseca. This depression splits the country's cordilleras into eastern and western parts and provides a relatively easy transportation route across the isthmus. Widest at its northern end near San Pedro Sula, the depression narrows as it follows the upper course of the Río Humuya. Passing first through Comayagua and then through narrow passes south of the city, the depression widens again as it runs along the border of El Salvador into the Gulf of Fonseca.
Scattered throughout the interior highlands are numerous flat-floored valleys, at in elevation, which vary in size. The floors of the large valleys provide sufficient grass, shrubs, and dry woodland to support livestock and, in some cases, commercial agriculture. Subsistence agriculture has been relegated to the slopes of the valleys, with the limitations of small-sized holdings, primitive technology, and low productivity that traditionally accompany hillside cultivation. Villages and towns, including the capital, Tegucigalpa, are tucked in the larger valleys.
Vegetation in the interior highlands is varied. Much of the western, southern, and central mountains are open woodland; supporting pine forest interspersed with some oak, scrub, and grassy clearings. The ranges toward the east are primarily continuous areas of dense, broad-leaf evergreen forest. Around the highest peaks, remnants of dense rainforest that formerly covered much of the area are still found.
This area of river valleys and coastal plains, which most Hondurans call "the north coast," or simply "the coast," has traditionally been Honduras's most exploited region. The central part of the Caribbean lowlands, east of La Ceiba, is a narrow coastal plain only a few kilometers wide.
To the east and west of this section the Caribbean lowlands widen and in places extend inland a considerable distance along broad river valleys. The broadest river valley, along the Río Ulúa near the Guatemalan border, is Honduras's most developed area. Both Puerto Cortés, the country's largest port, and San Pedro Sula, Honduras's industrial capital, are located here, as is La Ceiba, the third largest city in the country.
To the east, near the Nicaraguan border, the Caribbean lowlands broaden to an extensive area known as La Mosquitia. Unlike the western part of the Caribbean lowlands, the Mosquitia is Honduras's least-developed area. Underpopulated and culturally distinct from the rest of the country, the area consists of inland savannah with swamps and mangrove near the coast. During times of heavy rainfall, much of the savannah area is covered by shallow water, making transportation by means other than a shallow-draft boat almost impossible.
More than 46 "campesinos" from the Aguán Valley, in the far north-east of Honduras, have either been killed or have disappeared since the 2009 coup. In the 1970s, government policy encouraged agricultural cooperatives and collectives to establish themselves in the lightly-populated area, but after 1992 government policy favored privatization. One of the biggest beneficiaries of the new policy and one of the richest men in Honduras, Miguel Facussé, owned some in the lower Aguán, which he planted in African palms for his palm oil venture.
The smallest geographic region of Honduras, the Pacific lowlands, is a strip of land averaging on the north shore of the Gulf of Fonseca. The land is flat, becoming swampy near the shores of the gulf, and is composed mostly of alluvial soils washed down from the mountains. The gulf is shallow and the water rich in fish and mollusks. Mangroves along the shore make shrimp and shellfish particularly abundant by providing safe and abundant breeding areas amid their extensive networks of underwater roots.
Several islands in the gulf fall under Honduras's jurisdiction. The two largest, Zacate Grande and El Tigre, are eroded volcanoes, part of the chain of volcanoes that extends along the Pacific coast of Central America. Both islands have volcanic cones more than in elevation that serve as landmarks for vessels entering Honduras's Pacific.
Honduras controls a number of islands as part of its offshore territories. In the Caribbean Sea, the islands of Roatán (Isla de Roatán), Utila, and Guanaja together form the "Islas de la Bahía" (Bay Islands), one of the eighteen departments into which Honduras is divided. Roatán, the largest of the three islands, is . The Islas de la Bahía archipelago also has a number of smaller islands, among them the islets of Barbareta (Isla Barbareta), Santa Elena (Isla Santa Elena), and Morat (Isla Morat).
Farther out in the Caribbean are the Islas Santanillas, formerly known as Swan Islands. A number of small islands and keys can be found nearby, among them Cayos Zapotillos and Cayos Cochinos. In the Gulf of Fonseca, the main islands under Honduran control are El Tigre, Zacate Grande (Isla Zacate Grande), and Exposición (Isla Exposición).
Honduras has a rainy tropical climate.
The climatic types of each of the three physiographic regions differ. The Caribbean lowlands have a tropical wet climate with consistently high temperatures and humidity, and rainfall fairly evenly distributed throughout the year. The Pacific lowlands have a tropical wet and dry climate with high temperatures but a distinct dry season from November through April. The interior highlands also have a distinct dry season, but, as is characteristic of a tropical highland climate, temperatures in this region decrease as elevation increases.
Unlike in more northerly latitudes, temperatures in the tropics vary primarily with elevation instead of with the season. Land below is commonly known as "tierra caliente" (hot land), between as "tierra templada" (temperate land), and above as "tierra fría" (cold land). Both the Caribbean and Pacific lowlands are "tierra caliente", with daytime highs averaging between throughout the year.
In the Pacific lowlands, April, the last month of the dry season, brings the warmest temperatures; the rainy season is slightly cooler, although higher humidity during the rainy season makes these months feel more uncomfortable. In the Caribbean lowlands, the only relief from the year-round heat and humidity comes during December or January when an occasional strong cold front from the north (a "norte") brings several days of strong northwest winds and slightly cooler temperatures.
The interior highlands range from tierra templada to tierra fría. Tegucigalpa, in a sheltered valley and at an elevation of , has a pleasant climate, with an average high temperature ranging from in April, the warmest month, to in January, the coolest. Above , temperatures can fall to near freezing at night, and frost sometimes occurs.
Rain falls year round in the Caribbean lowlands but is seasonal throughout the rest of the country. Amounts are copious along the north coast, especially in the Mosquitia, where the average rainfall is . Nearer San Pedro Sula, amounts are slightly less from November to April, but each month still has considerable precipitation. The interior highlands and Pacific lowlands have a dry season, known locally as "summer," from November to April. Almost all the rain in these regions falls during the "winter," from May to September. Total yearly amounts depend on surrounding topography; Tegucigalpa, in a sheltered valley, averages only of precipitation.
Honduras lies within the hurricane belt, and the Caribbean coast is particularly vulnerable to hurricanes or tropical storms that travel inland from the Caribbean. Hurricane Francelia in 1969 and Tropical Storm Alleta in 1982 affected thousands of people and caused extensive damage to crops. Hurricane Fifi in 1974 killed more than 8,000 and destroyed nearly the entire banana crop.
In 1998 Hurricane Mitch became the deadliest hurricane to strike the Western Hemisphere in the last two centuries. This massive hurricane not only battered the Honduran coastline but engulfed nearly the entire country with its powerful winds and torrential downpours. Throughout Central America, Mitch claimed in excess of 11,000 lives, with thousands of others missing. More than three million people were left homeless or severely affected. Hurricanes occasionally form over the Pacific and move north to affect southern Honduras, but these Pacific storms are generally less severe and make landfall more rarely.
On September 4, 2007, Hurricane Felix made landfall in Honduras and Nicaragua as a Category 5 hurricane. In November 2008, Hurricane Paloma, along with the October 2008 Central America floods, left at least 60 people dead and more than 300,000 in need of assistance.
Drought in Honduras has become a driver of emigration, causing poor crop yields for poor subsistence farmers, and has been a factor in the formation of migrant caravans to the United States.
According to the FAO, migrants leaving central and western Honduras between 2014 and 2016 most frequently cited "no food" as their reason for leaving.
The frequency of natural disasters in Honduras, such as floods, mudslides, tropical storms and hurricanes, "is expected to increase as climate change intensifies," according to a United States Agency for International Development factsheet.
Honduras is one of the countries which is most at risk from climate change.
Over 40 percent of Hondurans work in the agricultural sector, which is impacted by increasing temperatures and reduced rainfall.
A 2013 bark beetle outbreak destroyed a quarter of all forests in Honduras.
Honduras contributes only 0.1 percent of global greenhouse gas emissions.
Honduras is a water-rich country. The most important river in Honduras is the Ulúa, which flows to the Caribbean through the economically important Valle de Sula. Numerous other rivers drain the interior highlands and empty north into the Caribbean. These other rivers are important not as transportation routes but because of the broad fertile valleys they have produced. The Choluteca River runs south from Tegucigalpa through Choluteca and empties into the Gulf of Fonseca.
Rivers also define about half of Honduras's international borders. The Río Goascorán, flowing to the Gulf of Fonseca, and the Río Lempa define part of the border between El Salvador and Honduras. The Coco River marks about half of the border between Nicaragua and Honduras.
Despite an abundance of rivers, large bodies of water are rare. Lago de Yojoa, located in the west-central part of the country, is the sole natural lake in Honduras. This lake is twenty-two kilometers long and at its widest point measures fourteen kilometers. Several large, brackish lagoons open onto the Caribbean in northeast Honduras. These shallow bodies of water allow limited transportation to points along the coast.
The natural resources include timber, gold, silver, copper, lead, zinc, iron ore, antimony, coal, fish, and hydropower from the mountain rivers.
Frequent mild earthquakes, damaging hurricanes, and floods along the Caribbean coast are among Honduras's principal natural hazards.
Deforestation poses a particular problem for Honduras; the goals of conserving endangered natural resources and promoting economic development have often been difficult to combine, resulting in conflicting policies that fail to protect forests. Honduras has suffered the greatest percentage loss of forest cover of any country in Latin America. The forests in Honduras are an important source of economic resources to finance government programs. The tropical forests in Honduras are diminishing rapidly due to poverty in the country. The majority of the population of Honduras see the forests as an obstacle to the expansion of ranching and agricultural activities, ignoring the significance that forests have for society through protection of fauna, soils, recreation, purification of air, and the regulation of water sources. The rapid growth of the population has also led to the clearing of land for farming, the cultivation of marginal soils in rural areas, and uncontrolled development on the fringes of urban areas.
Illegal logging is also a major problem in Honduras. The majority of the production of timber in the country is illegal. According to the Center for International Policy and the Environmental Investigation Agency, the timber trade corruption involves politicians, timber companies, bureaucrats, mayors, and even the police. All of these factors contribute to deforestation and consequently to soil erosion. According to the Food and Agriculture Organization, Honduras lost 59,000 hectares of forest per year between 1990 and 2000.
Deforestation in regions dominated by tropical dry forests has advanced faster than in regions dominated by other types of forests. Tropical dry forests have lower species richness than moist forests. However, tropical dry forests possess higher levels of endemic species, greater utility for humans, and also a higher human population density. The effects of deforestation are more noticeable during tropical storms and hurricanes. In 1998, Hurricane Mitch killed thousands and caused extensive damage to the country. According to aerial surveys following the storm, mudslides were worse in deforested areas than in forested areas. Many endangered species live in the forests of Honduras, and they may soon become extinct if deforestation continues. The climate has also changed because of the loss of trees, which has shortened the growing season for farmers.
The ground in deforested areas also absorbs all the water, and the largest source of freshwater in Honduras, Lake Yojoa, is on the verge of turning into a swamp due to high rates of pollution and logging. The lake is also being contaminated by heavy metals from local mining activities. Lake Yojoa is home to more than 400 species of birds, but the surrounding area suffers from deforestation and water pollution, and nearby rivers and streams are being polluted with heavy metals as well.
Demographics of Honduras
This article is about the ethnic groups and population of Honduras.
According to the total population was in , compared to 1,487,000 in 1950 (a fivefold increase in 60 years). The proportion of the population aged below 15 in 2010 was 36.8%, 58.9% were aged between 15 and 65 years of age, and 4.3% were aged 65 years or older.
As of 2014, 60% of Hondurans live below the poverty line. More than 30% of the population falls between the lower-middle and upper-middle classes, and less than 10% are wealthy or belong to the upper class (most of whom live in Tegucigalpa and San Pedro Sula).
Structure of the population (01.07.2007) (estimates) (data refer to projections based on the 2001 Population Census):
Structure of the population (01.07.2010) (estimates):
Registration of vital events in Honduras is not complete. The Population Division of the United Nations prepared the following estimates.
Births and deaths
Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Mestizos (Native American mixed with European) have been reported by the CIA World Factbook to be about 90% of the population of Honduras. As in other Latin American countries, the question of racial breakdown of a national population is contentious. Since the beginning of the 20th century at least, Honduras has publicly framed itself as a mestizo nation, ignoring and at times disparaging both the African component of the population and often also the surviving indigenous population that was still regarded as pure blood.
Because of the social stigma attached, many people denied having African ancestry; after African-descended Caribbean workers arrived in Honduras, an active campaign to denigrate all people of African descent made persons of mixed race anxious to deny any African ancestry. Hence official statistics quite uniformly under-represent people with African ancestry in favor of a "two race" solution.
According to the 2001 census, the Amerindian population in Honduras included 381,495 people (6.3% of the total population). With the exception of the Lenca and the Ch'orti', these groups still keep their languages.
Six different Amerindian groups were counted at the 2001 census:
The Afro-Honduran population consists of Garifuna and Creoles.
Examples of well-known Afro-Hondurans are footballers David Suazo, Victor "Muma" Bernardez, and Wilson Palacios; and actor Skai Jackson.
White people, along with Afro-descendants and Amerindians, belong to the minorities of Honduras. Most white Hondurans are descendants of Spanish colonists and live mainly in the western part of the country. Other white Hondurans are descendants of European immigrants who arrived at the beginning of the 20th century.
Examples of white Hondurans are film director Juan Carlos Fanconi and Francisco Morazán, former president of the Federal Republic of Central America.
Honduras hosts a significant Palestinian community (the vast majority of whom are Christian Arabs). These Arab-Hondurans are sometimes called "Turcos", because they arrived in Honduras using Turkish travel documents, as their homeland was then under the control of the Ottoman Empire. The Palestinians arrived in the country in the late 19th and early 20th centuries, establishing themselves especially in the city of San Pedro Sula.
There were 389 Italians in Honduras in 2014, nearly all of them concentrated in the capital area.
There is also a small Chinese community in Honduras. A lawyer of the Committee for the Defense of Human Rights in Honduras (CODEH) stated that the Chinese community in Honduras is rather small. Many of the Chinese are immigrants who arrived from China after the revolution and their descendants.
Politics of Honduras
Politics of Honduras takes place in a framework of a multi-party system presidential representative democratic republic. The President of Honduras is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in the National Congress of Honduras. The party system is dominated by the conservative National Party of Honduras and the Liberal Party of Honduras.
The Judiciary is independent of the executive and the legislature.
The 1981 Constitution of Honduras provides for a fairly strong executive in some respects, but many powers conceded to the executive elsewhere are designated duties of the unicameral National Congress. The judiciary is appointed by the National Congress.
That constitution delineates mechanisms for amending it, but it also declares eight articles immutable and not subject to change, including a guarantee of a republican form of government and an explicit prohibition against the presidential candidacy of anyone who has previously been president at any time or for any reason.
The constitution also provides for an independent organ to supervise and implement elections, the Superior Electoral Tribunal. Another organ, similarly independent of the three main branches of government, is a Special Court for Resolution of Conflicts Between Branches of Government. The current president, Juan Orlando Hernández, is considered a divisive figure, with political support within the country as well as vocal opposition from the public.
The president is both the chief of state and head of government and is elected by popular vote for a four-year term with no possibility of re-election. In the most recent election, however, President Juan Orlando Hernández was re-elected despite national protests and a dispute over ballots, after the Supreme Court voided the single-term limit on the country's presidency in 2015.
The National Congress of Honduras "(Congreso Nacional)" has 128 members "(diputados)", elected for four-year terms by proportional representation; congressional seats are assigned to the parties' candidates on a departmental basis in proportion to the number of votes each party receives.
The judiciary includes the Supreme Court of Justice ("Corte Suprema de Justicia"), courts of appeal, and several courts of original jurisdiction, such as labor, tax, and criminal courts. The justices of the Supreme Court of Justice are elected for seven-year terms by the National Congress.
For administrative purposes, Honduras is divided into 18 departments, with departmental and municipal officials selected for four-year terms.
Honduras has six registered political parties:
Since about 1920 Honduras has had essentially a two-party system, with the Liberal Party and the National Party dominating electoral politics. The early 1980s were a relatively peaceful period compared to other countries in Central America buffeted by left-wing guerrillas. The Honduran government provided bases for U.S. backed counter-revolutionary armies operating in Nicaragua.
Between 1981 and 1984, several forced disappearances were carried out by the military, as proved before the Inter-American Court of Human Rights and documented in the Report of the National Commissioner for the Protection of Human Rights in Honduras. In 1984, armed-forces chief General Gustavo Álvarez was deposed amid anti-US demonstrations in the capital, Tegucigalpa; this marked a decrease in counter-revolutionary activity, and the government continued to assist the United States' anti-Sandinista activities in Nicaragua in return for economic aid.
In 1986, the Liberal Party's José Azcona del Hoyo was elected president. Allegations of human rights abuses, and summary executions by police—especially of street gangs—have diminished steadily in recent years up to the present (2009), while political violence has been a constant.
Rafael Callejas became president in 1990 and introduced neo-liberal economic reforms and austerity measures. He is credited with a major push to improve the country's transportation infrastructure. He also implemented a policy requiring cabinet nominees to first pass appropriate examinations, an unusual practice among politicians.
In 1993, the Liberal Party's Carlos Reina was elected president, promising to reform the judicial system and limit the power of the armed forces. In April 1995 compulsory military service was abolished. The Liberal Party's Carlos Roberto Flores Facussé was elected in 1997, also promising to restructure the armed forces; in 1999 the armed forces were brought under civilian control.
In 2001, Ricardo Maduro was elected president on a platform that promised to stop rampant inflation afflicting the nation, and to put a stop to the brutal trademark violence of street gangs. At the time, the abuse of child-protection laws by gangs recruiting minors, and aggressive recruitment of members under threat of violence, lent broad popular support for Maduro's enlistment of the armed forces for a greater role in fighting crime during this time, as the police were seen as overwhelmed.
A major political issue in Honduras since about 1990 has been the high level of violent crime associated with the "maras" (Spanish for gangs, predominantly of young people) and drug trafficking organizations involved in the transport of cocaine from South America to the United States. Although gangs existed in Tegucigalpa in the 1980s, the phenomenon exploded around 1990. The range of criminal activities that street gangs carry out is broad, from kidnapping and human trafficking to drug, auto, and weapons smuggling, as well as domestic extortion. A recent estimate by the FBI and its counterparts in Central America put the number of gang members in Honduras at 36,000.
Gang membership is partly attributable to population movement between Honduras and the United States. During the 1980s, many Hondurans fled to the US to avoid civil war and strife, and emigration continued for economic reasons after that. Other than civil war, high rates of poverty and unemployment and lack of education make at-risk youth more vulnerable to gangs. In Honduras, close to 30% of the population is aged 15–24.
Immigrant children who formed or joined urban gangs in cities such as Los Angeles began to have an impact in Honduras around 1990 because gang members completing prison sentences were deported. Deportees brought the two main gangs in Honduras, MS-13 and the 18th Street gang. In 2004, the U.S. Department of Homeland Security's Office of Immigration and Enforcement reported that Honduras received 2,345 total criminal deportations. However, it is unclear how many were gang-affiliated.
Almost a third of Hondurans feel a sense of insecurity related to crime. The report listed as causes and risk factors, "Lack of opportunities and alternatives for youth and adolescents, family breakdown, movement of Hondurans to and from the United States, and abuse of drugs and alcohol, and presence of weapons".
However, the "departamento" with the most weapons per person in the nation, Olancho, is the only area with no gang presence at all.
The report adds, however, that the "overwhelming attention given to gang violence by the media and the government" is partly responsible. Gang members often compete to see which crime receives the most coverage. It has recently been contended that the media tend to exaggerate the gang problem, making Hondurans believe their communities are less secure than they really are, because of the extreme violence that accompanies the crimes these gangs perpetrate. Another reason for the attention is that gangs disproportionately affect the lower-income population and that almost all areas of public activity have been affected.
The murder rate in 1999 was 154 murders per 100,000; around 2005 this had fallen to 49 per 100,000. (The death rate from all causes is roughly 1000 per 100,000 population.) Most of the crime in Honduras takes place in the big cities of Tegucigalpa and San Pedro Sula. A survey by Mitchell A. Seligson in 2004 found that 18% of the population thought public security and violence – delinquency, crime, violence, drug trafficking, and gangs – were the most serious problem facing the country.
Honduras is not only a transit point for cocaine running between Colombia and the United States (a pattern substantially disrupted after the arrest and exile of ex-president Mel Zelaya) but also has an internal market, creating all sorts of inner-city problems. Gangs sell crack, commit other crimes, and hire themselves out to organised drug smugglers. Those engaged in international trafficking are better resourced than the state authorities combating them. Although gang members have been arrested for selling drugs at the street level, it is still unclear how much interaction they have with the larger drug cartels and their operations within Honduras.
Some would use this argument to justify increasing US military aid to Honduras to help fight the organised drug gangs, while others claim that Honduras would be better off legalizing drugs, thus avoiding military solutions to Honduran security problems. A recent form of U.S. aid that addresses the gang problem was the creation of the Central American Regional Security Initiative (CARSI), originally seen as part of the U.S.–Mexico Mérida Initiative. In 2010 the U.S. Congress separated out funding for Central America totaling $83 million. Although some of the aid came in the form of military hardware, some components focused on strengthening the receiving country's judicial system.
President Ricardo Maduro, a former chairman of the Central Bank of Honduras, ran on an anti-crime platform after his only son was murdered on 28 April 1999. During his tenure at the Central Bank of Honduras, a banking license was given to Banco de Producción. After leaving the Central Bank he became chairman and majority stockholder of Banco de Producción, and the general manager of the Central Bank, Ana Cristina Mejia de Pereira, became general manager of Banco de la Producción.
Maduro came into power in January 2002 with a wave of measures against gangs and delinquency, the most noticeable being soldiers patrolling the streets. Many gang members were jailed for illicit association. His "Mano Dura" policy (a term used to describe Central American leaders taking a hard stance against crime) led to a new penal code in 2003 that made street gangs like MS-13 and M-18 illegal and established jail sentences of up to 12 years for proven membership.
Violent crime dipped noticeably under Maduro. These "mano dura" policies had significant downsides as well. For example, many youths were wrongly arrested for membership but were later recruited into gangs while in jail. The gang round-ups also led to overcrowding in the prison system. Despite the initial signs of success, gangs learned to adapt and continued to carry out their activities. Some reports say that gang leaders from El Salvador came into Honduras to help stop the gangs' decline.
Under President Zelaya's term, the government attempted to create dialog with gang members to sway them to renounce their violence and re-integrate into society. However, this program relied mainly on private groups to implement the actual re-entry programs. Zelaya also created a specialized anti-gang unit within the police force which he used to coordinate patrols with the Honduran military. Although these patrols led to the arrests of 1,200 gang members, the rate of violence in Honduras did not subside.
The gangs' desperation resulted in a "declaration of war" against the government, and three major events brought this small country to the attention of the world media: a massacre of 68 prisoners at the prison farm just outside La Ceiba on 5 March 2003, a fire in the prison at San Pedro Sula that killed 107 prisoners on 18 May 2004, and the massacre of 27 innocent men, women, and children in San Pedro Sula on 23 December 2004.
A massacre in the San Pedro Sula suburb of Chamelecón left 27 dead and 29 injured. The murderers left behind a message, claiming to come from the Cinchoneros, railing against Maduro, Lobo, Álvarez, and the death penalty. The Cinchoneros are believed to be defunct, however. The attackers promised another massacre before the new year; however, one suspect was detained very shortly afterwards in another part of San Pedro Sula, and further arrests were later made. Local police said that the gunmen were members of the street gang Mara Salvatrucha (MS-13), and the supposed mastermind of the attack, Ebner Aníbal Rivera-Paz, was later arrested in Falfurrias, Texas.
After Maduro left office, gangs resurged and their presence continued, although at a lower level than before, now using anti-government demonstrations as cover for their activities.
The PNH and PLH have ruled the country for decades. In recent decades, Honduras has had five Liberal presidents: Roberto Suazo Córdova, José Azcona del Hoyo, Carlos Roberto Reina, Carlos Roberto Flores, and Manuel Zelaya; and three Nationalists: Rafael Leonardo Callejas Romero, Porfirio Lobo Sosa, and Ricardo Maduro.
The elections have been full of controversies, including questions about whether Azcona was born in Honduras or Spain, and whether Maduro should have been able to stand given he was born in Panama.
On February 20, 2005 the PNH and the PLH held internal party elections (primaries) to decide who would represent them in the forthcoming presidential elections in November. Porfirio Pepe Lobo became the PNH candidate. Manuel Zelaya became the Liberal Party candidate. Forty-five percent of the electorate voted in the primaries: 24% for the Liberals and 21% for the National Party. According to the Country Report quoted in the U.C. San Diego Library "Latin American election results", "The low participation rate in the primaries . . . is a reflection of the lack of public faith in Honduras's political institutions and leaders."
A Presidential and general election was held on November 27, 2005. Manuel Zelaya of the Liberal Party of Honduras (Partido Liberal de Honduras: PLH) won, with Porfirio Pepe Lobo of the National Party of Honduras (Partido Nacional de Honduras: PNH) coming in second. Voter turnout was 55% of the 3.9 million eligible. The PNH challenged the election results, and Lobo Sosa did not concede until December 7. Towards the end of December the government finally released the total ballot count, giving Zelaya the official victory. Zelaya was inaugurated as Honduras' new president on January 27, 2006.
On 20 December 2007, the National Congress, at the urging of the leaders of both of the dominant parties, passed a set of electoral reforms.
The reforms were opposed by President Manuel Zelaya, who indicated that he would veto them, citing constitutional objections.
The reforms would move the date of the presidential primaries ahead from February 2009 to November 2008, change the location of vote-counting from a central one to the individual municipalities, and radically increase public funding of political parties, from about US$3.2 million every election cycle to about US$52 million every election cycle.
President Manuel Zelaya's 2008 affiliation with the Bolivarian Alliance for the Americas (ALBA) sparked controversy. There was further controversy when he refused to submit the government budget for congressional approval.
In April and May 2009 Zelaya announced plans for a non-binding poll on whether to hold a referendum about whether to convene a constituent assembly that would rewrite the constitution.
The Honduran Supreme Court had upheld a lower court injunction against the 28 June poll, and on 26 June – while Zelaya ignored the injunction – it issued a secret order for his detention.
On June 28 Honduran soldiers entered the presidential palace and arrested Zelaya, preempting the poll. They put him on a military airplane which flew him to Costa Rica.
Subsequently on June 28, the Honduran Congress, in an extraordinary session, voted to remove Zelaya from office and appoint his constitutional successor, Speaker of Congress Roberto Micheletti, in his place as interim President for a term that ended on 27 January 2010.
International reaction was universally negative, with widespread condemnation of the events as a coup d'état. Almost no foreign government recognized Micheletti as president.
Some of the main political pressure groups are the Committee for the Defense of Human Rights in Honduras (CODEH), the Confederation of Honduran Workers (CTH), the Coordinating Committee of Popular Organizations (CCOP), the General Workers Confederation (CGT), the Honduran Council of Private Enterprise (COHEP), the National Association of Honduran Campesinos (ANACH), the National Union of Campesinos (UNC), and the United Federation of Honduran Workers (FUTH).
Economy of Honduras
The economy of Honduras is based mostly on agriculture, which accounted for 14% of its gross domestic product (GDP) in 2013. The leading export, coffee (US$340 million), accounted for 22% of total Honduran export revenues. Bananas, formerly the country's second-largest export until being virtually wiped out by Hurricane Mitch in 1998, recovered in 2000 to 57% of pre-Mitch levels. Cultivated shrimp is another important export sector. Since the late 1970s, towns in the north began industrial production through maquiladoras, especially in San Pedro Sula and Puerto Cortés.
Honduras has extensive forests, marine, and mineral resources, although widespread slash-and-burn agricultural methods continue to destroy Honduran forests. The Honduran economy grew 4.8% in 2000, recovering from the Mitch-induced recession (−1.9%) of 1999. The Honduran maquiladora sector, the third-largest in the world, continued its strong performance in 2000, providing employment to over 120,000 and generating more than $528 million in foreign exchange for the country. Inflation, as measured by the consumer price index, was 10.1% in 2000, down slightly from the 10.9% recorded in 1999. The country's international reserve position continued to be strong in 2000, at slightly over US$1 billion. Remittances from Hondurans living abroad (mostly in the US) rose 28% to $410 million in 2000. The lempira, the national currency, depreciated for many years but stabilized at 19 lempiras to the US dollar in 2005. The Honduran people are among the poorest in Latin America; gross national income per capita (2007) is US$1,649, while the average for Central America is $6,736.
Honduras is the fourth poorest country in the Western Hemisphere; only Haiti, Nicaragua, and Guyana are poorer. Using alternative statistical measurements in addition to the gross domestic product can provide greater context for the nation's poverty.
The country signed an Enhanced Structural Adjustment Facility (ESAF), later converted to a Poverty Reduction and Growth Facility (PRGF), with the International Monetary Fund in March 1999. As of about 2000, Honduras continues to maintain stable macroeconomic policies. It has not been swift in implementing structural changes, such as privatization of the publicly owned telephone and energy-distribution companies, that are desired by the IMF and other international lenders. Honduras received significant debt relief in the aftermath of Hurricane Mitch, including the suspension of bilateral debt-service payments and bilateral debt reduction by the Paris Club (including the US) worth over $400 million. In July 2000, Honduras reached its decision point under the Heavily Indebted Poor Countries Initiative (HIPC), qualifying the country for interim multilateral debt relief.
Land appears to be plentiful and readily exploitable, but the presence of apparently extensive land is misleading because the nation's rugged, mountainous terrain restricts large-scale agricultural production to narrow strips on the coasts and to a few fertile valleys. Honduras's manufacturing sector has not yet developed beyond simple textile and agricultural processing industries and assembly operations. The small domestic market and competition from more industrially advanced countries in the region have inhibited more complex industrialization.
After Honduras achieved independence from Spain in the early 19th century, its economic growth became closely related to its ability to develop attractive export products. During much of the 19th century, the Honduran economy languished; traditional cattle raising and subsistence agriculture produced no suitable major export. In the latter part of the century, economic activity quickened with the development of large-scale, precious metal mining. The most important mines were in the mountains near the capital of Tegucigalpa and were owned by the New York and Honduras Rosario Mining Company (NYHRMC).
Silver was the principal metal extracted, accounting for about 55% of exports in the 1880s. Mining income stimulated commercial and ancillary enterprises, built infrastructure, and reduced monetary restraints on trade. There were few other beneficial economic effects, however, because the mining industry was never well integrated into the rest of the Honduran economy. The foreign mining companies employed a small workforce, provided little or no government revenue, and relied mostly on imported mining equipment.
Honduras's international economic activity surged in the early 20th century. Between 1913 and 1929, its agricultural exports rose from $3 million ($2 million from bananas) to $25 million ($21 million from bananas). These "golden" exports were supported by more than $40 million of specialized banana company investment in the Honduran infrastructure and were safeguarded by US pressure on the national government when the companies felt threatened.
The overall performance of the Honduran economy remained closely tied to banana prices and production from the 1920s until after the mid-century because other forms of commercial export agriculture were slow to emerge. In addition, until drastically reduced in the mid-1950s, the workforce associated with banana cultivation represented a significant proportion of the wage earners in the country. Just before the banana industry's largest strike in 1954, approximately 35,000 workers held jobs on the banana plantations of the United Fruit Company (later United Brands Company, then Chiquita Brands International) or the Standard Fruit Company (later bought by Castle & Cooke, then Dole Food Company).
After 1950 Honduran governments encouraged agricultural modernization and export diversification by spending heavily on transportation and communications infrastructure, agricultural credit, and technical assistance. During the 1950s, as a result of these improvements and strong international export prices, beef, cotton, and coffee became significant export products for the first time. Honduran sugar, timber, and tobacco were also exported, and by 1960 bananas had declined to a more modest share (45 percent) of total exports. During the 1960s, industrial growth was stimulated by the establishment of the Central American Common Market (CACM).
As a result of the reduction of regional trade barriers and the construction of a high common external tariff, some Honduran manufactured products, such as soaps, sold successfully in other Central American countries. Because of the greater size and relative efficiency of the Salvadoran and Guatemalan industrial sectors, however, Honduras bought far more manufactured products from its neighbors than it sold to them. After the 1969 Soccer War with El Salvador, Honduras effectively withdrew from the CACM. Favorable bilateral trade arrangements between Honduras and the other former CACM partners were subsequently negotiated, however.
A political shift in the 1980s had strong and unexpected repercussions on the country's economic condition. Beginning in late 1979, as insurgency spread in neighboring countries, Honduran military leaders enthusiastically came to support United States policies in the region. This alignment resulted in financial support that benefited the civilian as well as the military ministries and agencies of Honduras. Honduran defense spending rose throughout the 1980s until it consumed 20 to 30 percent of the national budget. Before the military buildup began in fiscal year (FY) 1980, United States military assistance to Honduras was less than US$4 million. Military aid more than doubled to reach just under US$9 million by FY 1981, surged to more than $31 million by FY 1982, and stood at $48.3 million in FY 1983. Tiny Honduras soon became the tenth-largest recipient of United States assistance; total economic and military aid rose to more than $200 million in 1985 and remained at more than $100 million for the rest of the 1980s.
The increasing dependence of the Honduran economy on foreign aid was aggravated by a severe, regionwide economic decline during the 1980s. Private investment plummeted in 1980, and capital flight for that year was $500 million. To make matters worse, coffee prices plunged on the international market in the mid-1980s and remained low throughout the decade. In 1993 average annual per capita income remained depressingly low at about $580, and 75 percent of the population was poor by internationally defined standards.
Traditionally, Honduran economic hopes have been pinned on land and agricultural commodities. Despite those hopes, however, usable land has always been severely limited. Honduras's mostly mountainous terrain confines agriculturally exploitable land to narrow bands along the coasts and to some previously fertile but now largely depleted valleys. The country's once abundant forest resources have also been dramatically reduced, and Honduras has not derived economically significant income from mineral resources since the 19th century. Similarly, Honduras's industrial sector never was fully developed. The heady days of the CACM (mid- to late 1960s), which produced an industrial boom for El Salvador and Guatemala, barely touched the Honduran economy except to increase its imports because of the comparative advantages enjoyed by the Salvadoran and Guatemalan economies and Honduras's inability to compete.
Bananas and coffee have also proven unreliable sources of income. Although bananas are less subject to the vagaries of international markets than coffee, natural disasters such as Hurricane Fifi in 1974, drought, and disease have appeared with a regular, albeit random, frequency to take their economic toll through severely diminished harvests. Moreover, bananas are grown and marketed mostly by international corporations, which keep the bulk of the wealth generated. Coffee exports, equally unreliable as a major source of economic support, surpassed bananas in the mid-1970s as Honduras's leading export income earner, but international price declines coupled with huge fiscal deficits underlined the vulnerability of coffee as an economic base.
As Honduras entered the 1990s, it did have some factors working in its favor—relative peace and a stronger civilian government with less military interference in the politics and economy of the country than in past years. The country was hobbled, however, by horrendous foreign debt, could claim only diminished natural resources, and had one of the fastest-growing and urbanizing populations in the world. The government's daunting task then became how to create an economic base able to compensate for the withdrawal of much United States assistance without becoming solely dependent on traditional agricultural exports.
In the 1990s, bananas were booming again, particularly as new European trade agreements increased market size. Small banana-producing cooperatives lined up in the 1990s to sell their land to the commercial giants, and the last banana-producing lands held by the government were privatized. Like most of Central America, Honduras in the 1990s began to woo foreign investors, mostly Asian clothing assembly firms, and it held high hopes for revenue to be generated by privatizing national industries. With one of the most strike-prone labor forces in Central America, debt-burdened and aging industrial assets, and a dramatically underdeveloped infrastructure, Honduras, however, has distinct economic disadvantages relative to its Central American and Caribbean neighbors, who compete with Honduras in the same export markets.
Honduran president Rafael Leonardo Callejas Romero, elected in November 1989, enjoyed little success in the early part of his administration as he attempted to adhere to a standard economic austerity package prescribed by the International Monetary Fund (IMF) and the World Bank. As the November 1993 presidential elections drew closer, the political fallout of austere economic measures made their implementation even less likely. Any hope for his party's winning the 1993 election was predicated on improving social programs, addressing employment needs, and appeasing a disgruntled, vocal public sector. However, reaching those goals required policies that moved away from balancing the budget, lowering inflation, and reducing the deficit and external debt to attract investment and stimulate economic growth.
Callejas inherited an economic mess. The economy had deteriorated rapidly, starting in 1989, as the United States Agency for International Development (AID) pointedly interrupted disbursements of its grants to Honduras to signal displeasure with the economic policies of the old government and to push the new government to make economic reforms. Nondisbursal of those funds greatly exacerbated the country's economic problems. Funds from the multilateral lending institutions, which eventually would help fill the gap left by the reduction of United States aid, were still under negotiation in 1989 and would be conditioned first on payment of arrears on the country's enormous external debt.
Between 1983 and 1985, the government of Honduras—pumped up by massive infusions of external borrowing—had introduced expensive, high-tech infrastructure projects. The construction of roads and dams, financed mostly by multilateral loans and grants, was intended to generate employment to compensate for the impact of the regionwide recession. In reality, the development projects served to swell the ranks of public-sector employment and line the pockets of a small elite. The projects never sparked private-sector investment or created substantial private employment. Instead, per capita income continued to fall as Honduras's external debt doubled. Even greater injections of foreign assistance between 1985 and 1988 kept the economy afloat, but it soon became clear that the successive governments had been borrowing time as well as money.
Foreign aid between 1985 and 1989 represented about 4.6 percent of the gross domestic product (GDP). About 44 percent of the government's fiscal shortfall was financed through cash from foreign sources. Side effects of the cash infusion were that the national currency, the lempira, became overvalued and exports dropped. A booming public sector, with its enhanced ability to import, was enough to keep the economy showing growth, based on private consumption and government spending. But the government did little to address the historical, underlying structural problems of the economy—its overdependence on too few traditional commodities and lack of investment. Unemployment mushroomed, and private investment withered.
By 1989, President Callejas's broad economic goal became to return Honduran economic growth to 1960–80 levels. During the decades of the 1960s and 1970s, the country's economy, spurred mostly by erratically fluctuating traditional agricultural commodities, nevertheless averaged real annual growth of between 4 and 5 percent. At the end of the 1980s, however, Callejas had few remaining vehicles with which to pull the country out of the deep regionwide recession of the 1980s. Real growth between 1989 and 1993 translated to mostly negative or small positive per capita changes in the GDP for a population that was growing at close to 4 percent annually.
President Callejas attempted to adhere to conditions of desperately needed new loans. Cutting the size of the public sector workforce, lowering the deficit, and enhancing revenues from taxes—as mandated by the multilateral lending institutions—were consistently his biggest stumbling blocks. Despite his all-out effort to reduce the public-sector deficit, the overall ratio of fiscal deficit to the GDP in 1990 showed little change from that in 1989. The total public-sector deficit actually grew to 8.6 percent of the GDP, or nearly L1 billion, in 1991.
The 1993 deficit expanded to 10.6 percent of GDP. The Honduran government's medium-term economic objectives, as dictated by the IMF, were to have generated real GDP growth of 3.5 percent by 1992 and 4 percent by 1993. In fact, GDP growth was 3.3 percent in 1991, 5.6 percent in 1992, and an estimated 3.7 percent in 1993. The economy had operated so long on an ad hoc basis that it lacked the tools to implement coherent economic objectives. Solving the most immediate crisis frequently took precedence over long-term goals.
Inflation
By 1991 President Callejas had achieved modest success in controlling inflation. Overall inflation for 1990 had reached 36.4 percent—not the hyperinflation experienced by some Latin American countries—but still the highest annual rate for Honduras in forty years. The Honduran government and the IMF had set an inflation target of 12 percent for 1992 and 8 percent for 1993. The actual figures were 8.8 percent in 1992 and an estimated 10.7 percent for 1993. Hondurans had been accustomed to low inflation (3.4 percent in 1985, rising to 4.5 percent by the end of 1986), partly because the lempira's peg to the dollar linked Honduras's inflation rate to inflation rates in developed countries. But the expectation for low inflation made the reality of high inflation that much worse and created additional pressures on the government for action when inflation soared in 1990.
Between 1980 and 1983, 20 percent of the workforce was unemployed—double the percentage of the late 1970s. Job creation remained substantially behind the growth of the labor force throughout the 1980s. Unemployment grew to 25 percent by 1985, and combined unemployment and underemployment jumped to 40 percent in 1989. By 1993, 50 to 60 percent of the Honduran labor force was estimated to be either underemployed or unemployed.
The government's acceptance of foreign aid during the 1980s, in lieu of economic growth sparked by private investment, allowed it to ignore the necessity of creating new jobs. Honduras's GDP showed reasonable growth throughout most of the 1980s, especially when compared to the rest of Latin America, but it was artificially buoyed by private consumption and public-sector spending.
Mainstay agricultural jobs became scarcer in the late 1970s. Coffee harvests and plantings in border areas decreased because fighting in neighboring Nicaragua and El Salvador spilled over into Honduras. Other factors contributing to the job scarcity were limited land, a reluctance on the part of coffee growers to invest while wars destabilized the region, and a lack of credit. Small farmers became increasingly unable to support themselves as their parcels of land diminished in size and productivity.
Problems in the agricultural sector have fueled urbanization. The Honduran population was 77 percent rural in 1960. By 1992 only 55 percent of the Honduran population continued to live in rural areas. Peasants (campesinos) flocked to the cities in search of work but found little there. Overall unemployment has been exacerbated by an influx of refugees from the wars in neighboring countries, attracted to Honduras, ironically, by its relatively low population density and relative peace. In the agricultural sector (which in 1993 still accounted for about 60 percent of the labor force), unemployment has been estimated to be far worse than the figures for the total labor force.
Honduran urban employment in the early 1990s has been characterized by underemployment and marginal informal-sector jobs, as thousands of former agricultural workers and refugees have moved to the cities seeking better lives. Few new jobs have been generated in the formal sector, however, because the domestic private sector and foreign investments have dropped and coveted public-sector jobs have been reserved mostly for the small Honduran middle class with political or military connections. Only one in ten Honduran workers was securely employed in the formal sector in 1991.
In the mid-1980s, the World Bank reported that only 10,000 new jobs were created annually; the low rate of job creation resulted in 20,000 people being added to the ranks of the unemployed every year. The actual disparity between jobs needed for full employment and new jobs created exceeded that projection, however. For those with jobs, the buying power of their wages tumbled throughout the 1980s while the cost of basic goods, especially food, climbed precipitously.
Throughout the 1960s and most of the 1970s, the military-led governments of Honduras ran a state-sponsored and state-financed economy. The governments provided most guarantees for loans to a strong but patronage-dominated and somewhat corrupt public sector that included recipients of graft extracted from foreign and domestic investors, and to costly state-developed enterprises. By 1989 and the election of president Callejas, however, a heavy toll had been taken by regionwide economic recession, civil war in neighboring countries, the drying up of most external credit, and capital flight equaling more than $1.5 billion.
Callejas began to shift economic policy toward privatizing government-owned enterprises, liberalizing trade and tariff regulations, and encouraging increased foreign investment through tax and other incentives. The Callejas administration did not seek less government control. Rather it changed the government's objectives by focusing on reducing public-sector spending, the size of the public-sector workforce, and the trade deficit. Overall economic planning became the responsibility of the National Superior Planning Council, directed by the minister of economy and commerce. President Callejas, a US-trained economist, brought new professionalism and technical skills to the central government as he began the arduous task of long-term economic reform.
The official exchange rate of the lempira, pegged at US$1=L2 since 1918, was dramatically devalued in 1990. Exchange controls had been introduced in 1982, resulting in a parallel currency market (black market) and several confusing official exchange rates operating simultaneously. Some of those rates were legally recognized in 1990 when President Callejas introduced a major series of economic policy reforms, which included reducing the maximum import tariff rate from 90 to 40 percent and getting rid of most surcharges and exemptions.
The value of the lempira was adjusted to US$1=L4, with the exception of the rate for debt equity conversions, which remained at the old rate of US$1=L2. The official conversion rate of the lempira fell to US$1=L7.26 in December 1993. The president also introduced temporary taxes on exports, which were intended to increase central government revenue. Additional price and trade liberalization measures and fewer government regulations became part of his ongoing reforms.
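The scale of these successive adjustments can be illustrated with simple arithmetic (an illustrative sketch using only the rates cited above, not figures from the source): a fixed lempira-denominated sum lost half its dollar value when the rate moved from L2 to L4 per US$1, and roughly 72 percent of it by the time the official rate reached L7.26 in December 1993.

```python
# Illustrative arithmetic: dollar value of a fixed lempira amount
# under the successive official exchange rates cited in the text.
rates = {
    "pre-1990 peg": 2.00,       # US$1 = L2
    "1990 adjustment": 4.00,    # US$1 = L4
    "December 1993": 7.26,      # US$1 = L7.26
}

amount_lempiras = 100.0
baseline_dollars = amount_lempiras / rates["pre-1990 peg"]

for label, rate in rates.items():
    dollars = amount_lempiras / rate
    loss = 1 - dollars / baseline_dollars
    print(f"{label}: L{amount_lempiras:.0f} = US${dollars:.2f} "
          f"({loss:.0%} loss vs. the old peg)")
```

The L100 figure is arbitrary; any lempira amount shows the same proportional erosion, which is why the devaluation weighed so heavily on wage earners paid in lempiras.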
Budget
Throughout the 1980s, the Honduran government was heavily financed by foreign assistance. External financing—mostly bilateral credit from the United States—rose dramatically until it reached 87 percent of the public deficit in 1985, rising even further in subsequent years. By 1991 the public-sector deficit was entirely financed with net external credit. That financing permitted the government to reduce the demand for internal credit and, therefore, to maintain its established exchange rate.
In 1991 Callejas managed to give the appearance of having reduced the overall fiscal deficit, a requirement for new credit. But the deficit decrease was mostly an accounting device because it resulted from the postponement of external payments to Paris Club creditors and eventually would be offset by pressure to raise public investment. During 1991, loan negotiations with multilateral and bilateral lending institutions brought Honduras $39.5 million in United States development assistance, $70 million in balance-of-payments assistance in the form of cash grants, and $18.8 million in food aid.
Honduras also negotiated $302.4 million in concessional loans from the multilateral lending institutions. Total outstanding external debt as a percentage of GDP fell from 119 percent in 1990 to 114 percent in 1991 and to 112 percent in 1993. This drop was largely the result of debt forgiveness of $448.4 million by the United States, Switzerland, and the Netherlands. Scheduled amortization payments of an average $223.2 million per year, however, guaranteed that Honduras's gross funding requirements would remain large indefinitely.
The government of Honduras projected that overall tax revenues would increase from 13.2 percent of GDP in 1989 to about 15.7 percent in 1991. Adjustments for low coffee prices and the continuation of lax collection methods, however, undermined those goals. Despite these tax increases, Honduras has low tax rates compared to developed countries, with particularly low property taxes.
Honduras suffers from an overabundance of unskilled and uneducated laborers. Most Honduran workers in 1993 continued to be employed in agriculture, which accounted for about 60 percent of the labor force. More than half of the rural population, moreover, remains landless and heavily dependent on diminishing seasonal labor and low wages. Fifty-five percent of the farming population subsists on less than two hectares and earns less than $70 per capita per year from those plots, mostly by growing subsistence food crops.
In 1993 only about 9–13 percent of the Honduran labor force was engaged in the country's tiny manufacturing sector—one of the smallest in Central America. Skilled laborers are scarce. Only about 25,000 people, of whom about 21 percent are industrial workers, graduate yearly from the National Institute of Professional Training (Instituto Nacional de Formación Profesional—INFOP), established in 1972.
Hundreds of small manufacturing firms, the traditional backbone of Honduran enterprise, began to go out of business beginning in the early 1990s, as import costs rose and competition for skilled labor from the mostly Asian-owned assembly industries drove up wages. The small Honduran shops, most of which had manufactured clothing or food products for the domestic market, traditionally received little support in the form of credit from the government or the private sector and were more like artisans than conventional manufacturers. Asian-owned export assembly firms (maquiladoras), operating mostly in free zones established by the government on the Caribbean coast, attract thousands of job seekers and swell the populations of new city centers such as San Pedro Sula, Tela, and La Ceiba. Those firms employed approximately 16,000 workers in 1991.
About one-third of the Honduran labor force was estimated to be working in the service or "other" sector in 1993. That classification usually means that a person ekes out a precarious livelihood in the urban informal sector or as a poorly paid domestic. As unemployment soared throughout Central America in the 1980s, more and more people were forced to rely on their own ingenuity in order to simply exist on the fringes of Honduran society.
Research on the informal sector has found evidence of child labor concentrated mostly in Honduran agriculture. In 2014, the U.S. Department of Labor's "List of Goods Produced by Child Labor or Forced Labor" cited three goods produced under such working conditions in Honduras: coffee, lobsters, and melons.
Honduran governments have set minimum wages since 1974, but enforcement has generally been lax. That laxity increased at the beginning of the 1980s. Traditionally, most Honduran workers have not been covered by social security, welfare, or minimum wages. Multinational companies usually paid more than the standard minimum wage, but, overall, the Honduran wage earner has experienced a diminution of real wages and purchasing ability for more than a decade. When they occurred, minimum wage adjustments generally did not keep up with cost-of-living increases.
After a major currency devaluation in 1990, average Honduran workers were among the most poorly paid workers in the Western Hemisphere. By contrast, the banana companies paid relatively high wages as early as the 1970s. Banana workers continued at the top of the wage scale in the 1990s; however, in the 1980s, as banana production became less labor-intensive, the companies had decreased their investment and workforce. Consequently, fewer workers were employed as relatively well-paid agricultural wage earners with related benefits.
President Callejas responded to the severe poverty by implementing a specially financed Honduran Social Investment Fund (Fondo Hondureño de Inversión Social—FHIS) in 1990. The fund created public works programs such as road maintenance and provided United States surplus food to mothers and infants. Many Hondurans slipped through that fragile social safety net. As a continuing part of the social pact, and even more as the result of a fierce union-government battle, President Callejas announced in 1991 a 27.8 percent increase over a minimum wage that the government had earlier agreed upon. That increase was in addition to raises of 50 and 22 percent set, respectively, in January and September 1990. Despite those concessions, the minimum daily rate in 1991 was only $1.75 for workers employed by small agricultural enterprises and $3.15 for workers in the big exporting concerns; most workers did not earn the minimum wage.
Honduras has long been heavily unionized. In 1993 approximately 15 to 20 percent of the overall formal workforce was represented by some type of union, and about 40 percent of urban workers were union members. There were forty-eight strikes in the public sector alone in 1990, protesting the government's economic austerity program and layoffs of public-sector workers. More than 4,000 public-sector employees from the Ministry of Communications, Public Works, and Transport were fired in 1990. About 70,000 unionized workers remained in the faltering public sector at the beginning of 1991. However, the government largely made good its pledge to trim that number by 8,000 to 10,000 throughout 1991 as part of its austerity program.
In the private sector, 1990 saw 94 strikes in 64 firms, as workers fought for wage increases to combat inflation. A forty-two-day strike at the Tela Railroad Company (owned by Chiquita Brands International—formerly United Brands and United Fruit Company) was unsuccessful, however, and that defeat temporarily ended union efforts at direct confrontation.
In 1993 Honduras had three major labor confederations: the Confederation of Honduran Workers (Confederación de Trabajadores de Honduras—CTH), claiming a membership of about 160,000 workers; the General Workers Central (Central General de Trabajadores—CGT), claiming to represent 120,000 members; and the Unitary Confederation of Honduran Workers (Confederación Unitaria de Trabajadores de Honduras—CUTH), a new confederation formed in May 1992, with an estimated membership of about 30,000. The three confederations included numerous trade union federations, individual unions, and peasant organizations.
The CTH, the nation's largest trade confederation, was formed in 1964 by the nation's largest peasant organization, the National Association of Honduran Peasants (Asociación Nacional de Campesinos de Honduras—Anach), and by Honduran unions affiliated with the Inter-American Regional Organization of Workers (Organización Regional Interamericana de Trabajadores—ORIT), a hemispheric labor organization with close ties to the American Federation of Labor-Congress of Industrial Organizations (AFL-CIO).
In the early 1990s, the confederation had three major components: the 45,000-member Federation of Unions of National Workers of Honduras (Federación Sindical de Trabajadores Nacionales de Honduras—Fesitranh); the 22,000-member Central Federation of Honduran Free Trade Unions (Federación Central de Sindicatos Libres de Honduras); and the 2,200-member Federation of National Maritime Unions of Honduras (Federación de Sindicales Marítimas Nacionales de Honduras). In addition, Anach, claiming to represent between 60,000 and 80,000 members, was affiliated with Fesitranh.
Fesitranh was by far the country's most powerful labor federation, with most of its unions located in San Pedro Sula and the Puerto Cortés Free Zone. The unions of the United States-owned banana companies and the United States-owned petroleum refinery also were affiliated with Fesitranh. The CTH received support from foreign labor organizations, including ORIT, the American Institute for Free Labor Development (AIFLD), and Germany's Friedrich Ebert Foundation and was an affiliate of the International Confederation of Free Trade Unions (ICFTU).
Although it was not legally recognized until 1982, the CGT was originally formed in 1970 by the Christian Democrats and received external support from the World Confederation of Labour (WCL) and the Latin American Workers Central (Central Latinoamericana de Trabajadores—CLAT), a regional organization supported by Christian Democratic parties. In the late 1980s and early 1990s, however, the CGT leadership developed close ties to the National Party of Honduras (Partido Nacional de Honduras—PNH), and several leaders served in the Callejas government. Another national peasant organization, the National Union of Peasants (Unión Nacional de Campesinos—UNC), claiming a membership of 40,000, was affiliated with the CGT for many years and was a principal force within the confederation.
The CUTH was formed in May 1992 by two principal labor federations, the Unitary Federation of Honduran Workers (Federación Unitaria de Trabajadores de Honduras—FUTH) and the Independent Federation of Honduran Workers (Federación Independiente de Trabajadores de Honduras—FITH), as well as several smaller labor groups, all critical of the Callejas government's neoliberal economic reform program.
The Marxist FUTH, with an estimated 16,000 members in the early 1990s, was first organized in 1980 by three communist-influenced unions, but did not receive legal status until 1988. The federation had external ties with the World Federation of Trade Unions (WFTU), the Permanent Congress for Latin American Workers Trade Union Unity (Congreso Permanente de Unidad Sindical de Trabajadores de América Latina—CPUSTAL), and the Central American Committee of Trade Union Unity (Comité de Unidad Sindical de Centroamérica—CUSCA). Its affiliations included water utility, university, electricity company, brewery, and teacher unions, as well as several peasant organizations, including the National Central of Farm Workers (Central Nacional de Trabajadores del Campo—CNTC), formed in 1985 and active in land occupations in the early 1980s.
FUTH also became affiliated with a number of leftist popular organizations in a group known as the Coordinating Committee of Popular Organizations (Comité Coordinadora de las Organizaciones Populares—CCOP) that was formed in 1984. Dissident FUTH members formed the FITH, which was granted legal status in 1988. The FITH consisted of fourteen unions claiming about 13,000 members in the early 1990s.
The total land area of Honduras is 11.2 million hectares, of which a scant 1.7 million hectares (about 15 percent) are well suited for agriculture. Most land in Honduras is covered by mountains, giving rise to the country's nickname, "the Tibet of Central America." Nevertheless, the Honduran economy has always depended almost exclusively on agriculture, and in 1992 agriculture was still the largest sector of the economy, contributing 28 percent to the GDP.
Less than half of Honduras's cultivable land was planted with crops as recently as the mid-1980s. The rest was used for pastures or was forested and was owned by the government or the banana corporations. Potential for additional productivity from fallow land was questionable, however, because much of Honduras's soil lacks the thick volcanic ash found elsewhere in Central America. By 1987 about 750,000 hectares of Honduran land had been seriously eroded as a result of misuse by cattle ranchers and slash-and-burn squatters who planted unsuitable food crops.
The Honduran government and two banana companies—Chiquita Brands International and Dole Food Company—owned approximately 60 percent of Honduras's cultivable land in 1993. The banana companies acquired most of their landholdings in the early 20th century in return for building the railroads used to transport bananas from the interior to the coast. Much of their land remained unused because it lacked irrigation. Only about 14 percent of cultivated land was irrigated in 1987. Most land under cultivation in 1992 was planted in bananas, coffee, and specialized export crops such as melons and winter vegetables.
The agricultural sector's output showed little or no growth between 1970 and 1985. As a result of favorable weather and market conditions beginning in 1985, however, the agricultural sector grew at a rate of 2.6 percent annually, slightly above the average for Latin America during that period. Production of basic grains and coffee increased; the export price of bananas was high; and pork, poultry, and milk produced for the domestic market increased. Nontraditional fruits and vegetables also increased in value.
Honduran agricultural production overall has tended to be low because the amount of crop yielded by a given amount of land has been low. For example, Honduran corn yields historically have been only about half those of Costa Rica. Instead of using improved techniques to increase the productivity of the land, Honduran farmers have merely expanded the hectarage under cultivation to produce more crops—pushing their fields ever farther into the forests. Given the limited amount of good-quality agricultural land to begin with, that policy has resulted in continual deforestation and subsequent erosion. This reluctance to improve techniques, coupled with generally poor soil, a lack of credit, and poor infrastructure, has contributed to low production figures.
The Honduran government nominally began to address inequitable land ownership in the early 1960s. Those efforts at reform focused on organizing rural cooperatives. About 1,500 hectares of government-owned land were distributed by the National Agrarian Institute (Instituto Nacional Agrario—INA) beginning in 1960.
A military coup in 1963 resulted in an end to the land reform program. Lacking even modest government-directed land reforms, illegal squatting became the primary means for poor people to gain land throughout the early 1970s. These actions spurred the government to institute new agrarian reforms in 1972 and 1975. Although all lands planted in export crops were exempted from reform, about 120,000 hectares were, nevertheless, divided among 35,000 poor families.
By 1975 the pendulum had swung back, and agrarian reform was all but halted. From 1975 through the 1980s, illegal occupations of unused land increased once again. The need for land reform was addressed mostly by laws directed at granting titles to squatters and other landholders, permitting them to sell their land or to use it as collateral for loans.
Despite declarations by the Callejas government in 1989 of its intent to increasingly address social issues, including land tenure and other needs of small farmers, the early 1990s were jolted by increased conflicts between peasants and the Honduran security forces. Agricultural credit and government support increasingly favored export crop producers at the expense of producers of basic food crops.
The Honduran land reform process under President Callejas between 1989 and 1992 was directed primarily at large agricultural landowners. An agrarian pact, signed by landowners and peasant organizations in August 1990, remained underfunded and largely unimplemented. Furthermore, violence erupted as discharged members of the Honduran military forcibly tried to claim land that had already been awarded to the peasant organization Anach in 1976.
In May 1991, violence initiated by members of the Honduran military resulted in the deaths of eight farmers. To keep similar situations around the country from escalating into violence, the government promised to parcel out land belonging to the National Corporation for Investment (Corporación Nacional de Inversiones—Conadin). The government also pledged to return to peasants land that had been confiscated by the Honduran military in 1983.
An Agricultural Modernization Law, passed in 1992, accelerated land titling and altered the structure of land cooperatives formed in the 1960s. The law permitted cooperative members to break up their holdings into small personal plots that could be sold. As a result, some small banana producers suffering from economic hard times chose to sell their land to the giant banana producers. After an agreement was reached with the European Union (EU) to increase Honduras's banana quota to the EU, the large banana companies were avid for additional land for increased production to meet the anticipated new demand from Europe.
Throughout the 20th century, Honduras's agriculture has been dominated first by bananas and then to a lesser extent by coffee and sugar. In 1992, bananas and coffee together accounted for 50 percent of the value of Honduran exports and made the biggest contribution to the economy. Total banana sales were $287 million and total coffee sales amounted to $148 million. These figures are impressive yet reflect production losses suffered by banana producers and the withholding of coffee exports from the market in an effort to fight steep price declines.
Another major blow to Honduran agriculture came from Hurricane Mitch and its aftermath in 1998 and 1999. As of 2012 both industries are on the upswing. The banana industry is dominated by Chiquita and the Dole Food Company, two multinational corporations. The coffee industry, in contrast, offers better opportunities for small Honduran family farms to compete. Sugar has also been an important Honduran crop.
Chiquita Brands International and Dole Food Company now account for most Honduran banana production and exports. Honduras's traditional system of independent banana producers, who, as late as the 1980s, sold their crops to the international banana companies, was eroded in the 1990s. In the absence of policies designed to protect independent suppliers, economically strapped cooperatives began to sell land to the two large corporations.
Although Honduran banana production is dominated by multinational giants, such is not the case with coffee, which is grown by about 55,000 mostly small producers. Coffee production in Honduras has been high despite relatively low yields per producer because of the large numbers of producers. Honduras, in fact, consistently produced more than its international quota until growers began to withhold the crop in the 1980s in an attempt to stimulate higher prices. Despite the efforts of the growers, coffee prices plunged on the international market from a high of more than $2.25 per kilogram in the mid-1970s to less than $0.45 per kilogram in the early 1990s. As a result of the declining prices, coffee producers were becoming increasingly marginalized. With the aid of affordable loans from foreign investors, more and more Honduran coffee growers are learning to produce high-value organic coffee.
The outlook for the sugar industry, which had boomed during the 1980s when Honduran producers were allowed to fill Nicaragua's sugar quota to the United States, seemed bleak in 1993. Restoration of the sugar quota to Nicaraguan growers has been a major blow to Honduras's small independent producers, who had added most of Nicaragua's quota to their own during the United States embargo of Nicaragua. Higher costs for imported fertilizers because of the devaluation of the lempira add to the problem.
Honduran producers seek relief from a relatively low official price of 25 lempiras per kilogram of sugar by smuggling sugar across the borders to Nicaragua and El Salvador, where the support prices are higher. Sugar growers who can afford it have begun to diversify by growing pineapples and rice. Many independent sugar growers, like independent banana producers, have become indignant over the relatively high profits shown by refiners and exporters. Strikes by producers at harvest time in 1991 forced the closure of the Choluteca refinery for a short time but had little effect on the depressed long-term outlook for the industry.
While the total value of export merchandise fell in 1990 and 1991 and had still not recovered in 1993 to its 1989 level, the overall agricultural sector output has grown somewhat because of growth in the sale of winter vegetables and shrimp. Nontraditional vegetables and fruit produced $23.8 million in export revenue in 1990, a figure that was almost double the 1983 figure. Nontraditional agricultural crops represented 4.8 percent of the value of total exports in 1990, compared to 2.8 percent in 1983.
Some development experts argue that government protection of corn, bean, and rice production by small farmers is a futile effort in the long-term goal of poverty reduction. On the other hand, they see significant economic potential for nontraditional crops, if they are handled properly. Analysts also note, however, that Honduras is at a distinct disadvantage relative to its Central American neighbors because of its poor transportation system. Nontraditional exports require the ability to get fresh produce from the fields to distant markets rapidly.
In the early 1980s, the cattle industry appeared to have the potential to be an important part of the Honduran economy. The Honduran cattle sector, however, never developed to the extent that it did in much of the rest of Central America. Cattle production grew steadily until 1980–81 but then declined sharply when profits fell because of high production costs. The small Honduran meat packing industry declined at the same time, and several meat packing plants closed. As late as 1987, livestock composed 16 percent of the value-added agricultural sector but the industry continued to decline. By 1991–92, beef exports accounted for only 2.9 percent of the value of total exports.
Sales of refrigerated meat were the third or fourth highest source of export earnings in the mid-1980s, but like other Honduran agricultural products, beef yields were among the lowest in Central America. As world prices fell and production costs, exacerbated by drought, rose, there was less incentive to raise cattle. For a period of time, cattle farmers illegally smuggled beef cattle to Guatemala and other neighboring countries where prices were higher, but the Honduran cattle sector never became competitive internationally. The two large banana companies have also owned large cattle ranches where they raised prime beef, but these large companies had the flexibility to change crops as the market demanded.
Honduran dairy herds fared about the same as beef cattle, and Honduran milk yields were also among the lowest in Central America. The dairy industry was further handicapped by the difficulties of trying to transport milk over poor roads in a tropical country, as well as by stiff competition in the domestic market from subsidized foreign imports, mostly from the United States.
Honduras significantly developed its shrimp industry during the 1980s and in the Latin American market was second only to Ecuador in shrimp exports by 1991. In 1992 shrimp and lobster jumped to 12 percent of export earnings. Shrimp contributed $97 million in export sales to the economy in 1992—an increase of 33 percent over the previous year. The industry was dependent, however, on larvae imported from the United States to augment its unstable natural supply.
Technicians from Taiwan were contracted by large producers in 1991 to help develop laboratory larvae, but bitter feuds developed between independent shrimpers and the corporations. Local shrimpers charged that corporate methods were damaging the environment and destroying natural stock through destruction of the mangrove breeding swamps. Corporate shrimp farmers then began to move their operations farther inland, leaving local shrimpers to contend with diminished natural supplies on the mosquito-infested coast.
As in much of Central America, Honduras's once abundant forest resources have been badly squandered. In 1964 forests covered 6.8 million hectares, but by 1988 forested areas had declined to 5 million hectares. Honduras continued to lose about 3.6 percent of its remaining forests annually during the 1980s and early 1990s. The loss is attributable to several factors. Squatters have consistently used land suitable only for forests to grow low-yield food crops; large tracts have been cleared for cattle ranches; and the country has gravely mismanaged its timber resources, focusing far more effort on logging than on forestry management.
The government began an intensive forestry development program in 1974, supposedly intended to increase management of the sector and to prevent exploitation by foreign-owned firms. The Honduran Corporation for Forestry Development (Corporación Hondureña de Desarrollo Forestal—Cohdefor) was created in 1974, but it quickly developed into a corrupt monopoly for overseeing forest exports. Timber was mostly produced by private sawmills under contracts selectively granted by Cohdefor officials.
Ongoing wasteful practices and an unsustainable debt, which was contracted to build infrastructure, appear to have undercut most conservation efforts. The military-dominated governments contracted huge debt with the multilateral development agencies, then extracted timber to pay for it. Cohdefor generally granted licenses to private lumber companies with few demands for preservation, and it had little inclination or incentive to enforce the demands it did make.
With encouragement from the United States Agency for International Development (AID), the Honduran government began to decentralize Cohdefor beginning in 1985. Under the decentralization plan, regulatory responsibilities were transferred from the central government to mayors and other municipal officials on the assumption that local officials would provide better oversight. Despite decentralization and the sale of government assets, Cohdefor's remaining debt was $240 million in 1991. The government also assumed continued financial responsibility for the construction of a new airstrip in the area of timber extraction, upgrading facilities at Puerto Castilla and Puerto Lempira, and providing electricity at reduced prices to lumber concerns as part of the privatization package.
Major legislation was passed in 1992 to promote Honduran reforestation by making large tracts of state-owned land more accessible to private investors. The legislation also supplied subsidies for development of the sector. The same law provided for replanting mountainous regions of the country with pine to be used for fuel.
Mining, the mainstay of the Honduran economy in the late 19th century, declined dramatically in importance in the 20th century. The New York and Honduras Rosario Mining Company (NYHRMC) produced $60 million worth of gold and silver between 1882 and 1954 before discontinuing most of its operations.
Mining's contribution to the GDP steadily declined during the 1980s, to account for a 2 percent contribution in 1992. El Mochito mine in western Honduras, the largest mine in Central America, accounted for most mineral production. Ores containing gold, silver, lead, zinc, and cadmium were mined and exported to the United States and Europe for refining.
Honduras has for many years relied on fuelwood and biomass (mostly waste products from agricultural production) to supply its energy needs. The country has never been a producer of petroleum and depends on imported oil for much of its energy supply. Honduras spent about $143 million, or 13 percent of its total export earnings, on imported oil in 1991. The country's one small refinery at Puerto Cortés closed in 1993.
Various Honduran governments have done little to encourage oil exploration, although substantial oil deposits have long been suspected in the Río Sula valley and offshore along the Caribbean coast. An oil exploration consortium consisting of the Venezuelan state oil company (Petróleos de Venezuela, S.A., or PDVSA), Cambria Oil, and Texaco expressed interest in the construction of a refinery at Puerto Castilla in 1993, with production aimed at the local market.
Gasolineras Uno is a Honduran gas stations company that has expanded its presence to include stores in most of Central America and in South America.
Fuelwood and biomass have traditionally met about 67 percent of the country's total energy demand; petroleum, 29 percent; and electricity, 4 percent. In 1987 Honduran households consumed approximately 60 percent of total energy used, transportation and agriculture used about 26 percent, and industry used about 14 percent. Food processing consumed about 50 percent of industrial sector energy, followed by petroleum and chemical manufacturing.
Honduran electrification is low and uneven relative to other countries in Latin America. The World Bank estimates that only about 36 percent of the Honduran population had access to electricity (20 percent of the rural population) in 1987. The country's total capacity in 1992 was 575 megawatts (MW), with 2,000 megawatt-hours produced. A mammoth hydroelectric plant, the 292-MW project at El Cajón, began producing electricity in 1985 to help address the country's energy needs. The plant, however, soon became heavily indebted because of the government's electricity pricing policies (not charging public-sector institutions, for example) and because of the appointment of political cronies as top management officials. El Cajón also developed costly structural problems requiring extensive maintenance and repairs.
Officials estimated that the government's decision to provide free service to public-sector institutions contributed to a 23 percent increase in public-sector consumption in 1990. Experts estimated that additional electrical generation capacity would likely be needed to keep pace with demand. The Honduran Congress assumed authority for setting electric prices beginning in 1986 but then became reluctant to increase rates. Under pressure from the World Bank, it did agree to a 60 percent increase in 1990, with additional increases in 1991. To offset these increased rates for residential users, the National Congress initiated a system of direct subsidies that ran through 1992.
The country's manufacturing sector was small, contributing only 15 percent to the total GDP in 1992. Textile exports, primarily to the US, led the Honduran manufacturing sector. The maquiladora, or assembly industry, was a growth industry in the generally bleak economy. Asian-owned firms dominated the sector, with twenty-one South Korean-owned companies in export processing zones located in the Río Sula valley in 1991.
The maquiladoras employed approximately 16,000 workers in 1991; another nine firms opened in 1992. Job creation, in fact, is considered to be the primary contribution of the assembly operations to the domestic economy. The export textile manufacturing industry all but wiped out small Honduran manufacturers, and food processors, whose goods were historically aimed at the domestic market, were also adversely affected.
The small Honduran firms could not begin to compete with the assembly industry for labor because of the maquiladoras' relatively high wage scale of close to $4 per day. Small firms also found it increasingly difficult to meet the high cost of mostly imported inputs. Membership in the Honduran Association of Small and Medium Industry (Asociación Hondureña de Empresas Pequeñas y Medianas) declined by 70 percent by 1991, compared to pre-maquiladora days, foreshadowing the likely demise of most of the small shops.
Honduran domestic manufacturers also suffered from increased Central American competition resulting from a trade liberalization pact signed in May 1991 by Honduras, El Salvador, and Guatemala. Overall, the Honduran manufacturing sector has mimicked other sectors of the economy—it is mostly noncompetitive, even in a regional context, because of insufficient credit and the high cost of inputs. Relatively high interest rates and a complicated investment law have also inhibited the foreign-dominated manufacturing sector from taking off.
The government-sponsored Puerto Cortés Free Zone was opened in 1976. By 1990 an additional five free zones were in operation in Omoa, Coloma, Tela, La Ceiba, and Amapala. A series of privately run Export Processing Zones were also established in competition with the government-sponsored free zones. These privately run zones offered the same standard import-export incentives as the government zones. Most of the government and privately run zones were located along the Caribbean coast in a newly developing industrial belt.
Firms operating outside of the special "enterprise zones" (either privately run export-processing zones or government-sponsored free zones) enjoy many of the same benefits as those operating within the zones. The Honduran Temporary Import Law permits companies that export 100 percent of their production to countries outside the CACM to hold ten-year exemptions on corporate income taxes and duty-free import of industrial inputs.
Analysts continue to debate the actual benefits of the shift away from the import-substitution industrialization (ISI) policies of the 1960s and 1970s toward a new focus on free zones and assembly industries in the 1990s. Critics point to the apparent lack of commitment by foreign manufacturers to any one country site or to the creation of permanent infrastructure and employment. They question whether new employment will be enough to offset the loss of jobs in the more traditional manufacturing sector. A value of $195 million to the Honduran economy from assembly industries in 1991—when the value of clothing exports was greater than that of coffee—was a compelling argument in favor of the shift, however.
High interest rates, particularly for housing, continued to hurt the Honduran construction industry in 1993, but danger from high rates was partially offset by some public-sector investment. Privatization of formerly state-owned industries through debt swaps also negatively affected construction as prices for basic materials such as cement increased and credit tightened. A major devaluation of the lempira added to the already high cost of construction imports. Construction contributed 6.0 percent to the GDP in 1992.
The Honduran financial sector is small in comparison to the banking systems of its neighbors. After 1985, however, the sector began to grow rapidly. The average annual growth rate of value added to the economy from the financial sector for the 1980s was the second-highest in Latin America, averaging 4 percent. By 1985 Honduras had twenty-five financial institutions with 300 branch offices. Honduran commercial banks held 60 percent of the financial system's assets in 1985 and nearly 75 percent of all deposits. With the exception of the Armed Forces Social Security Institute, all commercial banks were privately owned, and most were owned by Honduran families. In 1985 there were two government-owned development banks in Honduras, one specializing in agricultural credit and the other providing financing to municipal governments.
At the behest of the International Monetary Fund (IMF) and World Bank, Honduras began a process of financial liberalization in 1990. The process began with the freeing of agricultural loan rates and was quickly followed by the freeing of loan rates in other sectors. Beginning in late 1991, Honduran banks were allowed to charge market rates for agricultural loans if they were using their own funds. By law, the banks had to report their rates to monetary authorities and could fix rates within two points of the announced rate.
In 1991 commercial banks pressured the government to reduce their 35 percent minimum reserve ratio. This rate remained standard until June 1993 when the minimum requirement was temporarily lifted to 42 percent. The rate was dropped to 36 percent three months later. The banks had excess reserves, and lending rates were in the area of 26 to 29 percent, with few borrowers. Prior to liberalization measures, the Central Bank of Honduras (Banco Central de Honduras) maintained interest rate controls, setting a 19 percent ceiling, with the market lending rate hovering around 26 percent in late 1991. With inflation hitting 33 percent in 1990, there was, in fact, a negative real interest rate, but this situation reversed in 1991 when rates were high relative to inflation. Rates of 35 to 43 percent in 1993 were well above the inflation rate of 13 to 14 percent. Bankers argued for further liberalization, including easing of controls in the housing and nonexport agricultural sectors.
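The relationship between nominal rates and inflation described above can be checked with a quick calculation. The sketch below applies the standard Fisher relation to the figures cited in the text (a 19 percent rate ceiling against 33 percent inflation in 1990, and roughly 35 percent rates against roughly 13 percent inflation in 1993); the numbers themselves come from the passage, not from an independent source.

```python
# Real interest rate sketch using the figures cited above (1990-93).
# The exact Fisher relation is used rather than the simple
# approximation (nominal minus inflation).

def real_rate(nominal: float, inflation: float) -> float:
    """Exact Fisher real rate: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# 1990: 19% rate ceiling vs. 33% inflation -> negative real rate
r_1990 = real_rate(0.19, 0.33)
assert r_1990 < 0  # lenders lost purchasing power

# 1993: ~35% lending rate vs. ~13% inflation -> strongly positive
r_1993 = real_rate(0.35, 0.13)
assert r_1993 > 0.15

print(f"1990 real rate: {r_1990:.1%}")  # about -10.5%
print(f"1993 real rate: {r_1993:.1%}")  # about +19.5%
```

This makes the text's point concrete: under the 1990 ceiling, real returns were roughly minus 10 percent, while 1993 rates implied real returns near plus 20 percent, well above inflation.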
A Honduran stock exchange was established in August 1990 with transactions confined to trading debt. Nine companies were registered with the exchange in 1991; by 1993 the number had grown to eighteen. It appears doubtful, however, that the market will develop fully, given the reluctance of family-held firms to open their books to public scrutiny.
Foreign tourists are attracted to Honduras by the Mayan ruins in Copán and coral reef skin-diving off the Islas de la Bahía (Bay Islands). Poor infrastructure, however, has discouraged the development of substantial international tourism. Despite these problems, the number of visitors arriving in Honduras rose from fewer than 200,000 in 1987 to almost 250,000 in 1989. Small ecotourism projects, in particular, are considered to have significant potential.
In the early 1990s, the United States was by far Honduras's leading trading partner, with Japan a distant second. United States exports to Honduras in 1992 were valued at US$533 million, about 54 percent of the country's total imports of $983 million. Most of the rest of Honduras's imports come from its Central American neighbors. Despite its status as a beneficiary of both the Caribbean Basin Initiative (CBI) and the Generalized System of Preferences (GSP), both of which confer duty-free status on Honduran goods entering the United States, Honduras has run a long-standing trade deficit with the United States.
Honduras's total exports of goods and services in 1992 were $843 million, of which about 52 percent went to the United States. As of 2017, Honduran exports totaled US$8.675 billion, with 34.5 percent going to the United States.
As with most Latin American countries, Honduras's economy is closely tied to the US. The US is Honduras's primary trading partner and the source of about two-thirds of the country's foreign direct investment.
US multinationals Dole Food Company and Chiquita control a large portion of Honduras's agricultural exports. Presently, Honduras participates alongside the Rainforest Alliance for the exporting of agricultural goods to the US.
Hondurans working in the US send more than $2 billion each year to their families in Honduras; these remittances account for 28.2% of Honduras's GDP (2007 data).
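The scale of these remittances relative to the economy can be back-computed from the two figures given: if roughly $2 billion per year equals 28.2 percent of GDP, the implied 2007 GDP is about $7 billion. The sketch below shows that arithmetic; it is a back-of-envelope check on the cited numbers, not an independent GDP estimate.

```python
# Back-of-envelope check on the remittance figures cited above:
# ">$2 billion each year" taken as a lower bound, at 28.2% of GDP.

remittances_usd = 2.0e9  # annual remittances (lower bound, 2007)
gdp_share = 0.282        # remittances as a share of GDP (2007)

implied_gdp = remittances_usd / gdp_share
print(f"Implied GDP: ${implied_gdp / 1e9:.1f} billion")  # about $7.1 billion
```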
With the exception of relatively recent, Asian-dominated investment in assembly firms along Honduras's northern coast, the country remains heavily dependent on United States-based multinational corporations for most of its investment needs in the early 1990s. Overall investment as a percentage of GDP declined dramatically during the 1980s, from about 25 percent in 1980 to a meager 15 percent in 1990. Dole Food Company and Chiquita Brands International together have invested heavily in Honduran industries as diverse as breweries and plastics, cement, soap, cans, and shoes.
As Honduras enters the 1990s, it faces challenging economic problems. The solutions relied on in the past—traditional export crops, the maquiladora assembly industry, and the 1980s' development schemes—appear unlikely to provide enough new jobs for a rapidly growing population. The major economic challenge for Honduras over the next decade will be to find dependable sources of sustainable economic growth.
The slowed rate of growth in 2008 (4%, vs. 6.3% in 2007) reflected the general downturn in the world economy that year. The "Banco Central de Honduras" (central bank) named the debilitation of global demand, and loss of dynamism in final consumer demand, as important factors in the slowing of Honduras's economic growth in 2008.
The table here shows the slowing of growth in 2008 versus 2007 in various economies.
The above graph reflects Honduras's performance in the World Development Indicators from 2008 to 2013. The information was extracted from the World Bank Data webpage.
Telecommunications in Honduras
Telecommunications in Honduras began in 1876, when the first telegraph was introduced, and developed further with the telephone in 1891, radio in 1928, television in 1959, the Internet in the early 1990s, and cellphones in 1996.
The first radio station in Honduras was Tropical Radio, which started operations in 1928.
The first TV station in Honduras was Canal 5, which started operations in 1959.
Television in Honduras includes both local and foreign channels, normally distributed by cable.
The Comisión Nacional de Telecomunicaciones (CONATEL) adopted the ATSC standard for digital terrestrial television broadcasting in January 2007. The first digital high-definition TV station, CampusTv, was founded by Universidad de San Pedro Sula.
Hondutel, created in 1976, is the state-owned telecommunications company in Honduras.
The first cellular company in Honduras, Celtel (now Tigo), started operations in 1996. Megatel (now Claro) started in 2001, Honducel in 2007, and Digicel (now América Móvil) in 2008.
The Internet has been used in Honduras since 1990 and is common in all the major centers of population. Broadband Internet access is also common. All major media have an Internet presence.
Hondutel provides dial-up Internet access.
There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms without judicial oversight. The constitution and laws provide for freedom of speech and press, and the government generally respects these rights in practice. The constitution and law generally prohibit arbitrary interference with privacy, family, home, or correspondence.
Four journalists were killed during 2012, compared with five in 2011. Reports of harassment of journalists and social communicators (persons not employed as journalists, but who serve as bloggers or conduct public outreach for NGOs) continued to rise. There also were multiple reports of intimidation of members of the media and their families. Government officials at all levels denounced violence and threats of violence against members of the media and social communicators. During 2012 the efforts of the Special Victims Unit (SVU) created in January 2011 to address violent crimes against vulnerable communities, including journalists, led to seven arrests and one prosecution in cases involving killings of journalists and social communicators. Members of the media and NGOs stated that the press "self-censored" due to fear of reprisal from organized crime.